Commit graph

3881 commits

Author SHA1 Message Date
Till JS
a8cce79e4c fix(monitoring): comment-out mana-ai metrics scrape after Phase 2f-3 move
mana-ai's /metrics endpoint is no longer exposed on Mini's
192.168.178.131:3067 (service moved to GPU-Box, no public /metrics
tunnel since the endpoint is internal). The blackbox-api job
already probes mana-ai.mana.how/health for liveness, which gives
us up/down without needing the metrics scrape.

Status page is now 58/58 UP after VictoriaMetrics rolled past the
stale :3067 samples.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 17:06:04 +02:00
Till JS
dcd16067b5 feat(cards-server): Phase γ — public reads + browse + search + engagement
Marketplace discovery surface lights up. Anonymous browsers can
explore + search; signed-in users get the same surface plus star/
follow mutations.

  - middleware/optional-auth.ts: opportunistic JWT — sets the context
    user (read via c.get('user')) if a token validates, otherwise
    leaves it undefined. Read paths use this; mutating routes call
    requireUser() inline.
  - services/explore.ts: browse() with q (ilike on title/description),
    tag, language, author-slug, sort (recent/popular/trending), pagination.
    explore() composes featured + trending for the landing.
    tagTree()/curatedTagsOnly() round it out. Subqueries for star/
    subscriber counts avoid N+1.
  - services/engagement.ts: star/unstar deck, follow/unfollow author.
    Idempotent via ON CONFLICT DO NOTHING. Self-follow rejected.
  - routes/explore.ts mounts /v1/explore, /v1/decks (browse list),
    /v1/tags. routes/engagement.ts mounts /v1/decks/:slug/star
    (POST/DELETE) + /v1/authors/:slug/follow (POST/DELETE).
  - index.ts replaces the previous strict-jwt-on-everything middleware
    with optionalAuth on all of /v1, then individual routers gate
    their write paths via local requireUser(). Hono context type
    relaxes from `user: AuthUser` to `user?: AuthUser` accordingly.
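A minimal, framework-agnostic sketch of the opportunistic-auth pattern described above. The `verifyJwt` stand-in and the `Ctx` shape are assumptions for illustration — the real middleware validates against a JWKS endpoint and lives in shared-hono context types.

```typescript
type AuthUser = { id: string; email: string };

// Stand-in verifier; the real service validates tokens against a JWKS endpoint.
function verifyJwt(token: string): AuthUser | undefined {
  return token === "valid-token" ? { id: "u1", email: "till@example.com" } : undefined;
}

type Ctx = { headers: Record<string, string>; user?: AuthUser };

// Opportunistic auth: set user if a token validates, otherwise leave undefined.
function optionalAuth(ctx: Ctx): Ctx {
  const header = ctx.headers["authorization"] ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  if (token) ctx.user = verifyJwt(token);
  return ctx;
}

// Mutating routes gate inline: throw (translated to a 401) when no user was set.
function requireUser(ctx: Ctx): AuthUser {
  if (!ctx.user) throw new Error("401 Unauthorized");
  return ctx.user;
}
```

The point of the split: read paths never pay an auth failure, while write paths stay strict without a second global middleware.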

Validated: tsc --noEmit clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 17:01:32 +02:00
Till JS
4c044e849d fix(monitoring): mana-ai probe now uses public mana-ai.mana.how/health
After Phase 2f-3 mana-ai lives on the GPU-Box, so the
blackbox-internal docker-DNS probe (http://mana-ai:3066/health) is
gone — that target sits in a Docker network the blackbox-exporter
can't reach across the LAN. Move the probe into blackbox-api against
the public hostname; it gives the same up/down signal and also
exercises the Cloudflare-tunnel hop.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:55:39 +02:00
Till JS
ec8abfe6b8 feat(cards-web): Phase β.2 — author onboarding + publish flow
End-to-end "publish my local deck to the marketplace" surface in
the Cards standalone app. Hooks into cards-server (Phase β) so a
user can take a deck they've been editing locally and put it under
cards.mana.how/d/<slug> with one modal.

Pipeline:
  • lib/api/cards-api.ts — typed fetch wrapper around the cards-server
    /v1 surface. Reads the JWT from authStore, never from storage
    directly. CardsApiError carries `{status, message, details}`
    so UI can branch on 401/409/etc.
  • lib/stores/author.svelte.ts — lazy-loaded author state. Caches
    `cardsApi.authors.me()` on first access; resets cleanly on logout.
  • lib/util/slug.ts — best-effort slugify mirror of the server-side
    validator (server still has final say).
  • lib/components/PublishDeckModal.svelte — three-stage flow:
    become-author (slug + displayName + pseudonym), deck-meta (title,
    description, language, license picker, semver, changelog), then
    publishing → done with moderation-flag surface if AI mod returned
    'flag'. Keys off authorStore.isAuthor to skip stage 1 for
    returning authors.
  • routes/decks/[id]/+page.svelte gets a "🌍 Veröffentlichen" button
    next to "Lernen". Disabled until the deck has cards.
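A hedged sketch of the `CardsApiError` shape described above, showing how UI code can branch on status instead of string-matching messages. Everything beyond `{status, message, details}` is an assumption.

```typescript
class CardsApiError extends Error {
  constructor(
    public status: number,
    message: string,
    public details?: unknown,
  ) {
    super(message);
    this.name = "CardsApiError";
  }
}

// UI branches on the status code, not the message text.
function describeFailure(err: unknown): string {
  if (err instanceof CardsApiError) {
    if (err.status === 401) return "Please sign in again.";
    if (err.status === 409) return "That slug is already taken.";
    return `Request failed (${err.status}).`;
  }
  return "Unexpected error.";
}
```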

Wiring:
  • hooks.server.ts injects __PUBLIC_CARDS_API_URL__ on every SSR'd
    page so the client knows where cards-server lives.
  • compose adds PUBLIC_CARDS_API_URL_CLIENT=https://cards-api.mana.how
    to the cards-web container.

Validated: svelte-check 0/0, vite build green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:53:17 +02:00
Till JS
f47edc14af feat(gpu-box): mana-ai (AI Mission Runner) migrated, mana-ai.mana.how → GPU tunnel
Phase 2f-3 (final of the 2f-trio). The background tick-loop runner is
the most coupled of the three: it queries mana-api, mana-llm, and
mana-research, and writes through to the mana_sync DB. Wired up via
cross-LAN host-IPs to those Mini-side services + the existing RSA
key-pair for Mission-Grant decryption (MANA_AI_PRIVATE_KEY_PEM moved
into /srv/mana/.env on the GPU-Box; the matching MANA_AI_PUBLIC_KEY_PEM
stays on mana-auth's env-set as before).

Bonus rationale: AI Mission Runner now sits in the same compose
network as the GPU-Box's gpu-llm/gpu-ollama tasks, so future
"agent talks to local LLM" paths skip the Cloudflare round-trip.

Tunnel: mana-ai.mana.how repointed at the mana-gpu-server tunnel
(config v28). The Mini-side ingress was removed in the same step.
OTEL_EXPORTER_OTLP_ENDPOINT cleared since Tempo was retired in 2c.

Mini-side: container stopped and removed from docker-compose.macmini.yml.
The running-container count went from 39 to 42 because unrelated services
reappeared on the latest CD pull (cards-server, memoro-web), but the
actual mana-ai service is gone — the move itself is complete.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:40:57 +02:00
Till JS
be155ca737 fix(cards-server): error classes extend Hono HTTPException
Shared-hono's serviceErrorHandler only translates HTTPException
instances; anything else degrades to 500. Our custom Error subclasses
were silently bypassing the translation layer, so a missing JWT came
back as `500 Internal server error` instead of the expected `401
Unauthorized`. Confirmed in prod logs after the Phase-β deploy.

Switching the error hierarchy to extend HTTPException directly. The
JSON body now carries the right status code + the existing `cause`
object surfaces our `code` discriminator + zod-style `details` for
BadRequest. No call-site changes needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:40:20 +02:00
Till JS
044d948155 feat(cards-server): Phase β — author profiles + deck init/publish
First user-facing surface on cards-server. Three endpoint groups:

Authors (/v1/authors):
  - POST /me — upsert author profile (slug, displayName, bio,
    avatarUrl, pseudonym). Slug validated for length, charset, and
    against a small reserved-words list (admin, api, me, ...).
  - GET /me — read own profile (returns null if not yet an author).
  - GET /:slug — public profile (omits banned-reason, etc.)
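An illustrative `validateSlug` along the lines described. The exact length bounds, charset, and reserved list are assumptions — the commit only names length, charset, and a small reserved-words list.

```typescript
// Hypothetical bounds and reserved set; the real list lives server-side.
const RESERVED = new Set(["admin", "api", "me"]);

function validateSlug(slug: string): boolean {
  if (slug.length < 3 || slug.length > 40) return false;
  // lowercase alphanumerics and hyphens; no leading/trailing hyphen
  if (!/^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$/.test(slug)) return false;
  if (RESERVED.has(slug)) return false;
  return true;
}
```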

Decks (/v1/decks):
  - POST / — claim a slug + create the metadata-only deck row.
    License defaults to Cards-Personal-Use-1.0; paid decks
    (priceCredits > 0) must use Cards-Pro-Only-1.0 (CHECK constraint
    + service-side guard).
  - GET /:slug — deck + latestVersion.
  - POST /:slug/publish — version semver enforced strictly increasing,
    AI-mod first-pass via mana-llm (block → 403; flag → publish + log
    for human review; pass → publish silently). Per-card and per-
    version SHA-256 content hashes computed; cards persisted; deck's
    latest_version_id flipped atomically in a single transaction.
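The strictly-increasing semver guard can be sketched like this. Parsing is simplified to plain MAJOR.MINOR.PATCH with no pre-release tags, which is an assumption about the enforced subset.

```typescript
function parseSemver(v: string): [number, number, number] {
  const m = /^(\d+)\.(\d+)\.(\d+)$/.exec(v);
  if (!m) throw new Error(`invalid semver: ${v}`);
  return [Number(m[1]), Number(m[2]), Number(m[3])];
}

// Strictly increasing: equal versions are rejected, not just lower ones.
function isStrictlyGreater(next: string, prev: string): boolean {
  const a = parseSemver(next);
  const b = parseSemver(prev);
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] > b[i];
  }
  return false;
}
```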

Helpers:
  - lib/slug.ts — slugify (best-effort) + validateSlug (strict).
  - lib/hash.ts — canonical SHA-256 over (type, fields) for cards
    and (sorted, ord-stable) for versions.
  - lib/ai-moderation.ts — mana-llm /v1/chat/completions wrapper
    with system prompt that forces JSON output. Fail-open: if
    mana-llm is down or returns malformed JSON, the verdict is
    'flag' so a human reviewer catches it. Better slow than silent.
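A sketch of a canonical per-card content hash in the spirit of lib/hash.ts: sort the field keys for a stable serialization, then SHA-256 the JSON. The actual canonicalization in the service may differ.

```typescript
import { createHash } from "node:crypto";

// Canonical hash over (type, fields): key order must not affect the digest.
function cardContentHash(type: string, fields: Record<string, string>): string {
  const sortedFields = Object.fromEntries(
    Object.entries(fields).sort(([a], [b]) => a.localeCompare(b)),
  );
  const canonical = JSON.stringify({ type, fields: sortedFields });
  return createHash("sha256").update(canonical).digest("hex");
}
```

Stability under key reordering is the property that makes content hashes usable for smart-merge and for binding discussion threads across version bumps.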

Index-mounting of /v1/authors and /v1/decks is gated behind jwtAuth.
Anonymous public reads (Phase γ optionalAuth middleware) come later.

Validated: tsc --noEmit clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:36:34 +02:00
Till JS
b03165ce97 feat(gpu-box): news-ingester migrated, Mini compose drops the service block
Phase 2f-2. The RSS/Atom ingester (15-min tick → mana_platform.news.curated_articles)
moved to the GPU-Box. The service has zero hot-path coupling; all writes
go cross-LAN to Mini-postgres, analogous to the Glitchtip pattern.

Two implementation gotchas worth recording:

1. Cross-arch image transfer doesn't work. Saved news-ingester:local
   from the Mini (Apple M4 → linux/arm64), tried `docker load` on the
   GPU-Box (linux/amd64), and got 'exec format error' on every restart.
   A native build on the GPU-Box was the only path forward.

2. The original services/news-ingester/Dockerfile assumes
   pnpm-workspace state from prior builds (no COPY for packages/shared-rss
   in the build context). Fresh builds error with
   ERR_PNPM_WORKSPACE_PKG_NOT_FOUND.

Workaround: a GPU-Box-specific Dockerfile at infrastructure/news-ingester/
that vendors shared-rss into the build via a workspace:* → file:ref
sed swap. Build context is the repo root (sparse-clone provides
packages/shared-rss + services/news-ingester). The Mini-side Dockerfile
stays untouched so existing CD builds aren't disturbed.

Mini-side: container stopped + removed from docker-compose.macmini.yml,
running container count 44 → 39.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:27:45 +02:00
Till JS
71ec5e7123 feat(cards-server): Phase α.4 — Dockerfile + compose + tunnel route
Wires cards-server into the Mac-mini stack so we can deploy alongside
the rest of the Mana services.

  - Dockerfile mirrors the mana-credits 2-stage pattern (node+pnpm
    installer → bun runtime), exposes :3072, includes a /health
    healthcheck.
  - docker-compose.macmini.yml: new cards-server block right after
    mana-credits — depends on postgres + mana-auth, 128m mem, all the
    env knobs from the Phase-α config (author payout BPS, community-
    verified thresholds, sibling-service URLs).
  - cloudflared-config.yml: cards-api.mana.how → :3072. Distinct from
    cards.mana.how (the user-facing PWA) so the API surface is clearly
    separated.
  - sso-origins.ts: cards-api.mana.how added to PRODUCTION_TRUSTED_ORIGINS.
  - mana-auth CORS_ORIGINS in compose: cards-api.mana.how added.
    Also restored whopxl.mana.how, which had drifted out —
    sso-config.spec.ts had been flagging it, and the missing entry
    surfaced when I added cards-api. The spec is back to 8/8 green.

Deploy plan (next steps, not in this commit):
  1. ./scripts/mac-mini/build-app.sh cards-server
  2. docker exec mana-app-cards-server bun run db:push  (creates the
     `cards` schema + 16 tables in mana_platform)
  3. ./scripts/mac-mini/sync-tunnel-config.sh
  4. Smoke: curl https://cards-api.mana.how/health → 200

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:22:48 +02:00
Till JS
a7b62ea8ae feat(cards-server): Phase α — service skeleton + 16-table schema
Lays the foundation for the Cards marketplace + community backend per
apps/cards/docs/MARKETPLACE_PLAN.md. Phase α scope: skeleton, schema,
JWT auth wiring, health endpoint. Routes follow in Phase β.

Stack: Hono + Bun + Drizzle + Postgres + jose-JWKS — mirrors the
mana-credits service template.

Schema: pgSchema('cards') inside mana_platform, 16 tables across six
groups in src/db/schema/:
  - authors.ts: authors, author_follows
  - decks.ts: decks, deck_versions, deck_cards (with cards_card_type
    enum mirroring @mana/cards-core; per-card content_hash for
    smart-merge; CHECK constraint that paid decks must use
    Cards-Pro-Only-1.0 license)
  - tags.ts: tag_definitions (hierarchical), deck_tags
  - engagement.ts: deck_stars, deck_subscriptions, deck_forks
  - discussions.ts: deck_pull_requests (with diff jsonb +
    pr_status enum), card_discussions (bound to card_content_hash
    so threads survive version bumps)
  - moderation.ts: deck_reports (with category/status enums),
    ai_moderation_log
  - credits.ts: deck_purchases (snapshot price + author/mana split),
    author_payouts

Phase λ's co_learn_sessions intentionally not yet here.

Service plumbing:
  - src/index.ts: Hono entry on :3072, /health unauth, /v1 stub
  - src/config.ts: env loader with author-payout BPS knobs
    (defaults 80/20 standard, 90/10 verified-mana) and
    community-verified thresholds
  - src/middleware/jwt-auth.ts + service-auth.ts: JWKS validation
    + X-Service-Key check (mirrors mana-credits)
  - src/lib/errors.ts: HttpError + named subclasses
  - drizzle.config.ts pointing at mana_platform with schemaFilter:cards
  - drizzle/0000_*.sql committed so other devs / prod migration path
    has a reproducible starting point

Validated: tsc --noEmit clean, drizzle-kit generate produces
233-line SQL with all 16 tables + 5 enums + indexes.

Next (Phase α.4): Dockerfile + docker-compose + cloudflare tunnel
route cards-api.mana.how → :3072.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:01:08 +02:00
Till JS
33bc654238 chore(infra): drop npm.mana.how from Mini tunnel — verdaccio moved to GPU-Box
Phase 2f-1 cutover. npm.mana.how DNS now CNAMEs to mana-gpu-server
tunnel (config v27), Mini-side route entry no longer needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:59:40 +02:00
Till JS
6e40546119 feat(gpu-box): add verdaccio service + bundle config in repo
Phase 2f-1: verdaccio (npm.mana.how) was the heaviest non-hot-path
service still left on the Mini after Phase 2 — a read-mostly registry
that CI and local pnpm installs hit, and latency-insensitive. Moved into
infrastructure/docker-compose.gpu-box.yml. The storage volume's content
(@mana/* packages + htpasswd) was migrated via tar stream.
(@mana/* packages + htpasswd) migrated via tar-stream.

Config came from the mana-platform repo's
infrastructure/verdaccio/config.yaml. Copied into mana-monorepo so the
GPU-Box's sparse-clone (already pulling scripts/ +
packages/shared-branding) can also bind-mount it without needing a
second repo on the box.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:54:37 +02:00
Till JS
0686300243 docs(cards): marketplace plan — full vision with mana-credits + dual verification
Full-vision plan for the Cards deck marketplace, based on the
competitor analysis from the same day. Deliberately not MVP-driven —
it assumes unlimited resources and targets the optimal solution.

Key decisions from the user briefing:
  - Versioned decks + live updates + pull requests = full vision
  - mana-credits is central: users buy, authors earn (80/20 cut,
    90/10 for verified-mana)
  - Two-track verification: Mana-Verein curation AND a community
    threshold, with distinct badges (🛡️ and )
  - Co-learn sessions explicitly deferred to Phase λ
  - Mobile apps explicitly deferred to Phase μ

Contents:
  - 17-table schema (authors, decks, versions, cards,
    subscriptions, forks, stars, PRs, discussions, reports,
    tags, purchases, payouts, AI-mod log)
  - mana-credits integration end-to-end (2-phase reservation,
    author payout, refund workflow, buyer protection)
  - Service architecture: cards-server (new), cards-search (new,
    later), extensions to mana-llm/mana-credits/mana-notify/
    mana-media
  - 7 API endpoint areas with concrete routes
  - 9 phases (α–ι) plus 4 explicitly-later phases
  - Cold start: Verein seed + Anki top-100 + influencer outreach
  - Risk matrix with mitigations
  - A "what we will NOT do" section (star ratings, Reddit-style
    voting, Anki-bashing, mandatory real names, > 30% cut)
  - 7 concrete differentiation levers against the 17 competitors

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:48:45 +02:00
Till JS
cf5349cdd2 feat(gpu-box): adopt photon container into compose with healthcheck
photon was the last 'health: none' container on the GPU-Box —
pre-existing user setup created via raw docker-run before Phase 2.
Adopted into infrastructure/docker-compose.gpu-box.yml with the
exact same image / volumes / cmd / port mapping so the OSM index in
/opt/photon-data survives untouched, plus a curl-based healthcheck
against /api?q=Berlin&limit=1 (Photon has no /health endpoint —
this is the canonical liveness probe).

start_period 120s gives Java the warmup window without false-flagging.
Recreate took ~10s including healthy state, no perceptible downtime
on photon.mana.how.

After this, all 20 GPU-Box containers report healthy. Mac Mini still
has 2 long-standing 'unhealthy' (mana-verdaccio's wget probe is
broken but npm.mana.how serves 200; mana-mail/Stalwart in bootstrap
mode, never configured) — both pre-existing, neither user-impacting.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:42:54 +02:00
Till JS
384be93274 feat(gpu-box): healthchecks for glitchtip-worker, gpu-promtail, status-gen
Three containers were running with no healthcheck — Docker showed them
as 'none', so an actual crash inside the container would only surface
once the process itself exited (and got restarted by restart-policy).
Added container-internal probes that don't depend on tools the image
doesn't ship:

- glitchtip-worker: bash + /dev/tcp/glitchtip-redis/6379 — confirms the
  Celery broker is reachable. Bare-metal probe, no extra deps.
- gpu-promtail: bash + /dev/tcp/loki/3100 — confirms the loki sink the
  worker is shipping to is reachable. Replaces the wget-based check
  that errored 'executable file not found' on every tick.
- status-page-gen: stat + date — confirms /output/status.json was
  rewritten in the last 3 min (script writes it every 60s). Catches
  the case where the apk-install loop wedges or the generator
  silently dies.

CMD-SHELL runs /bin/sh, which is dash on Debian-based images, and dash
doesn't support /dev/tcp — so the two TCP probes use the CMD form with
an explicit bash invocation.

photon stays without a healthcheck — pre-existing user container, not
in this compose file. Adding it would require a recreate which loses
the warm OSM cache.

After rollout: 17/20 GPU-Box containers healthy + 3 'none' (status-nginx,
glitchtip-redis, gpu-node-exporter — all standard upstream images
without built-in /health endpoints; their service is checked indirectly
via downstream consumers' healthchecks).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:29:04 +02:00
Till JS
8a90cd296c docs(cards): competitor analysis, May 2026
Research on 17 spaced-repetition competitors (Anki, Quizlet,
RemNote, Mochi, Brainscape, Memrise, SuperMemo, AnkiPro/Noji,
AnkiApp/AlgoApp, Quizgecko, Knowt, Wisdolia, Mnemosyne, Traverse,
Cerego, NeuraCache, AnkiHub) incl. USP, license, cost, user
sentiment, company, and a threat ranking.

Key takeaways for the Cards strategy:
  - Free Markdown+FSRS+cloud sync is an objective market gap
    (Mochi: $5/mo sync, RemNote: $8/mo, Brainscape: ~$20/mo).
  - AI card generation is table stakes, no longer a USP — Quizlet,
    Quizgecko, Knowt, RemNote, and Wisdolia all have it.
  - Quizlet is vulnerable at 1.4/5 on Trustpilot (paywall walls);
    the Quizlet-refugee market is open.
  - AnkiPro (Noji) and AnkiApp (AlgoApp) burned their reputations
    with "Anki" brand sniping — a lesson for our own brand
    hygiene.

Recommended levers: (1) play up free sync explicitly, (2) Anki
migration as a first-class feature (a dedicated from-anki landing
page), (3) local-first PWA as the tech identity.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:28:02 +02:00
Till JS
cd888cd54a fix(gpu-box): drop gpu-promtail healthcheck — image has no curl/wget/nc
Promtail v3.0.0 ships a minimal alpine-ish image with only the
promtail binary. The original Mini compose's wget-based healthcheck
errored out with 'executable file not found' on every tick, marking
the container as 'unhealthy' for hours despite Loki actively
receiving logs from it. Restart-policy unless-stopped catches real
crashes anyway, so the healthcheck adds noise without value.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:13:47 +02:00
Till JS
ceed8ccd64 feat(mana-sync): per-app billing exemption — Cards bypasses sync gate
mana-sync's billing middleware short-circuited every push/pull with
402 for users without a sync subscription. Cards promises free Sync
in its Phase-1 GUIDELINES, so it shouldn't gate its own users on a
mana-credits subscription it never sells.

Implementation:
  • billing.NewChecker now takes an exemptApps slice. The middleware
    extracts {appId} from the URL path and short-circuits before the
    user lookup if the app is in the set.
  • Configurable via the BILLING_EXEMPT_APPS env var (comma-separated).
  • Set BILLING_EXEMPT_APPS=cards on the mana-sync container so the
    cards.mana.how Sync loop stops 402-ing.
  • Tests cover the exemption + the empty/whitespace edge cases. All
    other apps keep the original behaviour (fail-open if mana-credits
    is unreachable, 402 if it explicitly says inactive).
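The exemption logic itself is Go (billing.NewChecker); this TypeScript sketch just illustrates the BILLING_EXEMPT_APPS parsing with the empty/whitespace edge cases and the short-circuit check. Names here are illustrative, not the Go API.

```typescript
// Parse a comma-separated env var, dropping empty and whitespace-only entries.
function parseExemptApps(env: string | undefined): Set<string> {
  return new Set(
    (env ?? "")
      .split(",")
      .map((s) => s.trim())
      .filter((s) => s.length > 0),
  );
}

// Checked before any user lookup: exempt apps never hit mana-credits.
function isBillingExempt(appId: string, exempt: Set<string>): boolean {
  return exempt.has(appId);
}
```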

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:01:54 +02:00
Till JS
6f1b0329f0 docs(infra): document photon.mana.how + cross-LAN workaround pattern
Phase 2c had 3 cross-LAN-routing pain points; Phase 2e + the photon
fix solved 2 of them, so the doc was misleading. Refactored the
"Bekannte Limits" block in PLAN_OPTION_C.md into a proper
cross-LAN-pattern table that lists each known case + its current
status. Phase-2c-original gpu-* and Mini-Promtail entries kept as
the remaining open items, with the same Cloudflare-Tunnel-as-LAN-bridge
workaround spelled out (Loki-HTTP-Push via loki.mana.how would be the
next obvious move).

Plus infrastructure/README.md now lists every active public-hostname
the mana-gpu-server tunnel exposes (v26).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:45:34 +02:00
Till JS
c1423d2f72 fix(cards-web): missing static assets — sql-wasm-browser.wasm + PWA icons
Two 404s reported from prod:

1) sql.js's production-bundled loader requested /sql-wasm-browser.wasm
   but only sql-wasm.wasm was in static/. The browser bundle is its own
   file; copy both so the loader hits whichever variant the runtime
   picks. Without this the .apkg import dies before reading SQLite.

2) shared-pwa's manifest references pwa-192x192.png, pwa-512x512.png
   and apple-touch-icon.png. None existed → Chrome's manifest-icon
   validator failed and there was no usable icon for A2HS. Generated
   minimal indigo-stacked-cards PNGs at the three required sizes.

Note: the sync 402 reports in the same console output are a separate,
intentional behaviour — mana-sync's billing middleware blocks pull/
push when the user has no active sync subscription. No code change
needed; handled at the mana-credits layer.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:42:47 +02:00
Till JS
1e8d18ac8d fix(monitoring): photon via Cloudflare-Tunnel, drop dead whopxl
Two cleanups against the status-page DOWN list:

photon-self (photon.mana.how route):
  mana-geocoding's /health/photon-self pings the photon backend, which
  lives as a Docker container on the GPU-Box (port 2322). PHOTON_SELF_API_URL
  was http://192.168.178.11:2322 — Mini-host can hit that fine but
  Mini-Docker-containers can't (Colima-NAT-quirk we keep running into).
  Routed photon through the mana-gpu-server tunnel (config v26) and
  flipped the env var to https://photon.mana.how. Probe goes UP, geocoding
  for sensitive queries (privacy:'local' provider tier) actually works
  now too — was effectively orphaned before.

whopxl removed everywhere it still lingered:
  Container hasn't existed on the Mini in months (no compose service,
  no source dir under apps/, no listener on :5100 — only the dead
  cloudflared route + a stale CORS_ORIGINS entry on mana-auth). Cleaned
  cloudflared-config.yml, prometheus.yml blackbox-web target, and the
  mana-auth CORS list. Old DNS CNAME for whopxl.mana.how stays for now;
  no harm.

While we were here: who-api.mana.how/api/decks is the right probe for
who-server's deck catalogue (the /api/decks route lives on who-api, not
on who.mana.how, which is the SSR shell).

Live: status.mana.how shows 58/59 UP; the last 'whopxl' entry will
fall off after VM's TSDB rolls past the probe_success staleness window.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:39:50 +02:00
Till JS
82db4eb794 feat(cards-web): Anki import carries images + audio along
Closes the gap from the first Anki-import pass: media files are now
uploaded alongside the cards instead of stripped.

Pipeline:
  • parse.ts: read the .apkg's `media` JSON manifest, build a
    filename → ZIP-entry map (Anki names files numerically; the
    manifest is the original-name lookup table). Returned alongside
    decks/cards as parsed.mediaByFilename.
  • import.ts: collectMediaRefs() walks every card field, gathers
    distinct <img src=…> and [sound:…] references — orphan media
    bundled in the .apkg are ignored. Referenced files upload to
    mana-media in 4 parallel workers, returning a filename → URL map.
  • parse.sanitizeAnkiHtml() now takes that map: <img src="X"> →
    <img src="<url>" alt="" />, [sound:Y] → <audio controls
    preload="metadata" src="<url>"/>. The remaining-tag stripper has
    a negative lookahead for img/audio/video/source so the new tags
    survive.
  • CardFace already renders <img>/<audio> via @mana/cards-core's
    DOMPurify config (the image/audio attachments commit added the
    allowlist), so the freshly-imported cards just work in the
    learn session.
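A sketch of the collectMediaRefs() step: walk card fields and gather the distinct `<img src=…>` and `[sound:…]` references, so orphan media bundled in the .apkg never gets uploaded. The regexes are simplified assumptions.

```typescript
function collectMediaRefs(fields: string[]): Set<string> {
  const refs = new Set<string>();
  for (const field of fields) {
    // <img ... src="X"> references
    for (const m of field.matchAll(/<img[^>]*\ssrc="([^"]+)"/g)) refs.add(m[1]);
    // Anki [sound:Y] references
    for (const m of field.matchAll(/\[sound:([^\]]+)\]/g)) refs.add(m[1]);
  }
  return refs; // distinct by construction — a Set deduplicates repeats
}
```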

UI:
  • AnkiImport gains an "uploading-media" stage with X / N progress
    bar between preview and card creation.
  • Preview now shows the media count, copy promise updated from
    "Bilder/Audio bleiben raus" to "Bilder + Audio werden mit
    übernommen".
  • Result block reports `N Medien übernommen · M fehlgeschlagen`.

Phase-2 ideas: per-user media scoping in mana-media; verify-then-
upload via /media/hash/:sha256 to skip duplicates from re-imports.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:25:43 +02:00
Till JS
0ae1e70bf1 fix(monitoring): status-page covers all standalone apps + restore who.mana.how routing
Audit revealed status.mana.how was probing only the unified mana-app
path-routes (mana.how/{module}) plus a couple of GPU services. None
of the standalone deployments were monitored, and three probe targets
were stale.

Changes:

- prometheus.yml blackbox-web: drop mana.how/{context,who} (context
  module was dropped 2026-04-29; mana.how/who never existed —
  /who is a standalone stack on its own subdomain). Add the eight
  hosts that DO have separate deployments today: whopxl, manavoxel,
  memoro (landing), cards (Phase-1 spinoff), who.mana.how/cantina,
  npm (Verdaccio).
- prometheus.yml blackbox-api: add memoro-api/health,
  memoro-audio/health, who-api.mana.how/api/decks,
  admin.mana.how/health (admin's root is auth-walled, only /health
  returns 200).
- prometheus.yml blackbox-gpu: add gpu-llm.mana.how/health (was
  missing; gpu-stt/tts/img/video were in, gpu-llm was somehow not).
- cloudflared-config.yml: restore who.mana.how → :5092 +
  who-api.mana.how → :3092. The DNS CNAME points at the Mini tunnel
  but the route entries had been lost during a previous compose
  cleanup, so every who.* request was hitting the catch-all 404 and
  the standalone Bun stack was effectively orphaned at the edge
  (PM2 + LaunchAgent all healthy on Mini, just no public route).

Live state after rollout: status.mana.how shows 57/59 services UP,
the two remaining DOWN are pre-existing — photon-self (Phase-2c
cross-LAN routing limitation, documented in PLAN_OPTION_C.md) and
whopxl-web (container not running on the Mini, separate issue).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:09:31 +02:00
Till JS
daa1ef0513 feat(cards): image / audio attachments on cards via mana-media
Cards can now carry image, audio, and video attachments uploaded to
mana-media (the existing CAS service that already powers picture,
photos, wardrobe, etc.).

Pipeline:
  • lib/media/upload.ts wraps POST /api/v1/media/upload (multipart,
    app=cards). Returns { id, url, kind } with the right variant URL
    per kind (medium for images, full file for audio/video). 25 MB
    cap matches the website-upload pattern.
  • mediaToFieldSnippet(): drops Markdown ![]() for images; raw
    <audio>/<video controls> for the others — the user can later
    tweak attributes by hand.
  • Deck-detail card editor gains a "📎 Anhang" button next to every
    text field (front/back/cloze). Pick → upload → snippet appended
    to the field's content. Loading + error states surfaced inline.

Render:
  • @mana/cards-core/render.ts whitelists `audio`, `source`, `video`
    plus the `controls`/`preload`/`src`/`type` attrs in DOMPurify so
    inline media survives sanitization. Markdown's <img> already
    passed through the default policy.

Wiring:
  • hooks.server.ts injects __PUBLIC_MANA_MEDIA_URL__.
  • compose adds PUBLIC_MANA_MEDIA_URL_CLIENT=https://media.mana.how
    to cards-web.

Phase 2 ideas: drag-drop directly into the textarea, paste-from-
clipboard for screenshots, mana-media auth scoping per user, Anki
import bringing media files along.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:52:53 +02:00
Till JS
1f2206f10b feat(cards-web): PDF input for AI generator + study activity heatmap
PDF input:
  • lib/ai/pdf.ts wraps pdfjs-dist (Apache-2.0). Worker is bound via
    Vite's `?worker` suffix so the heavy parsing runs off-main-thread.
  • AiCardGen gains a "📄 PDF laden" button that pipes extracted text
    into the same textarea — the user can review/trim before
    generation. Reading state shows file name + page count + chars.

Heatmap:
  • queries.useStudyHeatmap(weeks=12) fills gaps with count=0 so the
    grid renders without holes.
  • StudyHeatmap.svelte: 7 rows × N columns (Monday-anchored), 5
    intensity buckets (neutral → emerald-300), tooltip per cell with
    date + count, legend strip.
  • Mounted on the dashboard between the deck list and the Anki import
    so the user lands on a quick visual progress receipt every visit.
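The gap-filling step can be sketched as: given sparse {date → count} study data, emit a dense per-day series so the 7×N grid renders without holes. Function name and shapes are illustrative.

```typescript
function fillHeatmapGaps(
  counts: Map<string, number>, // keys as YYYY-MM-DD
  start: Date,
  days: number,
): { date: string; count: number }[] {
  const out: { date: string; count: number }[] = [];
  const d = new Date(start);
  for (let i = 0; i < days; i++) {
    const key = d.toISOString().slice(0, 10);
    out.push({ date: key, count: counts.get(key) ?? 0 }); // missing days → 0
    d.setUTCDate(d.getUTCDate() + 1);
  }
  return out;
}
```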

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:37:01 +02:00
Till JS
d8a35afd99 infra(gpu-box): commit GPU-Box compose to repo + Phase 2e docs
The GPU-Box stack has been carrying real production workload since
Phase 2c (monitoring) but only existed as a /srv/mana/docker-compose.gpu-box.yml
on the box itself. If the WSL filesystem dies, none of it is
reproducible. Bring the file into infrastructure/ as the source of
truth (the live file on the box must be kept in sync; manual rsync
for now since there's no CD into the GPU-Box).

Plus:
- infrastructure/.env.gpu-box.example as the secrets template
- infrastructure/README.md describing what runs there + how the
  Cloudflare-tunnel ingress is API-managed (not config.yml)
- .gitignore for the live infrastructure/.env.gpu-box copy
- MAC_MINI_SERVER.md status-page section now points at the GPU-Box
  setup instead of the long-stopped Mini container
- PLAN_OPTION_C.md: Phase 2e row + GPU-Box service tree update

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:28:49 +02:00
Till JS
e3cca9e271 feat(cards-web): PWA installability + AI card generation from text
PWA:
  • SvelteKitPWA in vite.config via @mana/shared-pwa preset (name +
    theme color). Layout injects pwaInfo.webManifest.linkTag so
    Chrome's manifest pickup works → install icon + A2HS.
  • Service worker registers automatically (workbox auto-update); the
    cards-already-cached path now keeps working offline as long as the
    user has visited a deck once.

AI generation:
  • lib/ai/generate.ts — direct fetch to mana-llm /v1/chat/completions
    (OpenAI-compatible, mirrors playground module). System prompt
    constrains the model to a JSON array of {front, back}. Code-fence
    stripping handles models that wrap JSON in ```json blocks despite
    the prompt.
  • AiCardGen.svelte — text in, list of generated cards out, per-card
    checkbox preview, "X übernehmen" creates them via cardStore.
    Phase-1 lands them as basic cards.
  • Mounted on the deck-detail page next to "Neue Karte".
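The fence-stripping-then-parse step can be sketched like this (an illustrative sketch of the idea in lib/ai/generate.ts, not the literal code):

```typescript
// Strip an optional ```json ... ``` wrapper before JSON.parse — some
// models fence their output despite a JSON-only system prompt. Cards
// missing a string front/back are dropped. Illustrative sketch only.
function parseCards(raw: string): Array<{ front: string; back: string }> {
  let text = raw.trim();
  const fenced = text.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  if (fenced) text = fenced[1];
  const parsed = JSON.parse(text);
  if (!Array.isArray(parsed)) throw new Error("expected a JSON array");
  return parsed.filter(
    (c) => typeof c?.front === "string" && typeof c?.back === "string"
  );
}
```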

Wiring:
  • hooks.server.ts injects __PUBLIC_MANA_LLM_URL__.
  • compose adds PUBLIC_MANA_LLM_URL_CLIENT=https://llm.mana.how to the
    cards-web service.
  • app.d.ts gets vite-plugin-pwa virtual-module references so
    svelte-check can resolve `virtual:pwa-info`.

Phase 2: PDF/image input, mana-credits gating, model selector,
streaming preview as cards arrive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:24:42 +02:00
Till JS
778e5a2ad7 chore(infra): drop status-page-gen from Mini, status.mana.how → GPU-Box tunnel
Phase 2e cleanup. status-page-gen + a dedicated nginx now run on the
GPU-Box (sparse repo clone provides the generator script + mana-apps.ts,
hourly git-pull via systemd timer). Container queries VictoriaMetrics
locally over docker-network ('http://victoriametrics:9090'), no public
vm.mana.how endpoint required — that hostname is also gone from the
GPU tunnel config (v25 → v26 effectively, removed in same PUT that
added status.mana.how).

DNS for status.mana.how now points at the mana-gpu-server tunnel.
Mini-tunnel ingress for it is removed; the previous 'mana-status-gen'
container on the Mini was stopped + rm'd.

Side benefit: closes the inode-stale-bind-mount bug that took
status.mana.how down for a few hours — single-file bind mounts on the
Mini break whenever the CD git-checkout rewrites the source file. The
GPU-Box mounts the same files but the systemd timer git-pulls
in-place, preserving the inode.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:22:20 +02:00
Till JS
22cce59c3a feat(cards-web): Anki .apkg import — first acquisition lever
Anki users have decks they hate the UI for but can't migrate. This
gives them a one-drop path: drop a .apkg on the homepage, see a
preview, confirm, the cards land in our DB and start syncing.

Pipeline (lib/anki/):
  • parse.ts — JSZip → sql.js (WASM SQLite) → walk Anki's three core
    tables (col, notes, cards). Models (col.models JSON) classify each
    note: type=0 → basic / basic-reverse, type=1 → cloze. Anki cards
    table has one row per generated learnable unit (basic-reverse = 2,
    cloze = N) — we dedupe at the note level since our model
    regenerates those automatically via reviewStore.ensureReviewsForCard.
  • import.ts — every Anki deck becomes one of ours (1:1, "::" → " / ");
    fields go through sanitizeAnkiHtml (drops <img>, [sound:], maps
    <b>/<i> to Markdown). Orphans land in a fallback "Anki-Import"
    deck.
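The sanitize step can be sketched roughly like this (a minimal approximation of sanitizeAnkiHtml under the rules named above; the shipped implementation handles more cases):

```typescript
// Rough sketch of the sanitize idea: drop the media references Anki
// fields carry (<img>, [sound:...]) and map simple HTML emphasis to
// Markdown. Minimal approximation, not the real sanitizeAnkiHtml.
function sanitizeAnkiField(html: string): string {
  return html
    .replace(/<img[^>]*>/gi, "")       // images are Phase 2
    .replace(/\[sound:[^\]]*\]/gi, "") // audio is Phase 2
    .replace(/<\/?b>/gi, "**")         // <b>bold</b> → **bold**
    .replace(/<\/?i>/gi, "*")          // <i>italic</i> → *italic*
    .replace(/<br\s*\/?>/gi, "\n")
    .replace(/<[^>]+>/g, "")           // strip any remaining tags
    .trim();
}
```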

UI: AnkiImport.svelte on the decks list — drag-drop or click,
parse → preview ("X Decks, Y Karten"), confirm → import. No images,
no audio, no review history (cards are FSRS-new on import) — those
are Phase 2.

Deps: sql.js 1.14, jszip 3.10, @types/sql.js. WASM blob copied into
static/ so SvelteKit serves it at /sql-wasm.wasm.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:02:29 +02:00
Till JS
0c2df08149 fix(status-page): point at vm.mana.how (GPU-Box VM) instead of localhost:9090
After Phase 2c the VM moved off the Mini, but the status-page
generator still queried localhost:9090 — and Colima containers can't
reach the GPU-Box's LAN IP through the Mini's bridge. Result:
status.mana.how showed 0/0 services UP across the board.

Routed VM through a new vm.mana.how Public Hostname on the
mana-gpu-server tunnel (config v24) so the Mini-side container reaches
it the same way browsers do. /api/v1/query path is identical, no
script changes required. Network-mode no longer needs to be host now
that the URL is public.

Verified live: status.json reports 49/53 services UP.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:58:19 +02:00
Till JS
009fb3589e feat(cards-web): streak indicator + per-deck due counts
Two small UI surfaces over data the backend already computes:
  • Header shows current streak (🔥 N) — useStreak() walks back through
    cardStudyBlocks until it finds a gap.
  • Decks list shows a "fällig"-pill per deck and a total in the header
    subline — useDueCountByDeck() joins cardReviews→cards once and
    groups by deckId.

Both queries live in lib/queries.ts and use Dexie liveQuery, so the
header refreshes automatically the moment a learn session ticks the
study block forward.
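The streak walk reduces to a pure function over study-day dates, roughly (hypothetical shape; the real useStreak() reads cardStudyBlocks via Dexie liveQuery):

```typescript
// Pure sketch of the streak logic: walk backwards day by day and count
// until the first day with no study block. A not-yet-studied today
// doesn't break the streak. Hypothetical helper, not the real hook.
function currentStreak(studyDays: Set<string>, today: string): number {
  let streak = 0;
  const cursor = new Date(today + "T00:00:00Z");
  if (!studyDays.has(today)) cursor.setUTCDate(cursor.getUTCDate() - 1);
  while (studyDays.has(cursor.toISOString().slice(0, 10))) {
    streak++;
    cursor.setUTCDate(cursor.getUTCDate() - 1);
  }
  return streak;
}
```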

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:45:04 +02:00
Till JS
585bee42be docs(mac-mini): refresh container counts + memory budget after Phase 2c+2d
53→45 running, 9.6 GiB nominal → ~6 GiB nominal, build-app.sh's
monitoring-stop trigger no longer fires. Cross-link to PLAN_OPTION_C.md
for the GPU-box-side picture (grafana/git/stats/glitchtip live there
via the mana-gpu-server tunnel).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:44:22 +02:00
Till JS
0db64cb47b chore(infra): drop migrated services from Mini compose + tunnel config
Phase 2c+2d cleanup. The 14 services that moved to the GPU-Box stack
(grafana, victoriametrics, loki, tempo, promtail, alertmanager,
vmalert, pushgateway, blackbox-exporter, alert-notifier, umami,
glitchtip + worker, forgejo) are now stopped on the Mini and stable
on the GPU box, so the rollback insurance can come out:

- docker-compose.macmini.yml: drop 14 service blocks (-369 lines) +
  the now-orphan named volumes (victoriametrics_data, loki_data,
  alertmanager_data, grafana_data, tempo_data).
- cloudflared-config.yml: drop the four hostnames whose DNS already
  points at the mana-gpu-server tunnel — Mini-tunnel ingress for them
  has been dead routing since 2026-05-06, removing the rules just makes
  the file match reality. The hostnames now live in the GPU tunnel's
  dashboard config (token-managed).

Containers + volumes stay on the Mini for now; running
`docker compose -f docker-compose.macmini.yml --env-file .env.macmini up -d --remove-orphans`
on the box drops them in one go when ready.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:39:43 +02:00
Till JS
f422fd6779 fix(shared-error-tracking): point main at src/, strip dashes from Glitchtip DSN
Two real-world fixes from wiring mana-auth to Glitchtip:

1. The compiled dist/ folder was excluded from Docker builds via
   .dockerignore's '**/dist' rule, so any container that pnpm-installed
   the package found node_modules/@mana/shared-error-tracking but no
   loadable entry point ('Cannot find module' at startup). Match the
   pattern shared-hono uses — point main + types + exports straight at
   src/*.ts. Bun runs TS natively and the type-only consumers don't
   care.

2. Glitchtip projects expose UUID-format public keys (`556fbd2e-a720-…`)
   in their generated DSNs. @sentry/node v9 tightened its DSN regex to
   alphanumeric-only, so it silently rejects the DSN with "Invalid
   Sentry Dsn" and never sends events. Strip the dashes from the
   user/key portion before handing it to Sentry — the Glitchtip ingest
   endpoint accepts both forms over the wire, so no server change.
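The normalization amounts to stripping dashes from the key portion of the DSN URL, roughly (illustrative sketch, not the literal shared-error-tracking code; the full key in the test is a made-up example):

```typescript
// Sketch of the DSN fix: Glitchtip hands out UUID-shaped public keys,
// while @sentry/node v9's DSN regex only accepts an alphanumeric key —
// so strip the dashes from the user/key portion before Sentry.init.
function normalizeGlitchtipDsn(dsn: string): string {
  const url = new URL(dsn);
  url.username = url.username.replace(/-/g, "");
  return url.toString();
}
```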

Plus the missing Dockerfile COPY lines for shared-error-tracking and
eslint-config (root package.json devDeps reference the latter, which
breaks pnpm-filter installs that don't include it in the build context).

Verified end-to-end: 4 issues now in Glitchtip from mana-auth
(2 manual probes + 1 captureException + 1 401 from a
real /api/v1/me/data request without auth).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:34:54 +02:00
Till JS
1bac7cf38a fix(mana-auth): COPY packages/shared-error-tracking in Dockerfile
Mirror the same fix as cards-core (dd2e60954): the new
shared-error-tracking workspace dep needs an explicit COPY line
in the installer stage, otherwise pnpm-install-with-filter can't
find the package and the runtime container is missing it under
node_modules/@mana/.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:03:25 +02:00
Till JS
96c06162e6 fix(cards-web): inject __PUBLIC_MANA_AUTH_URL__ on SSR — login was 404
createManaAuthStore from @mana/shared-auth-ui reads the auth backend
URL from window.__PUBLIC_MANA_AUTH_URL__ at runtime. Without the
injection it falls back to a relative URL, so signIn POSTs land at
cards.mana.how/api/v1/auth/login (SvelteKit 404, HTML body) instead
of auth.mana.how/api/v1/auth/login.

Adds a hooks.server.ts modeled after the mana-web one, but trimmed
to the two URLs the standalone app actually consumes today (auth +
sync). The values come from PUBLIC_MANA_AUTH_URL_CLIENT and
PUBLIC_MANA_SYNC_URL_CLIENT in docker-compose.macmini.yml.
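The injection pattern looks roughly like this (a minimal sketch assuming the env names from compose; the real hooks.server.ts is modeled after mana-web's and may differ):

```typescript
// Minimal sketch of the hooks.server.ts idea: serialize the public
// backend URLs as window globals so client code like
// createManaAuthStore can read window.__PUBLIC_MANA_AUTH_URL__.
// Env names follow the compose file; shapes are illustrative.
const PUBLIC_URLS = {
  __PUBLIC_MANA_AUTH_URL__: process.env.PUBLIC_MANA_AUTH_URL_CLIENT ?? "",
  __PUBLIC_MANA_SYNC_URL__: process.env.PUBLIC_MANA_SYNC_URL_CLIENT ?? "",
};

function buildInjectionScript(urls: Record<string, string>): string {
  const assignments = Object.entries(urls)
    .map(([key, value]) => `window.${key} = ${JSON.stringify(value)};`)
    .join(" ");
  return `<script>${assignments}</script>`;
}

// In SvelteKit this string would be spliced into the SSR'd HTML via
// resolve(event, { transformPageChunk }) inside the handle hook.
```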

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:02:29 +02:00
Till JS
8b71d290f8 feat(mana-auth): wire Glitchtip/Sentry error tracking via shared-error-tracking
First server-side error-tracking integration. Pattern mirrors the
client-side one in apps/mana/apps/web/src/hooks.client.ts:

- pull @mana/shared-error-tracking into mana-auth's deps (workspace pkg
  with @sentry/node + a no-op fallback when GLITCHTIP_DSN is unset)
- call initErrorTracking() at the top of services/mana-auth/src/index.ts
  before the rest of the module body executes — this lets Sentry hook
  uncaughtException / unhandledRejection before any Hono handlers register
- wrap app.onError so non-HTTPException throws also flow into
  captureException with path/method/query context. HTTPExceptions are
  intentional 4xx/422 and stay out of the issue list (otherwise every
  401 from a stale session would page somebody at 3am)
- compose: pass GLITCHTIP_DSN_MANA_PLATFORM through as GLITCHTIP_DSN per
  service so each container's events get tagged with serverName='mana-auth'

DSN itself isn't in the repo; lives in .env.macmini on the Mac Mini and
is referenced from the Glitchtip credentials doc.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:55:07 +02:00
Till JS
dd2e609545 fix(docker): COPY packages/cards-core in SvelteKit Dockerfiles
The cards-spinoff commit (0a544ac41) added @mana/cards-core as a
workspace dependency for apps/mana/apps/web but didn't update the
two Dockerfiles that COPY-and-pnpm-install the workspace into the
image. CD's --no-cache build for mana-web therefore failed at
`pnpm install` with ERR_PNPM_WORKSPACE_PKG_NOT_FOUND, leaving the
container on a stale pre-cleanup image whose ListView28 chunk still
referenced the dropped contextSpaces Dexie table — every mana.how
route 500'd.

Adding the COPY line to both files (the shared sveltekit-base layer
and the per-app layer that does a second pnpm install) makes the
package available to the workspace resolver and lets the build go
through.

Plus the Phase 2c-d doc updates that piled up today (Glitchtip
on dedicated GPU-box stack, gitignore for *_CREDENTIALS.md files).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:47:07 +02:00
Till JS
86f14bcc19 fix(cards-web): drop unused @mana/shared-crypto dep — not in sveltekit-base image
The Phase-1 crypto wrapper is a no-op stub; it never actually imports
@mana/shared-crypto. The dep was forward-looking for the Phase-2 vault
wiring, but it broke `pnpm install` inside the cards-web Dockerfile
because the sveltekit-base image only ships a curated subset of
@mana/* packages and shared-crypto isn't one of them.

The wiring will come back when the vault roundtrip is on, together
with a base-image bump.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:46:50 +02:00
Till JS
f94c047daa chore: silence pre-existing svelte-check a11y warnings
Pre-push hook runs svelte-check with --fail-on-warnings; nine
long-standing warnings in unrelated files (forms / website-blocks)
were blocking otherwise-clean pushes.

Each <label> here is a visual label whose control follows on the next
line — accessible to a screen reader through proximity but not through
a `for=`/`id` association. The state_referenced_locally cases capture
a prop on first render by design (re-running the hook on prop change
would be a different feature). The <nav role=tablist> is the existing
tab-strip semantic.

All seven sites get scoped svelte-ignore comments rather than functional
rewrites — the goal is to unblock CI, not redesign these components.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:34:36 +02:00
Till JS
0a544ac410 feat(cards): Phase-1 Spinoff — standalone cards.mana.how + cards-core extraction
Builds out the Cards spinoff end-to-end so the standalone app at
cards.mana.how shares its data layer with the in-mana cards module
through a single pure-utility package.

Why a spinoff and not just a deeper module: per the GUIDELINES, Cards
gets its own brand + URL while reusing mana-auth, mana-sync, and the
mana-credits/billing stack. The in-mana module under mana.how/cards
stays untouched as the integrated experience.

Phase 0 — mana-modul foundation
  • New tables cardReviews + cardStudyBlocks (Dexie v61) + plaintext
    classification in the crypto registry.
  • LocalCard learns a {type, fields} shape; legacy front/back columns
    kept as a back-compat mirror so older builds keep rendering.
  • FSRS v6 scheduler + Cloze parser + Markdown render pipeline.
  • UI in apps/mana/.../routes/(app)/cards/ gets a learn session
    (learn/[deckId]), 4-type card editor, due-counter, markdown lists.
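The cloze side of Phase 0 can be sketched as follows (assuming the common Anki-style {{c1::…}} syntax; illustrative, not the extracted cards-core parser):

```typescript
// Sketch of the cloze idea: pull Anki-style {{c1::answer::hint}}
// markers out of a field so each cloze index can become its own
// learnable unit. Assumes the common Anki syntax; the real parser
// in @mana/cards-core may differ in edge cases.
type ClozeSpan = { index: number; answer: string; hint?: string };

function parseCloze(text: string): ClozeSpan[] {
  const spans: ClozeSpan[] = [];
  const re = /\{\{c(\d+)::(.+?)(?:::(.+?))?\}\}/g;
  for (const m of text.matchAll(re)) {
    spans.push({ index: Number(m[1]), answer: m[2], hint: m[3] });
  }
  return spans;
}
```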

Phase 1 — standalone (apps/cards/apps/web)
  • SvelteKit 2 + Svelte 5 + Tailwind 4, port 5180.
  • Own Dexie 'cards' DB with a slim 5-table schema.
  • Own sync engine: pending-changes hooks, 1 s push / 5 s pull against
    POST /sync/cards, server-apply with suppression to avoid ping-pong.
  • Auth-Gate via @mana/shared-auth-ui (LoginPage / RegisterPage).
  • Encryption hooks at every write/read/apply path, currently no-op
    stubs — flipping to real vault-backed AES-GCM is a single-file
    change in src/lib/data/crypto.ts.

Shared package — @mana/cards-core
  • Pulls types, cloze, card-reviews, FSRS wrapper, and Markdown
    renderer out of the mana module so both frontends import from one
    source. mana-modul keeps thin re-export shims so consumers don't
    need to change imports.
  • 19 vitest tests carried over from the mana module.

Server-side wiring
  • cards.mana.how added to mana-auth PRODUCTION_TRUSTED_ORIGINS and
    its CORS_ORIGINS env (sso-config.spec.ts stays green).
  • New cards-web container in docker-compose.macmini.yml (mirrors
    manavoxel-web pattern, 128m, depends on mana-auth healthy).
  • cloudflared-config.yml repoints cards.mana.how from :5000 (the
    unified mana-web container) to :5180. mana.how/cards is unchanged.

Cleanup
  • Removed an unrelated 2026-03/04 NestJS+Supabase+Expo experiment
    that was lingering under apps/cards/ (apps/landing, supabase/,
    .github/workflows, MANA_CORE_*.md, etc.). It predated this plan
    and would have confused future readers.

Validation
  • svelte-check on mana-web: 0 errors over 7697 files
  • svelte-check on cards-web: 0 errors over 3481 files
  • vitest on cards-core: 19/19 pass
  • pnpm check:crypto: 214 tables classified
  • bun test sso-config.spec.ts: 8/8 pass
  • vite build on cards-web: green

Not done in this commit (deliberate)
  • Real encryption (vault roundtrip) — Phase 2.
  • WebSocket-driven pull (5 s polling for now).
  • Mobile/landing standalone surfaces — Phase 2/3.
  • The actual production cutover on the Mac mini (build, deploy,
    cloudflared sync) — config is staged, deploy is a user action.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:20:43 +02:00
Till JS
950b822070 docs(cards): Phase-1 spinoff guidelines — core gameloop, stack, data path
Binding guidelines for the Cards spinoff (flashcard app with spaced
repetition). Status: planning phase, no code yet. The doc serves as
non-negotiable context for PRs once building starts.

Key decisions:
- Game-dev principle: Phase 1 builds ONLY the core gameloop (learn
  session). AI generation, voice, sharing, Stripe, mobile,
  dashboards = Phase 2+.
- Open-source only: every dep needs an OSI-approved license.
- Central Mana building blocks are mandatory, no home-grown
  auth/sync/analytics.
- Data contract with the existing mana module: same Postgres tables
  (cardDecks/cards plus new cardReviews/cardStudyBlocks),
  appId='cards'. Schema changes roll out jointly, not unilaterally.
- FSRS v6 via ts-fsrs for the spaced-repetition algorithm.
- Phase 1 has no service of its own — the read/write path goes
  exclusively through IndexedDB → mana-sync → Postgres.

The Definition of Done in §7 is the acceptance list for the MVP.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:39:26 +02:00
Till JS
c8a292b891 tunnel: route memoro.mana.how + memoro-app/api/audio to standalone Memoro stack
memoro has been a repo of its own (Code/memoro/) for a while, with
its own compose stack on the Mini (~/projects/memoro-deploy/). The
tunnel nevertheless still pointed at the unified mana web app (port
5000) — i.e. memoro.mana.how rendered only the Mana dashboard, not
the actual Memoro marketing landing.

Four hostnames in a dedicated Memoro block:
  memoro.mana.how       → :3120  (Astro landing, marketing site)
  memoro-app.mana.how   → :3130  (SvelteKit SPA, web app)
  memoro-api.mana.how   → :3110  (API)
  memoro-audio.mana.how → :3101  (audio service)

memoro-app vs memoro kept at the first subdomain level so Cloudflare
Universal SSL applies without wildcard config.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:39:12 +02:00
Till JS
c14aef9f85 docs(infra): Mac-Mini ↔ Windows-GPU-Box workload-split — Plan Option C
Auxiliary services (monitoring, Forgejo, Glitchtip, Umami) move from
the load-critical Mac Mini box to the Windows GPU box, which sits at
95% idle system RAM anyway. The production hot path stays on the
Mini, no money spent, the single point of failure at the site is
reduced.

Status 2026-05-06: Phases 0–2b shipped (WSL2 Docker, Grafana
cross-box, Forgejo, Umami healthy). Phase 2c (Loki+VM+alerts) and
Phase 4 (Cloudflare cutover for grafana.mana.how) need sessions of
their own — both are cleanup of pre-existing misconfiguration, no
architectural risk.

Hardware inventory added to WINDOWS_GPU_SERVER_SETUP.md: Ryzen 9
5950X, 64 GB DDR4, RTX 3090, 660 GB free on C:. WSL2 capped at
24 GB / 12 vCPUs so AI scheduled tasks keep a >30 GB reserve.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:39:01 +02:00
Till JS
1b579ab0b0 chore(mana-events): move from port 3065 to 3115 — collision with platform mana-media
The platform repo (Code/mana/) reserves 3065 for mana-media; to avoid
double allocation, mana-events (public RSVP / event sharing) moves to
3115. The new 311x port block is unused and structurally belongs next
to mana-mail (3042) and the other 30xx service ports.

Touches every hard-coded 3065 default — server config, webapp config,
SSR routes (rsvp/[token], status), Playwright webserver setup, e2e
spec. PUBLIC_MANA_EVENTS_URL in .env.development carries both
variables along.

PORT_SCHEMA.md now records the change with date + rationale — future
me shouldn't have to guess why this port breaks out of the 30xx range.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:38:46 +02:00
Till JS
546b94d472 feat(personas): move admin + internal endpoints from mana-auth to apps/api
Closes the platform/product split gap: HEAD's apps/api/src/index.ts
has referenced personasInternalRoutes / personasAdminRoutes since the
Forms M10d commit — but the implementation wasn't in the repo yet.
The build was structurally broken until this point.

What moves from mana-auth to apps/api:

  apps/api/src/modules/personas/
    ├── schema.ts          — pgSchema('personas') with personas /
    │                        persona_actions / persona_feedback;
    │                        userId is plain text (a cross-DB FK to
    │                        mana-auth's auth.users no longer works
    │                        after the split).
    ├── internal-routes.ts — service-key gated GET /due, POST /:id/actions
    │                        and POST /:id/feedback. Append-only +
    │                        idempotent via deterministic row ids
    │                        (tickId-i-tool / tickId-module).
    └── admin-routes.ts    — admin-JWT gated CRUD; calls mana-auth via
                             /api/v1/admin/users + /api/v1/auth/register
                             + /api/v1/internal/users/:id/persona-stamp
                             for the user lifecycle.

The persona-runner client now points at apps/api:

  - config.ts: new apiUrl field (default http://localhost:3060,
    env MANA_API_URL); authUrl stays for /api/v1/auth/login + spaces.
  - clients/mana-auth-internal.ts: three calls now hit
    /api/v1/personas/internal/* instead of mana-auth's
    /api/v1/internal/personas/* — the file name stays to keep the
    call-site diff small.
  - index.ts: ManaAuthInternalClient gets config.apiUrl instead of authUrl.

Seed/cleanup scripts:

  - --api= as the preferred flag, --auth= as a legacy alias (cached
    shell history would otherwise break hard).
  - default http://localhost:3060, env MANA_API_URL.
  - endpoint paths rewritten:
      POST   /api/v1/admin/personas        → /api/v1/personas/admin
      DELETE /api/v1/admin/personas/:id    → /api/v1/personas/admin/:id

drizzle.config.ts: schema array + schemaFilter extended with
'personas'. The DB push is a mandatory step before first boot,
otherwise /due returns 42P01.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:38:29 +02:00
Till JS
795b39e065 feat(forms): M10d headless wave-cron — server-worker + private internal_meta
A real server-side cron for recurring forms — wave-send now runs
independently of the owner's tab state. The previous M10c webapp-side
scheduler stays active as belt and suspenders (idempotent).

Architecture:
1. **Owner-private internal_meta on unlisted snapshots**
   - Drizzle: new jsonb column `internal_meta` (Drizzle migration
     0001_internal_meta.sql).
   - public-routes.ts strips it structurally — the explicit select()
     projection doesn't include it (recipients + sender would
     otherwise leak via the share link).
   - the publish route accepts it in the body, persists it on insert +
     update.
   - ALLOWED_COLLECTIONS extended with 'lasts' and 'forms' (this was a
     latent bug — without the addition,
     formsStore.setVisibility('unlisted') would have gotten a 400
     back; M4b presumably never ran end-to-end).

2. **shared-privacy publishUnlistedSnapshot**
   - PublishUnlistedOptions extended with an optional `internalMeta`.
     Forwarded to the /api/v1/unlisted/:collection/:recordId body.

3. **Webapp formsStore**
   - lib/wave-mail.ts: buildFormInternalMeta(form, broadcastSettings)
     builds the owner-private blob: { kind, recurrence: {frequency,
     recipientEmails, lastSentAt}, sender: {fromEmail, fromName,
     replyTo, legalAddress}, formMeta: {title, description} }.
     Returns null when prerequisites are missing (no recurrence, no
     recipients, missing broadcast settings).
   - stores/forms.svelte.ts: setVisibility / regenerateUnlistedToken /
     setUnlistedExpiry load broadcastSettings via Dexie + decrypt,
     build internalMeta, and pass it to publishUnlistedSnapshot. The
     form is decrypted before the buildFormInternalMeta call.

4. **mana-mail internal bulk-send route**
   - createInternalRoutes(accountService, broadcastOrchestrator,
     maxRecipients) — signature extended.
   - New POST /api/v1/internal/mail/bulk-send: same payload shape as
     the user-facing /v1/mail/bulk-send but userId comes from the body
     instead of the JWT. The X-Service-Key gate sits at the
     /api/v1/internal/* prefix. The audit trail carries principalId
     from the body. Cap = 5000 (same value as user-facing).

5. **apps/api forms wave-worker**
   - 5-min setInterval, advisory-lock gated (key 0x464f5257 'FORW').
   - Tick: select snapshots WHERE collection='forms' AND
     internal_meta IS NOT NULL AND revoked_at IS NULL. Filter on
     kind='forms-recurrence' + isWaveDue (lastSentAt + period <= now;
     never-sent fires immediately). For each due snapshot: build the
     HTML/text mail body (mirrors the webapp wave-mail render), POST
     to the mana-mail internal bulk-send with X-Service-Key + userId,
     then jsonb_set internal_meta.recurrence.lastSentAt. Per-snapshot
     errors are logged as console.warn, the tick keeps running.
   - Disable via FORMS_WAVE_WORKER_DISABLED=true (tests /
     multi-replica deployments).
   - Wired into apps/api/src/index.ts next to startArticleImportWorker().
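The worker's due check reduces to a pure predicate, roughly like this (illustrative sketch; the period lengths are assumptions, not the shipped values):

```typescript
// Pure sketch of the isWaveDue predicate: a snapshot is due when the
// last send plus the recurrence period lies in the past; a never-sent
// recurrence fires on the first tick. Period lengths are assumptions.
const PERIOD_MS: Record<string, number> = {
  weekly: 7 * 24 * 60 * 60 * 1000,
  monthly: 30 * 24 * 60 * 60 * 1000, // assumed approximation
};

function isWaveDue(frequency: string, lastSentAt: string | null, now: Date): boolean {
  const period = PERIOD_MS[frequency];
  if (!period) return false;    // unknown frequency never fires
  if (!lastSentAt) return true; // never sent → fire immediately
  return new Date(lastSentAt).getTime() + period <= now.getTime();
}
```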

Trade-offs:
- internal_meta is rebuilt from broadcast settings on setVisibility/
  regenerate/setExpiry — if the user later changes broadcast settings
  (e.g. a new fromEmail), the form must be re-published so the
  snapshot's internal_meta gets updated. Documented; a future patch
  could surface a "settings drift" warning in the UI.
- The worker's lastSentAt update does NOT flow back into the webapp
  form (settings.recurrence.lastSentAt is encrypted, the server can't
  write it). The owner UI shows the older lastSentAt from manual
  sends; auto-cron sends are visible in the server logs. Future
  patch: GET /api/v1/forms/:id/recurrence-status (auth) returns the
  snapshot.internal_meta, the UI renders the auto-cron state.
- The webapp-side wave scheduler (M10c) keeps running in parallel —
  with the owner's tab open, both can fire. Idempotent via the
  lastSentAt check (weekly/monthly buckets), but in theory a
  double-fire could happen if the calls land within 1 ms of each
  other. Ignorable in the real world; future patch: the scheduler
  reads internal_meta.lastSentAt from the server-side state.

apps/api builds (1776 modules). mana-mail builds (523 modules).
svelte-check: 0 errors in forms/. Forms tests 70/70 unchanged.

DB migration 0001_internal_meta.sql must be applied manually (see
feedback memory: hand-authored SQL migrations are not part of
pnpm setup:db).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 17:18:05 +02:00
Till JS
82dbfe6ee7 feat(forms): M7c auto-sync to library + space_member
Completes M7: in addition to contacts (M7a) and event RSVPs (M7b),
form answers now also create library entries and space invitations.
feedback deliberately stays out of the UI — architecture mismatch.

- types.ts:
  - AutoSyncConfig extended with optional `libraryKind`
    ('book'|'movie'|'series'|'comic') and `spaceMemberRole`
    ('member'|'admin', default 'member').
  - The Form domain type gains `spaceId: string` (existed internally
    on LocalForm, now exposed through toForm). We need it because the
    space_member invite has to send the organizationId of the form
    owner's space.
- queries.ts toForm: map spaceId from LocalForm.spaceId, fallback ''.
- lib/auto-sync.ts:
  - buildLibraryEntryFromAnswers (pure): maps title / creators /
    year / review. Creator strings are split on , ; \n
    (multi-author mapping). year bounds-checked 1900..2100.
  - buildSpaceInviteFromAnswers (pure): finds the first form field
    with mapping='email', validates with a loose regex, returns an
    {email} payload.
  - dispatchTarget('library'): throws when libraryKind is missing;
    calls libraryEntriesStore.createEntry with kind+title+creators+
    year+review.
  - dispatchTarget('space_member'): throws when form.spaceId is
    missing; POSTs to /api/auth/organization/invite-member via
    authFetch with the role from cfg.spaceMemberRole. Returns
    invitation.id or the fallback `invite:<email>` (the better-auth
    response shape can vary by version).
  - dispatchTarget('feedback') now throws with a clear comment:
    architecture mismatch — feedback is a central public hub, not
    per-owner data. The UI filters the option out.
  - applyAutoSync passes `form` through to dispatchTarget (instead of
    only cfg/answers) so the space invite has the spaceId.
- lib/auto-sync.spec.ts: 9 more tests (4 library: title/creators/
  year-bounds/empty, 5 space: extract/malformed/non-mapped/no-mapping/
  non-string). Total forms tests now 70/70.
- SettingsPanel:
  - SUPPORTED_TARGETS extended to [contacts, events, library,
    space_member]. feedback does NOT appear — the type stays for
    legacy data, but the UI doesn't offer it.
  - Library block: kind picker (book/movie/series/comic) +
    LIBRARY_KEYS mapping (title, creators, year, review).
  - Space-member block: role picker (member/admin) + SPACE_KEYS
    mapping (only 'email'). Hint: "map exactly one field".
  - setMappingFor now preserves all target-specific fields
    (targetId, libraryKind, spaceMemberRole) so a mapping edit
    doesn't drop the rest.
- 25 new i18n keys × 5 locales (autoSync.targetLibrary/SpaceMember,
  libraryKindPicker/libraryKind.*, libraryKey.*, libraryHint,
  spaceMemberRolePicker/RoleMember/RoleAdmin/Hint/MappingHint).
  Parity: 6515 keys aligned.

Trade-offs:
- Library auto-sync creates one entry per answer. Deduplication (the
  same title already exists) stays a manual user workflow — auto-sync
  has no knowledge of the existing library.
- The space-invite flow is asynchronous: the submitter gets a Mana
  mail with an invite link, clicks → becomes a member. Non-Mana
  identities have to register first. The owner sees the pending state
  under /spaces.
- feedback: deliberately not implemented. Dumping form answers in as
  public feedback would be semantically wrong (the owner collects for
  themselves, not for publication).

Forms tests 70/70. svelte-check 0 errors. apps/api unchanged.
i18n parity: 6515 keys × 5 locales aligned.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 16:27:40 +02:00
Till JS
6d67db48d5 feat(forms): M9b conversation LLM-extract — free-text → typed Antwort
Killer-Feature für den Conversation-Mode (M9): User kann auf
choice/yes_no/rating-Feldern in eigenen Worten antworten ("ich nehme
den zweiten Vorschlag" / "klar bin ich dabei" / "so 4 von 5"), ein
LLM mappt das auf die strikte Option-ID / boolean / Integer.

- apps/api/modules/forms/public-routes.ts: neuer
  POST /api/v1/forms/public/:token/conversation/extract Endpoint.
  Rate-limited (30/min/token + 60/min/IP — Owner-Side-Costs für haiku
  trotz unauthenticated-Pfad). freeText hard-cap 1000 Zeichen.
  Token-resolve via unlistedSnapshots, fieldId muss im publish-Schema
  existieren. Dispatch:
    - text/email/number/date: passthrough (free-text IST die Antwort)
    - single_choice/multi_choice/yes_no/rating: mana-llm haiku-Call
      mit field-spezifischem System-Prompt + JSON-only-Output, Parser
      validiert Option-IDs gegen das Schema (Hallucination-Schutz).
  Response { extracted, confidence: 'high' | 'low', alternatives? }.
  confidence='low' wenn LLM unsicher → Client zeigt Warnung im
  Preview-Block, User kann manuell auswählen.

- ConversationFormView: collapsible <details>"Lieber in eigenen
  Worten antworten?"-Block unter den quick-reply-Buttons aller
  choice/yes_no/rating-Felder. User tippt Free-Text → "Verstehen"
  ruft endpoint → Preview-Karte mit der erkannten Antwort
  (teal=high-confidence, amber=low-confidence) → "Übernehmen" oder
  "Abbrechen". commitExtract löst setAnswerAndAdvance aus, läuft
  über den selben Pfad wie quick-reply-Klick.

Schema validation in the parser:
  - single_choice: optionId must be one of field.options, otherwise null
  - multi_choice: keeps only valid IDs; the array may end up empty
  - yes_no: only true/false/null allowed
  - rating: round(value), bounds-checked against 1..ratingScale
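
The parser rules above could look roughly like this; the `Field` union and `validateExtracted` are shapes reconstructed from the message, not the actual code:

```typescript
// Sketch of the schema-side validation of LLM output (hallucination
// guard): anything not provable against the published schema becomes
// null. Field shapes here are assumptions based on the commit message.
type Field =
  | { type: 'single_choice' | 'multi_choice'; options: { id: string }[] }
  | { type: 'yes_no' }
  | { type: 'rating'; ratingScale: number };

export function validateExtracted(field: Field, raw: unknown): unknown {
  switch (field.type) {
    case 'single_choice': {
      // option ID must exist in the schema, otherwise null
      const ids = new Set(field.options.map((o) => o.id));
      return typeof raw === 'string' && ids.has(raw) ? raw : null;
    }
    case 'multi_choice': {
      // keep only valid IDs; result may legitimately be empty
      const ids = new Set(field.options.map((o) => o.id));
      return Array.isArray(raw)
        ? raw.filter((id) => typeof id === 'string' && ids.has(id))
        : [];
    }
    case 'yes_no':
      return typeof raw === 'boolean' ? raw : null;
    case 'rating': {
      // round, then bounds-check 1..ratingScale
      if (typeof raw !== 'number' || Number.isNaN(raw)) return null;
      const v = Math.round(raw);
      return v >= 1 && v <= field.ratingScale ? v : null;
    }
  }
}
```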

LLM call:
  - model claude-haiku-4-5 (cheapest)
  - temperature 0 (deterministic)
  - maxTokens 200 (the JSON output is small)
  - markdown code-fence strip for robust JSON parsing
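
The fence-strip step might look like the following; `parseLlmJson` is a hypothetical helper name, and the fallback to null on parse failure is an assumption about how failures are treated:

```typescript
// Sketch of the markdown code-fence strip: the model occasionally
// wraps its JSON answer in triple-backtick fences (with or without a
// "json" language tag), so strip them before JSON.parse.
export function parseLlmJson(text: string): unknown {
  const stripped = text
    .trim()
    .replace(/^```(?:json)?\s*/i, '') // leading fence, optional "json" tag
    .replace(/\s*```$/, '');          // trailing fence
  try {
    return JSON.parse(stripped);
  } catch {
    // unparseable output: treated downstream as "no extraction"
    return null;
  }
}
```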

Trade-offs:
  - A public endpoint means ungated LLM spend for the form owner.
    Rate limits + the freeText cap mitigate spam, but 30 calls/min ×
    200 tokens is a moderate per-form cost. Owners should keep that
    in mind.
  - confidence='low' is surfaced to the user but does not break the
    flow — the user can apply or cancel.

Forms tests 61/61 unchanged (extract needs a live LLM for E2E;
deliberately no vitest mock). svelte-check 0 errors. apps/api builds
(1772 modules).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 15:31:25 +02:00
Till JS
c1ed45e574 feat(forms): M9 form-as-conversation — Typeform-Chat-Render
Public form variant rendered as a linear chat flow (M9, KF4 from the
plan). In the builder, the owner chooses between "classic" (all fields
at once, the M4b view) and "conversation" (one question at a time,
mobile-friendly).

LLM-assisted free-text → typed-answer extraction (e.g. "I'll take the
second suggestion" → option ID) remains M9b — the current
implementation uses typed widgets per field type for a deterministic
first cut.

- types.ts: FormSettings.experience: 'classic' | 'conversation'
  (default 'classic'). Travels encrypted inside the settings blob.
- data/unlisted/resolvers.ts: buildFormBlob whitelists experience
  into the public snapshot — just an enum, no PII.
- SharedFormView (M4b) remains the classic renderer.
- ConversationFormView (new, ~600 lines):
  - Linear: stepIndex steps through the visible subset from
    resolveVisibleFields (same branching resolver as classic).
  - Per step: question bubble + a field-type-specific widget:
    short_text/long_text/email/number → free-text input with
    Enter-to-submit, date → datepicker, yes_no → 2 quick-reply
    buttons, rating → scale buttons, single_choice → vertical
    quick-reply list, multi_choice → toggle chips + a "Next" button,
    section → an acknowledge step, consent → yes (/no optional).
  - Answer bubble after submit; "← Previous" drops the last Q/A pair
    and clears the answer so the branching resolver recomputes the
    next step.
  - Final step: submitter name + email (optional) + the existing
    POST /api/v1/forms/public/:token/submit.
  - Progress bar on top, "via Mana Forms" footer.
- routes/share/[token]/+page.svelte: for collection='forms',
  dispatches on the experience value — 'conversation' →
  ConversationFormView, otherwise SharedFormView.
- SettingsPanel: dropdown below the anonymous toggle, de/en/es/fr/it
  (15 new i18n keys × 5 locales = 6498 keys aligned).
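
The back-step invariant (drop the last Q/A pair, clear its answer, re-run the resolver) could be sketched as pure state transitions; `resolveVisibleFields` here is a stand-in for the real branching resolver, and all field/answer shapes are assumptions:

```typescript
// Sketch of the back-navigation behaviour: answers drive field
// visibility, and going back deletes the last answer so hide rules
// re-evaluate on the next render. Shapes are illustrative.
type Answers = Record<string, unknown>;
type Field = { id: string; hideIf?: (a: Answers) => boolean };

// Stand-in for the real branching resolver.
export function resolveVisibleFields(fields: Field[], answers: Answers): Field[] {
  return fields.filter((f) => !f.hideIf?.(answers));
}

export function goBack(fields: Field[], answers: Answers, answeredIds: string[]) {
  const last = answeredIds[answeredIds.length - 1];
  if (last === undefined) {
    // nothing answered yet: no-op
    return { answers, answeredIds, visible: resolveVisibleFields(fields, answers) };
  }
  const next: Answers = { ...answers };
  delete next[last]; // clear the answer so hide rules re-evaluate
  return {
    answers: next,
    answeredIds: answeredIds.slice(0, -1),
    visible: resolveVisibleFields(fields, next),
  };
}
```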

Trade-offs:
- Branching reacts per step: if the user goes back from a later
  question and changes the source of a hide rule, the path rendered
  in the meantime falls away — a new question may surface as "next".
  Documented as a linear "tree walk" rather than a WYSIWYG snapshot,
  which is usual for Typeform clones.
- Without LLM extraction (M9b) the quick replies aren't fluid; that
  is intentional: deterministic > magical for the first ship.

Forms tests 61/61. svelte-check 0 errors.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 15:22:56 +02:00