Commit graph

694 commits

Author SHA1 Message Date
Till JS
0a30b91200 chore(cutover): remove services/mana-auth/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-auth/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 560K in this repo, gone now. Active code lives in
`Code/mana/services/mana-auth/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:56 +02:00
Till JS
af3f21a179 chore(cutover): remove services/mana-credits/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-credits/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 156K in this repo, gone now. Active code lives in
`Code/mana/services/mana-credits/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:56 +02:00
Till JS
fcc36eadcb chore(cutover): remove services/mana-media/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-media/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 17M in this repo, gone now. Active code lives in
`Code/mana/services/mana-media/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:55 +02:00
Till JS
af8ef60fe4 chore(cutover): remove services/mana-notify/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-notify/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 43M in this repo, gone now. Active code lives in
`Code/mana/services/mana-notify/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:55 +02:00
Till JS
2b07f6ef89 chore(cutover): remove services/mana-llm/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-llm/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 90M in this repo, gone now. Active code lives in
`Code/mana/services/mana-llm/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:54 +02:00
Till JS
6103d4d2d9 chore(cutover): remove services/mana-tts/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-tts/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 148K in this repo, gone now. Active code lives in
`Code/mana/services/mana-tts/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:53 +02:00
Till JS
3c4a6d4f69 chore(cutover): remove services/mana-stt/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-stt/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 132K in this repo, gone now. Active code lives in
`Code/mana/services/mana-stt/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:53 +02:00
Till JS
879975b665 chore(cutover): remove services/mana-mail/ — moved to mana-platform
Live containers on the Mac Mini build out of `../mana/services/mana-mail/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08 — health endpoints, JWKS, login flow, Stripe-webhook all
reachable from the new build path. Removing the now-stale duplicate.

Was 188K in this repo, gone now. Active code lives in
`Code/mana/services/mana-mail/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:53:52 +02:00
Till JS
7b362066bb feat(auth): SSO + CORS origins for zitare.mana.how/zitare-api.mana.how
Adds the two zitare hostnames to PRODUCTION_TRUSTED_ORIGINS in
sso-origins.ts and to the mana-auth CORS_ORIGINS in
docker-compose.macmini.yml. Pre-condition for the first Zitare
live-cut on the Mac Mini — the running mana-auth container must
be rebuilt for the new TRUSTED_ORIGINS list to take effect (see
zitare/DEPLOY.md, Step 3).

sso-config.spec.ts asserts symmetry between sso-origins.ts and
the CORS_ORIGINS env in compose. Test runs 8/8 green after this
change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 18:07:39 +02:00
Till JS
8acf35eecf chore(dev): finish --watch → --hot sweep across remaining Bun services
Catches the service-level package.json files that the previous
sweep (4cca25ed0) missed — they don't appear in any dev:*:full
orchestrator but get invoked when someone runs `pnpm --filter
@mana/<service> dev` directly.

Touched: mana-geocoding, mana-mail, mana-subscriptions, mana-mcp,
news-ingester, mana-persona-runner, mana-research, mana-user,
plus apps/memoro (server + audio-server).

mana-ai stays on --watch on purpose: its entry uses an explicit
`Bun.serve({...})` call instead of `export default { port,
fetch }`, plus a SIGTERM/SIGINT handler that calls
`server.stop()`. --hot would replace the module without releasing
the old server reference and produce exactly the EADDRINUSE we're
trying to avoid. If mana-ai gets refactored to the standard
default-export shape, flip its dev script too.
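The two entry shapes can be sketched as follows. This is a minimal illustration of the default-export shape Bun's --hot expects, not the actual service code; the port and route are made up:

```typescript
// Sketch: with a `{ port, fetch }` default export, Bun owns the listener and
// keeps the port bound across hot reloads. An explicit Bun.serve() call
// (mana-ai's style) holds the server reference itself, so --hot would swap
// the module without releasing it — hence mana-ai stays on --watch.
type BunServeShape = {
  port: number;
  fetch: (req: Request) => Response | Promise<Response>;
};

const app: BunServeShape = {
  port: 3072, // illustrative port, not mana-ai's
  fetch(req: Request): Response {
    const url = new URL(req.url);
    if (url.pathname === "/health") return new Response("ok");
    return new Response("not found", { status: 404 });
  },
};

export default app;
```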
2026-05-08 14:33:27 +02:00
Till JS
4cca25ed03 chore(dev): switch all Bun services from --watch to --hot
Fans out the cards-server fix from 08f422340 to every other Bun
service in the monorepo. --hot keeps the port bound across HMR
reloads via globalThis[hmrSymbol]; --watch restarts the process
and races old/new Bun.serve for the port.

Touched: dev:auth, dev:credits, dev:events, dev:analytics,
dev:memoro:server, dev:memoro:audio-server, dev:uload:server in
the root package.json plus the matching `dev` script in each
service's own package.json. All six services already export the
`{ port, fetch }` default that Bun's --hot expects.

Smoke-tested: pnpm dev:cardecky:full boots clean, then touching
auth/credits/cards-server entry files all hot-reload without
dropping their port.

(memoro/apps/audio-server doesn't have a `dev: bun --watch ...`
script in its own package.json, so only the root entry got the
swap there.)
2026-05-08 14:24:24 +02:00
Till JS
08f4223404 fix(dev): cards-server uses --hot + setup-databases creates mana_notify/credits
cards-server: switch from `bun run --watch` to `bun run --hot`.
--watch restarts the whole process on file change, racing the
old + new Bun.serve calls for the port (the EADDRINUSE you see
right after `listening on :3072`). --hot does in-process HMR via
the globalThis[hmrSymbol] pattern; the port stays bound across
reloads. apps/api already uses --hot for the same reason.

scripts/setup-databases.sh:
- create_db_if_not_exists "mana_notify" + "mana_credits" so a
  fresh-machine `pnpm setup:db` no longer leaves these two DBs
  off (mana-notify was crashing on boot with SASL fallback,
  mana-credits was less obvious because its drizzle config
  defaults to mana_platform — but the runtime config can point
  at mana_credits, so safer to have the DB exist).
- Fix the cards branch: was pointing at the non-existent
  @mana/cards-database package; now points at @mana/cards-server
  where the actual schema lives.

Verified: drop+re-create flow + cardecky:full boot + touch-trigger
hot-reload all clean.
2026-05-08 14:10:55 +02:00
Till JS
61f2772789 chore(brand): rename Cards → Cardecky (display, infra, license-IDs)
- App display name → Cardecky in mana-apps.ts, MODULE_REGISTRY, all docs
- Domains: cardecky.mana.how (app), cardecky-api.mana.how (marketplace
  API), cardecky.com (marketing landing — cloudflared route + nginx block
  prepared, DNS still has to be set)
- 301 redirect cards.mana.how → cardecky.mana.how (nginx + cloudflared)
  for old bookmarks; can be dropped again after 6–12 months
- SPDX license IDs Cards-Personal-Use/Pro-Only-1.0 → Cardecky-* via a
  Drizzle 0001 migration (DROP CHECK → UPDATE rows → SET DEFAULT → ADD
  CHECK), including the _journal and 0001_snapshot updates
- In-mana cards module: a subtle banner pointing to the standalone app
  (GUIDELINES §12), dismissible once via localStorage
- Docker CORS lists, sso-origins.ts, Prometheus target updated

Technical IDs deliberately stay as-is: appId 'cards', schema
mana_platform.cards.*, directory apps/cards/, package @cards/web,
services/cards-server, env vars CARDS_*, UMAMI_WEBSITE_ID_CARDS*, class
CardsEvents — Mana convention (brand ≠ technical identifier).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 13:49:47 +02:00
Till JS
c05022611e feat(cards): Phase η.1 — Reports + admin moderation actions
Server (cards-server):
- ModerationService: createReport, listOpen, resolveReport,
  takedownDeck, banAuthor, setVerifiedMana, restoreDeck. Takedown
  also auto-closes any sibling open reports against the deck and
  closes any open PRs (so contributors see clean state). Ban
  cascades to all of the author's decks.
- routes/moderation.ts: POST /v1/reports (any authed user),
  GET /v1/admin/reports (admin only), POST /v1/admin/reports/:id/
  resolve, POST /v1/admin/decks/:slug/{takedown,restore}, POST
  /v1/admin/authors/:slug/verify. Admin gate is `role === 'admin'`
  for now — verified-mana-only mods land later.
- Notify hooks: takedown emails the deck owner, mana-verify status
  change emails the author.

Frontend (cards-web):
- <ReportButton> (icon or inline variant) with category picker
  + optional explanation. On /d/<slug> as a discreet 🚩 next to
  the published-date stamp; in <CardDiscussions> per non-own
  comment.
- /d/<slug> shows a red "wurde von der Moderation entfernt" ("removed
  by moderation") banner when isTakedown is true.
- /admin/reports inbox: lists open reports with category badges and
  Abweisen / Deck entfernen / Author bannen (dismiss / remove deck /
  ban author) actions. Renders a forbidden state if the current user
  isn't admin.
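The admin gate described above can be reduced to a plain function. A hypothetical sketch (names and error shape are illustrative; the real check lives in routes/moderation.ts):

```typescript
// Sketch: the `role === 'admin'` gate, framework-free. Any route handler
// that mutates moderation state would call this before doing work.
type User = { id: string; role: string };

class ForbiddenError extends Error {
  readonly status = 403;
}

function requireAdmin(user: User | undefined): User {
  if (!user || user.role !== "admin") {
    throw new ForbiddenError("admin role required");
  }
  return user;
}
```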
2026-05-07 23:24:23 +02:00
Till JS
5dbc9ace2d feat(cards): Phase ζ.1 — Paid decks via mana-credits
Server (cards-server):
- lib/credits.ts: thin internal-API client for mana-credits
  (reserve / commit / refund-reservation / grant). Service-to-
  service via X-Service-Key. Throws InsufficientCreditsError
  separately so the buy flow can branch on UX.
- services/purchases.ts: 4-step purchase pipeline: reserve →
  insert deck_purchases row → commit reservation → grant
  author share + insert author_payouts. Idempotent on
  (buyer, deck) so a refresh-spam-click can't double-charge.
  Verified-mana authors get the 90/10 split, others 80/20
  (already in config). Refunds intentionally out of scope —
  see MARKETPLACE_PLAN §13a.
- routes/purchases.ts: POST /v1/decks/:slug/purchase,
  GET /v1/me/purchases, GET /v1/authors/me/payouts.
- decks.bySlug now returns hasPurchased (null when anonymous,
  bool when authed) so the deck-detail page can pick the right
  CTA.
- subscriptions.subscribe now blocks paid decks unless the
  caller has a non-refunded purchase row (owner exempt for
  testing).
- Notify: the author gets a "Verkauf" (sale) email at grant time, with
  a deterministic externalId for dedup.

Frontend (cards-web):
- /d/<slug> shows "Kaufen für N 💎" (buy for N credits) instead of
  "Abonnieren" (subscribe) when paid + not yet bought; flips to the
  subscribe path once purchased.
- /me/purchases page listing buyer history + (when present)
  author-payout history. Linked from the top nav.
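The author/mana split can be sketched as a pure function. The basis-point defaults come from the service config mentioned elsewhere in this log; the function name is hypothetical:

```typescript
// Sketch of the payout split: verified-mana authors keep 90/10, others 80/20.
// The author share is floored so the platform absorbs rounding remainders
// and the two shares always sum to the price.
function splitPurchase(
  priceCredits: number,
  verifiedMana: boolean,
): { authorShare: number; platformShare: number } {
  const authorBps = verifiedMana ? 9000 : 8000; // basis points
  const authorShare = Math.floor((priceCredits * authorBps) / 10_000);
  return { authorShare, platformShare: priceCredits - authorShare };
}
```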
2026-05-07 23:10:18 +02:00
Till JS
4fcc15737f feat(auth): add memoro-app.mana.how to SSO trusted origins
Memoro's SvelteKit SPA at memoro-app.mana.how is a separate deploy
under mana e.V. that needs to use the central mana-auth (login,
session, JWT). Without this entry Better-Auth rejects its preflight
silently (no Access-Control-Allow-Origin header) and the SPA can't
even reach POST /api/v1/auth/login.

Updates both SSOTs per the rule in CLAUDE.md / mana-auth/CLAUDE.md:
  1. PRODUCTION_TRUSTED_ORIGINS in services/mana-auth/src/auth/sso-origins.ts
  2. CORS_ORIGINS for mana-auth in docker-compose.macmini.yml

sso-config.spec.ts will pick up the consistency between the two.
2026-05-07 23:07:22 +02:00
Till JS
46fefd5cc4 feat(cards): Phase ε.4 — Card list + discussions on /d/<slug>
- DiscussionService.countsForDeck: bulk count (visible) comments per
  card-content-hash for one deck. Mounted at GET
  /v1/decks/:slug/discussion-counts so the public deck page can
  render comment badges without N+1 fetches.
- <DeckCardList> on /d/<slug>: lists the latest version's cards,
  renders a one-line preview + "💬 N" badge, and expands the
  inline <CardDiscussions> on click. Anonymous visitors see counts;
  posting requires auth (CardDiscussions already gates that).
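The bulk count can be sketched as a single pass over the visible comments. Hypothetical in-memory shapes; the real countsForDeck does this as one SQL aggregate:

```typescript
// Sketch: group visible comments by card content hash in one pass, so the
// deck page renders "💬 N" badges without one query per card.
type Comment = { cardContentHash: string; hidden: boolean };

function countsForDeck(comments: Comment[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const c of comments) {
    if (c.hidden) continue; // hidden comments don't count
    counts[c.cardContentHash] = (counts[c.cardContentHash] ?? 0) + 1;
  }
  return counts;
}
```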
2026-05-07 22:46:47 +02:00
Till JS
a8ddb6dea4 feat(cards): Phase ε.3 — PR notifications + Card-Discussions UI
- mana-notify integration in cards-server: PR-create notifies the
  deck owner, merge/reject notifies the PR author. Fire-and-forget
  via lib/notify.ts so a notify-service outage never rolls back a
  domain action. ExternalIDs are deterministic (cards.pr.{event}.{id})
  so retries dedupe.
- <CardDiscussions> on the learn page: collapsed by default, opens
  via "💬 Diskussion" alongside the "✏️ Verbessern" (suggest an edit)
  trigger. Resets whenever the current card changes so the panel
  doesn't bleed between flashcards.
- MARKETPLACE_PLAN.md §13a — known limitations: PR-merge is
  stale-blind (no rebase yet), diff-preview flat, threading 1-level.
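The fire-and-forget pattern and the deterministic externalId can be sketched together. Hypothetical wrapper; the real code is lib/notify.ts:

```typescript
// Sketch: deterministic external IDs let retried sends dedupe server-side,
// and swallowing errors means a notify-service outage never rolls back the
// domain action that triggered the notification.
function prExternalId(event: "create" | "merge" | "reject", prId: string): string {
  return `cards.pr.${event}.${prId}`; // cards.pr.{event}.{id}
}

async function notifySafely(send: () => Promise<void>): Promise<boolean> {
  try {
    await send();
    return true;
  } catch {
    // logged-and-forgotten in the real service; never rethrown
    return false;
  }
}
```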
2026-05-07 22:24:45 +02:00
Till JS
61fc16e8e9 feat(cards): Phase ε — Pull-Requests + Card-Discussions
Server (cards-server):
- PullRequestService: create / list / get / merge / close / reject.
  Merge applies the PR's {add, modify, remove} diff to the latest
  version's cards in a single transaction, writes a new
  deck_version + deck_cards, bumps latest_version_id, and stamps
  the PR with mergedIntoVersionId.
- DiscussionService: post / listForCard / hide. Threads are keyed
  by card_content_hash so they survive version bumps.
- Routes mounted under /v1: POST/GET /decks/:slug/pull-requests,
  GET /pull-requests/:id, POST /pull-requests/:id/{merge,close,reject},
  GET/POST /cards/:contentHash/discussions, POST /discussions/:id/hide.

Frontend (cards-web):
- cardsApi.pullRequests + cardsApi.discussions client surface.
- <PullRequestsSection> on /d/:slug — lists PRs with diff preview;
  owner sees Merge/Reject/Close buttons.
- <SuggestEditModal> + "✏️ Verbessern" button on /learn/:deckId for
  cards from a subscribed deck — submits a one-card modify (or
  remove) PR using the card's serverContentHash as the previous
  hash.
- Deck/Card DTOs gain subscribedFromSlug + serverContentHash so the
  learn page can decide whether to show the suggest-edit affordance.
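The merge step can be sketched as a pure transformation over the latest version's cards. Hypothetical card fields; the real merge additionally writes the new deck_version + deck_cards rows inside one transaction:

```typescript
// Sketch: apply a PR's {add, modify, remove} diff to a card list. Cards are
// identified by content hash; `modify` replaces the card whose hash matches
// previousHash, mirroring how <SuggestEditModal> submits serverContentHash.
type Card = { contentHash: string; front: string; back: string };
type DeckDiff = {
  add: Card[];
  modify: { previousHash: string; card: Card }[];
  remove: string[]; // content hashes
};

function applyDiff(cards: Card[], diff: DeckDiff): Card[] {
  const removed = new Set(diff.remove);
  const modified = new Map(diff.modify.map((m) => [m.previousHash, m.card]));
  const kept = cards
    .filter((c) => !removed.has(c.contentHash))
    .map((c) => modified.get(c.contentHash) ?? c);
  return [...kept, ...diff.add];
}
```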
2026-05-07 21:56:20 +02:00
Till JS
86a01426e8 feat(cards-server): Phase δ.1 — subscriptions + version reads + smart-merge diff
Server-side plumbing for Phase δ. Frontend hookup follows in δ.2.

  - services/subscriptions.ts: subscribe/unsubscribe (idempotent
    upsert on (user, deck), stamps the latest_version_id at
    subscribe-time so the client knows what it pulled). listForUser
    returns each sub with `updateAvailable: currentVersion !== latest`
    so the client can render an update indicator without a second
    round-trip. Refuses paid decks with 403 — that path comes back
    in Phase ζ once the credits Marketplace lands.
  - versionWithCards: deterministic ord-ordered card payload for a
    specific version. Read-public so anonymous browsers can preview
    a deck's content.
  - diffSince: smart-merge payload between any two versions. Splits
    the latest cards into added/changed/unchanged + lists removed
    by content_hash. The 'changed' bucket is heuristic (ord-position
    pair where one was removed and one was added) — solid enough
    until Phase ε's pull-request pipeline gives us real card
    lineage.
  - routes/subscriptions.ts mounts: GET /v1/me/subscriptions,
    POST/DELETE /v1/decks/:slug/subscribe (auth required),
    GET /v1/decks/:slug/versions/:semver (public),
    GET /v1/decks/:slug/diff?from=<semver> (public).
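The set arithmetic behind diffSince can be sketched over content hashes. The heuristic 'changed' pairing is omitted here; a minimal illustration, not the service code:

```typescript
// Sketch: split two versions' card hashes into added / removed / unchanged.
// The real diffSince additionally pairs one removed + one added card at the
// same ord position into a 'changed' bucket.
function diffSince(fromHashes: string[], toHashes: string[]) {
  const from = new Set(fromHashes);
  const to = new Set(toHashes);
  return {
    added: toHashes.filter((h) => !from.has(h)),
    removed: fromHashes.filter((h) => !to.has(h)),
    unchanged: toHashes.filter((h) => from.has(h)),
  };
}
```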

cards-web layout fix:
  - Marketplace surface (/explore, /u/, /d/) was previously gated
    behind the AuthGate — anonymous browsers got pushed to /login
    via client-side navigate. PUBLIC_PATHS extended so those routes
    SSR + render unauthed.

Validated: tsc clean on cards-server, svelte-check 0/0 on cards-web.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 19:33:58 +02:00
Till JS
dcd16067b5 feat(cards-server): Phase γ — public reads + browse + search + engagement
Marketplace discovery surface lights up. Anonymous browsers can
explore + search; signed-in users get the same surface plus star/
follow mutations.

  - middleware/optional-auth.ts: opportunistic JWT — sets c.get('user')
    if a token validates, otherwise leaves it undefined. Read paths
    use this; mutating routes call requireUser() inline.
  - services/explore.ts: browse() with q (ilike on title/description),
    tag, language, author-slug, sort (recent/popular/trending), pagination.
    explore() composes featured + trending for the landing.
    tagTree()/curatedTagsOnly() round it out. Subqueries for star/
    subscriber counts avoid N+1.
  - services/engagement.ts: star/unstar deck, follow/unfollow author.
    Idempotent via ON CONFLICT DO NOTHING. Self-follow rejected.
  - routes/explore.ts mounts /v1/explore, /v1/decks (browse list),
    /v1/tags. routes/engagement.ts mounts /v1/decks/:slug/star
    (POST/DELETE) + /v1/authors/:slug/follow (POST/DELETE).
  - index.ts replaces the previous strict-jwt-on-everything middleware
    with optionalAuth on all of /v1, then individual routers gate
    their write paths via local requireUser(). Hono context type
    relaxes from `user: AuthUser` to `user?: AuthUser` accordingly.
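Reduced to plain functions, the pattern looks like this. A framework-free sketch, not the actual Hono middleware:

```typescript
// Sketch: opportunistic auth never throws — a bad or missing token just
// leaves `user` undefined, so read paths serve anonymous browsers. Mutating
// paths then upgrade with requireUser().
type AuthUser = { id: string };

function optionalAuth(
  token: string | undefined,
  verify: (token: string) => AuthUser, // throws on invalid token
): AuthUser | undefined {
  if (!token) return undefined;
  try {
    return verify(token);
  } catch {
    return undefined; // invalid token reads as anonymous
  }
}

function requireUser(user: AuthUser | undefined): AuthUser {
  if (!user) throw new Error("401: authentication required");
  return user;
}
```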

Validated: tsc --noEmit clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 17:01:32 +02:00
Till JS
be155ca737 fix(cards-server): error classes extend Hono HTTPException
Shared-hono's serviceErrorHandler only translates HTTPException
instances; anything else degrades to 500. Our custom Error subclasses
were silently bypassing the translation layer, so a missing JWT came
back as `500 Internal server error` instead of the expected `401
Unauthorized`. Confirmed in prod logs after the Phase-β deploy.

Switching the error hierarchy to extend HTTPException directly. The
JSON body now carries the right status code + the existing `cause`
object surfaces our `code` discriminator + zod-style `details` for
BadRequest. No call-site changes needed.
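The failure mode can be reproduced with a minimal stand-in for the handler. This is not Hono's actual HTTPException, just an analogue with the same translate-or-500 behavior:

```typescript
// Sketch: an error handler that only understands one exception type.
// Subclasses of the known base keep their status; anything else degrades to
// 500 — which is exactly why the old plain-Error hierarchy came back as 500s.
class HTTPException extends Error {
  constructor(readonly status: number, message: string) {
    super(message);
  }
}

class UnauthorizedError extends HTTPException {
  constructor() {
    super(401, "Unauthorized");
  }
}

function errorToStatus(err: unknown): number {
  return err instanceof HTTPException ? err.status : 500;
}
```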

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:40:20 +02:00
Till JS
044d948155 feat(cards-server): Phase β — author profiles + deck init/publish
First user-facing surface on cards-server. Three endpoint groups:

Authors (/v1/authors):
  - POST /me — upsert author profile (slug, displayName, bio,
    avatarUrl, pseudonym). Slug validated for length, charset, and
    against a small reserved-words list (admin, api, me, ...).
  - GET /me — read own profile (returns null if not yet an author).
  - GET /:slug — public profile (omits banned-reason, etc.)

Decks (/v1/decks):
  - POST / — claim a slug + create the metadata-only deck row.
    License defaults to Cards-Personal-Use-1.0; paid decks
    (priceCredits > 0) must use Cards-Pro-Only-1.0 (CHECK constraint
    + service-side guard).
  - GET /:slug — deck + latestVersion.
  - POST /:slug/publish — version semver enforced strictly increasing,
    AI-mod first-pass via mana-llm (block → 403; flag → publish + log
    for human review; pass → publish silently). Per-card and per-
    version SHA-256 content hashes computed; cards persisted; deck's
    latest_version_id flipped atomically in a single transaction.
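The strictly-increasing semver check can be sketched as follows. A hypothetical helper with no prerelease handling, not the service's validator:

```typescript
// Sketch: reject a publish unless the new version is strictly greater than
// the latest one, comparing major.minor.patch numerically (not as strings,
// which would sort "1.10.0" below "1.9.0").
function isStrictlyGreater(next: string, latest: string): boolean {
  const a = next.split(".").map(Number);
  const b = latest.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    const av = a[i] ?? 0;
    const bv = b[i] ?? 0;
    if (av !== bv) return av > bv;
  }
  return false; // equal versions are not an increase
}
```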

Helpers:
  - lib/slug.ts — slugify (best-effort) + validateSlug (strict).
  - lib/hash.ts — canonical SHA-256 over (type, fields) for cards
    and (sorted, ord-stable) for versions.
  - lib/ai-moderation.ts — mana-llm /v1/chat/completions wrapper
    with system prompt that forces JSON output. Fail-open: if
    mana-llm is down or returns malformed JSON, the verdict is
    'flag' so a human reviewer catches it. Better slow than silent.
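The fail-open parsing can be sketched like this. The reply shape is an assumption; only the fail-toward-'flag' behavior is taken from the commit:

```typescript
// Sketch: any malformed or unexpected moderation reply collapses to 'flag',
// so an mana-llm outage routes content to human review instead of silently
// passing (or blocking) it.
type Verdict = "pass" | "flag" | "block";

function parseVerdict(raw: string): Verdict {
  try {
    const parsed: unknown = JSON.parse(raw);
    const v = (parsed as { verdict?: unknown }).verdict;
    if (v === "pass" || v === "flag" || v === "block") return v;
  } catch {
    // malformed JSON falls through
  }
  return "flag"; // better slow than silent
}
```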

Index-mounting of /v1/authors and /v1/decks is gated behind jwtAuth.
Anonymous public reads (Phase γ optionalAuth middleware) come later.

Validated: tsc --noEmit clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:36:34 +02:00
Till JS
71ec5e7123 feat(cards-server): Phase α.4 — Dockerfile + compose + tunnel route
Wires cards-server into the Mac-mini stack so we can deploy alongside
the rest of the Mana services.

  - Dockerfile mirrors the mana-credits 2-stage pattern (node+pnpm
    installer → bun runtime), exposes :3072, includes a /health
    healthcheck.
  - docker-compose.macmini.yml: new cards-server block right after
    mana-credits — depends on postgres + mana-auth, 128m mem, all the
    env knobs from the Phase-α config (author payout BPS, community-
    verified thresholds, sibling-service URLs).
  - cloudflared-config.yml: cards-api.mana.how → :3072. Distinct from
    cards.mana.how (the user-facing PWA) so the API surface is clearly
    separated.
  - sso-origins.ts: cards-api.mana.how added to PRODUCTION_TRUSTED_ORIGINS.
  - mana-auth CORS_ORIGINS in compose: cards-api.mana.how added.
    Restored whopxl.mana.how that had drifted out — sso-config.spec.ts
    had been flagging it but the missing entry surfaced when I added
    cards-api. spec is back to 8/8 green.

Deploy plan (next steps, not in this commit):
  1. ./scripts/mac-mini/build-app.sh cards-server
  2. docker exec mana-app-cards-server bun run db:push  (creates the
     `cards` schema + 16 tables in mana_platform)
  3. ./scripts/mac-mini/sync-tunnel-config.sh
  4. Smoke: curl https://cards-api.mana.how/health → 200

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:22:48 +02:00
Till JS
a7b62ea8ae feat(cards-server): Phase α — service skeleton + 16-table schema
Lays the foundation for the Cards marketplace + community backend per
apps/cards/docs/MARKETPLACE_PLAN.md. Phase α scope: skeleton, schema,
JWT auth wiring, health endpoint. Routes follow in Phase β.

Stack: Hono + Bun + Drizzle + Postgres + jose-JWKS — mirrors the
mana-credits service template.

Schema: pgSchema('cards') inside mana_platform, 16 tables across six
groups in src/db/schema/:
  - authors.ts: authors, author_follows
  - decks.ts: decks, deck_versions, deck_cards (with cards_card_type
    enum mirroring @mana/cards-core; per-card content_hash for
    smart-merge; CHECK constraint that paid decks must use
    Cards-Pro-Only-1.0 license)
  - tags.ts: tag_definitions (hierarchical), deck_tags
  - engagement.ts: deck_stars, deck_subscriptions, deck_forks
  - discussions.ts: deck_pull_requests (with diff jsonb +
    pr_status enum), card_discussions (bound to card_content_hash
    so threads survive version bumps)
  - moderation.ts: deck_reports (with category/status enums),
    ai_moderation_log
  - credits.ts: deck_purchases (snapshot price + author/mana split),
    author_payouts

Phase λ's co_learn_sessions intentionally not yet here.

Service plumbing:
  - src/index.ts: Hono entry on :3072, /health unauth, /v1 stub
  - src/config.ts: env loader with author-payout BPS knobs
    (defaults 80/20 standard, 90/10 verified-mana) and
    community-verified thresholds
  - src/middleware/jwt-auth.ts + service-auth.ts: JWKS validation
    + X-Service-Key check (mirrors mana-credits)
  - src/lib/errors.ts: HttpError + named subclasses
  - drizzle.config.ts pointing at mana_platform with schemaFilter:cards
  - drizzle/0000_*.sql committed so other devs / prod migration path
    has a reproducible starting point

Validated: tsc --noEmit clean, drizzle-kit generate produces
233-line SQL with all 16 tables + 5 enums + indexes.

Next (Phase α.4): Dockerfile + docker-compose + cloudflare tunnel
route cards-api.mana.how → :3072.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 16:01:08 +02:00
Till JS
ceed8ccd64 feat(mana-sync): per-app billing exemption — Cards bypasses sync gate
mana-sync's billing middleware short-circuited every push/pull with
402 for users without a sync subscription. Cards promises free Sync
in its Phase-1 GUIDELINES, so it shouldn't gate its own users on a
mana-credits subscription it never sells.

Implementation:
  • billing.NewChecker now takes an exemptApps slice. The middleware
    extracts {appId} from the URL path and short-circuits before the
    user lookup if the app is in the set.
  • Configurable via the BILLING_EXEMPT_APPS env var (comma-separated).
  • Set BILLING_EXEMPT_APPS=cards on the mana-sync container so the
    cards.mana.how Sync loop stops 402-ing.
  • Tests cover the exemption + the empty/whitespace edge cases. All
    other apps keep the original behaviour (fail-open if mana-credits
    is unreachable, 402 if it explicitly says inactive).
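The env parsing with its empty/whitespace edge cases can be sketched like this. A TypeScript sketch for illustration; the actual middleware is Go (billing.NewChecker):

```typescript
// Sketch: parse BILLING_EXEMPT_APPS (comma-separated) into a lookup set,
// tolerating empty entries and stray whitespace. The middleware checks the
// set before any user lookup, short-circuiting exempt apps entirely.
function parseExemptApps(env: string | undefined): Set<string> {
  return new Set(
    (env ?? "")
      .split(",")
      .map((s) => s.trim())
      .filter((s) => s.length > 0),
  );
}

function isExempt(appId: string, exempt: Set<string>): boolean {
  return exempt.has(appId);
}
```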

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 15:01:54 +02:00
Till JS
f422fd6779 fix(shared-error-tracking): point main at src/, strip dashes from Glitchtip DSN
Two real-world fixes from wiring mana-auth to Glitchtip:

1. The compiled dist/ folder was excluded from Docker builds via
   .dockerignore's '**/dist' rule, so any container that pnpm-installed
   the package found node_modules/@mana/shared-error-tracking but no
   loadable entry point ('Cannot find module' at startup). Match the
   pattern shared-hono uses — point main + types + exports straight at
   src/*.ts. Bun runs TS natively and the type-only consumers don't
   care.

2. Glitchtip projects expose UUID-format public keys (`556fbd2e-a720-…`)
   in their generated DSNs. @sentry/node v9 tightened its DSN regex to
   alphanumeric-only, so it silently rejects the DSN with "Invalid
   Sentry Dsn" and never sends events. Strip the dashes from the
   user/key portion before handing it to Sentry — the Glitchtip ingest
   endpoint accepts both forms over the wire, so no server change.
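The dash-stripping can be sketched as a small normalizer. A hypothetical helper that only touches the key portion before the '@', leaving host and project path intact:

```typescript
// Sketch: strip dashes from the DSN's public-key segment so @sentry/node v9's
// alphanumeric-only DSN regex accepts a Glitchtip UUID key.
function normalizeGlitchtipDsn(dsn: string): string {
  return dsn.replace(
    /^([a-z]+:\/\/)([0-9a-fA-F-]+)@/,
    (_match, scheme: string, key: string) => `${scheme}${key.replace(/-/g, "")}@`,
  );
}
```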

Plus the missing Dockerfile COPY lines for shared-error-tracking and
eslint-config (root package.json devDeps reference the latter, which
breaks pnpm-filter installs that don't include it in the build context).

Verified end-to-end: 4 issues now in Glitchtip from mana-auth
(2 manual probes + 1 captureException + 1 401 from a
real /api/v1/me/data request without auth).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:34:54 +02:00
Till JS
1bac7cf38a fix(mana-auth): COPY packages/shared-error-tracking in Dockerfile
Mirror the same fix as cards-core (dd2e60954): the new
shared-error-tracking workspace dep needs an explicit COPY line
in the installer stage, otherwise pnpm-install-with-filter can't
find the package and the runtime container is missing it under
node_modules/@mana/.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 02:03:25 +02:00
Till JS
8b71d290f8 feat(mana-auth): wire Glitchtip/Sentry error tracking via shared-error-tracking
First server-side error-tracking integration. Pattern mirrors the
client-side one in apps/mana/apps/web/src/hooks.client.ts:

- pull @mana/shared-error-tracking into mana-auth's deps (workspace pkg
  with @sentry/node + a no-op fallback when GLITCHTIP_DSN is unset)
- call initErrorTracking() at the top of services/mana-auth/src/index.ts
  before the rest of the module body executes — this lets Sentry hook
  uncaughtException / unhandledRejection before any Hono handlers register
- wrap app.onError so non-HTTPException throws also flow into
  captureException with path/method/query context. HTTPExceptions are
  intentional 4xx/422 and stay out of the issue list (otherwise every
  401 from a stale session would page somebody at 3am)
- compose: pass GLITCHTIP_DSN_MANA_PLATFORM through as GLITCHTIP_DSN per
  service so each container's events get tagged with serverName='mana-auth'

DSN itself isn't in the repo; lives in .env.macmini on the Mac Mini and
is referenced from the Glitchtip credentials doc.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:55:07 +02:00
Till JS
0a544ac410 feat(cards): Phase-1 Spinoff — standalone cards.mana.how + cards-core extraction
Builds out the Cards spinoff end-to-end so the standalone app at
cards.mana.how shares its data layer with the in-mana cards module
through a single pure-utility package.

Why a spinoff and not just a deeper module: per the GUIDELINES, Cards
gets its own brand + URL while reusing mana-auth, mana-sync, and the
mana-credits/billing stack. The in-mana module under mana.how/cards
stays untouched as the integrated experience.

Phase 0 — mana-modul foundation
  • New tables cardReviews + cardStudyBlocks (Dexie v61) + plaintext
    classification in the crypto registry.
  • LocalCard learns a {type, fields} shape; legacy front/back columns
    kept as a back-compat mirror so older builds keep rendering.
  • FSRS v6 scheduler + Cloze parser + Markdown render pipeline.
  • UI in apps/mana/.../routes/(app)/cards/ gets a learn session
    (learn/[deckId]), 4-type card editor, due-counter, markdown lists.
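The cloze parser mentioned above can be sketched as follows. Assumption: Anki-style {{c1::answer}} markers; the actual @mana/cards-core syntax may differ:

```typescript
// Hypothetical cloze extractor: pull every {{cN::answer}} marker out of a
// card's text, keeping the cloze index so the scheduler can hide one
// deletion at a time.
function extractClozes(text: string): { index: number; answer: string }[] {
  const out: { index: number; answer: string }[] = [];
  const re = /\{\{c(\d+)::(.+?)\}\}/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) {
    out.push({ index: Number(m[1]), answer: m[2] });
  }
  return out;
}
```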

Phase 1 — standalone (apps/cards/apps/web)
  • SvelteKit 2 + Svelte 5 + Tailwind 4, port 5180.
  • Own Dexie 'cards' DB with a slim 5-table schema.
  • Own sync engine: pending-changes hooks, 1 s push / 5 s pull against
    POST /sync/cards, server-apply with suppression to avoid ping-pong.
  • Auth-Gate via @mana/shared-auth-ui (LoginPage / RegisterPage).
  • Encryption hooks at every write/read/apply path, currently no-op
    stubs — flipping to real vault-backed AES-GCM is a single-file
    change in src/lib/data/crypto.ts.
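The single-file flip can be sketched via the hook interface. Hypothetical shape; the real hooks live in src/lib/data/crypto.ts:

```typescript
// Sketch: every write/read/apply path calls through this interface, so
// replacing the no-op with vault-backed AES-GCM touches only one module.
interface CardsCrypto {
  encrypt(plaintext: string): Promise<string>;
  decrypt(ciphertext: string): Promise<string>;
}

const noopCrypto: CardsCrypto = {
  async encrypt(plaintext) {
    return plaintext; // Phase 2: real AES-GCM via the vault key
  },
  async decrypt(ciphertext) {
    return ciphertext;
  },
};
```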

Shared package — @mana/cards-core
  • Pulls types, cloze, card-reviews, FSRS wrapper, and Markdown
    renderer out of the mana module so both frontends import from one
    source. mana-modul keeps thin re-export shims so consumers don't
    need to change imports.
  • 19 vitest tests carried over from the mana module.

Server-side wiring
  • cards.mana.how added to mana-auth PRODUCTION_TRUSTED_ORIGINS and
    its CORS_ORIGINS env (sso-config.spec.ts stays green).
  • New cards-web container in docker-compose.macmini.yml (mirrors
    manavoxel-web pattern, 128m, depends on mana-auth healthy).
  • cloudflared-config.yml repoints cards.mana.how from :5000 (the
    unified mana-web container) to :5180. mana.how/cards is unchanged.

Cleanup
  • Removed an unrelated 2026-03/04 NestJS+Supabase+Expo experiment
    that was lingering under apps/cards/ (apps/landing, supabase/,
    .github/workflows, MANA_CORE_*.md, etc.). It predated this plan
    and would have confused future readers.

Validation
  • svelte-check on mana-web: 0 errors over 7697 files
  • svelte-check on cards-web: 0 errors over 3481 files
  • vitest on cards-core: 19/19 pass
  • pnpm check:crypto: 214 tables classified
  • bun test sso-config.spec.ts: 8/8 pass
  • vite build on cards-web: green

Not done in this commit (deliberate)
  • Real encryption (vault roundtrip) — Phase 2.
  • WebSocket-driven pull (5 s polling for now).
  • Mobile/landing standalone surfaces — Phase 2/3.
  • The actual production cutover on the Mac mini (build, deploy,
    cloudflared sync) — config is staged, deploy is a user action.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:20:43 +02:00
Till JS
1b579ab0b0 chore(mana-events): move from port 3065 to 3115 — collision with platform mana-media
The platform repo (Code/mana/) reserves 3065 for mana-media; to avoid a
double assignment, mana-events (public RSVP / event sharing) moves to
3115. The new 311x port block is unused and sits structurally next to
mana-mail (3042) and the other 30xx service ports.

Touches every hard-coded 3065 default — server config, webapp config,
SSR routes (rsvp/[token], status), Playwright webserver setup, e2e spec.
PUBLIC_MANA_EVENTS_URL in .env.development carries both variables along.

PORT_SCHEMA.md now records the switch with date + rationale — future me
shouldn't have to guess why this port breaks out of the 30xx range.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:38:46 +02:00
Till JS
546b94d472 feat(personas): move admin + internal endpoints from mana-auth to apps/api
Closes the platform/product-split gap: since the Forms M10d commit,
HEAD's apps/api/src/index.ts has referenced personasInternalRoutes /
personasAdminRoutes — but the implementation wasn't in the repo yet.
The build was structurally broken until now.

What moves from mana-auth to apps/api:

  apps/api/src/modules/personas/
    ├── schema.ts          — pgSchema('personas') with personas /
    │                        persona_actions / persona_feedback;
    │                        userId is plain text (a cross-DB FK to
    │                        mana-auth's auth.users doesn't work post-split).
    ├── internal-routes.ts — service-key gated GET /due, POST /:id/actions
    │                        and POST /:id/feedback. Append-only +
    │                        idempotent via deterministic row ids
    │                        (tickId-i-tool / tickId-module).
    └── admin-routes.ts    — admin-JWT gated CRUD; calls mana-auth via
                             /api/v1/admin/users + /api/v1/auth/register
                             + /api/v1/internal/users/:id/persona-stamp
                             for the user lifecycle.
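The append-only idempotency hinges on the deterministic row ids: replaying a tick upserts the same rows instead of duplicating them. A sketch of the id scheme (tickId-i-tool / tickId-module); the helper names are illustrative, not the actual module code:

```typescript
// Deterministic id for the i-th tool action of a tick.
function actionRowId(tickId: string, i: number, tool: string): string {
  return `${tickId}-${i}-${tool}`;
}

// Deterministic id for a module's feedback row in a tick.
function feedbackRowId(tickId: string, module: string): string {
  return `${tickId}-${module}`;
}

// Idempotent append: keying an INSERT ... ON CONFLICT (id) DO NOTHING on
// these ids makes replaying the same tick a no-op. Modeled here with a Map.
function appendOnce(store: Map<string, unknown>, id: string, row: unknown): boolean {
  if (store.has(id)) return false; // already applied on a previous attempt
  store.set(id, row);
  return true;
}
```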

The persona-runner client now points at apps/api:

  - config.ts: new apiUrl field (default http://localhost:3060,
    env MANA_API_URL); authUrl stays for /api/v1/auth/login + spaces.
  - clients/mana-auth-internal.ts: three calls now hit
    /api/v1/personas/internal/* instead of mana-auth's
    /api/v1/internal/personas/* — the file name stays unchanged to
    keep the call-site diff small.
  - index.ts: ManaAuthInternalClient gets config.apiUrl instead of authUrl.

Seed/cleanup scripts:

  - --api= as the preferred flag, --auth= as a legacy alias (cached
    shell history would otherwise break hard).
  - default http://localhost:3060, env MANA_API_URL.
  - endpoint paths rewritten:
      POST   /api/v1/admin/personas        → /api/v1/personas/admin
      DELETE /api/v1/admin/personas/:id    → /api/v1/personas/admin/:id

drizzle.config.ts: schema array + schemaFilter extended with 'personas'.
A DB push is a mandatory step before first boot, otherwise /due fails with 42P01.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:38:29 +02:00
Till JS
795b39e065 feat(forms): M10d headless wave-cron — server-worker + private internal_meta
A real server-side cron for recurring forms — wave-send now runs
independently of owner-tab state. The previous M10c webapp-side
scheduler stays active as belt-and-suspenders (idempotent).

Architecture:
1. **Owner-private internal_meta on unlisted snapshots**
   - Drizzle: new jsonb column `internal_meta` (Drizzle migration
     0001_internal_meta.sql).
   - public-routes.ts strips it structurally — the explicit select()
     projection doesn't include it (recipients + sender would
     otherwise leak via the share link).
   - The publish route accepts it in the body and persists it on
     insert + update.
   - ALLOWED_COLLECTIONS extended with 'lasts' and 'forms' (a latent
     bug — without this addition, formsStore.setVisibility('unlisted')
     would have gotten a 400 back; M4b presumably never ran
     end-to-end).

2. **shared-privacy publishUnlistedSnapshot**
   - PublishUnlistedOptions extended with an optional `internalMeta`,
     forwarded in the /api/v1/unlisted/:collection/:recordId body.

3. **Webapp formsStore**
   - lib/wave-mail.ts: buildFormInternalMeta(form, broadcastSettings)
     builds the owner-private blob: { kind, recurrence: {frequency,
     recipientEmails, lastSentAt}, sender: {fromEmail, fromName,
     replyTo, legalAddress}, formMeta: {title, description} }.
     Returns null when prerequisites are missing (no recurrence, no
     recipients, missing broadcast settings).
   - stores/forms.svelte.ts: setVisibility / regenerateUnlistedToken /
     setUnlistedExpiry load broadcastSettings via Dexie + decrypt,
     build internalMeta, and pass it to publishUnlistedSnapshot. The
     form is decrypted before the buildFormInternalMeta call.

4. **mana-mail internal bulk-send route**
   - createInternalRoutes(accountService, broadcastOrchestrator,
     maxRecipients) — signature extended.
   - New POST /api/v1/internal/mail/bulk-send: same payload shape as
     the user-facing /v1/mail/bulk-send, but userId comes from the
     body instead of the JWT. The X-Service-Key gate sits on the
     /api/v1/internal/* prefix. The audit trail carries the
     principalId from the body. Cap = 5000 (same value as user-facing).

5. **apps/api forms wave-worker**
   - 5-minute setInterval, advisory-lock gated (key 0x464f5257 'FORW').
   - Tick: select snapshots WHERE collection='forms' AND
     internal_meta IS NOT NULL AND revoked_at IS NULL. Filter on
     kind='forms-recurrence' + isWaveDue (lastSentAt + period <= now;
     never-sent fires immediately). For each due snapshot: build the
     HTML/text mail body (mirroring the webapp wave-mail render),
     POST to mana-mail's internal bulk-send with X-Service-Key +
     userId, then jsonb_set on internal_meta.recurrence.lastSentAt.
     Per-snapshot errors are logged as console.warn; the tick keeps
     running.
   - Disable via FORMS_WAVE_WORKER_DISABLED=true (tests / multi-
     replica deployments).
   - Wired in apps/api/src/index.ts next to startArticleImportWorker().
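The due check can be sketched as a pure function; the period lengths for 'weekly'/'monthly' are assumptions (the commit only states lastSentAt + period <= now, with never-sent firing immediately):

```typescript
type Frequency = "weekly" | "monthly";

const PERIOD_MS: Record<Frequency, number> = {
  weekly: 7 * 24 * 60 * 60 * 1000,
  monthly: 30 * 24 * 60 * 60 * 1000, // approximation; calendar months would need date math
};

// Never-sent waves fire immediately; otherwise fire once the period has elapsed.
function isWaveDue(lastSentAt: string | null, frequency: Frequency, now: number): boolean {
  if (lastSentAt === null) return true;
  return Date.parse(lastSentAt) + PERIOD_MS[frequency] <= now;
}
```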

Trade-offs:
- internal_meta is built fresh from broadcast settings on every
  setVisibility/regenerate/setExpiry — if the user later changes
  broadcast settings (e.g. a new fromEmail), they have to re-publish
  the form so the snapshot's internal_meta gets updated. Documented;
  a future patch could surface a "settings drift" warning in the UI.
- The worker's lastSentAt update does NOT flow back into the webapp
  form (settings.recurrence.lastSentAt is encrypted; the server can't
  write it). The owner UI shows the older lastSentAt from manual
  sends; auto-cron sends are visible in the server logs. Future
  patch: GET /api/v1/forms/:id/recurrence-status (auth) returns the
  snapshot.internal_meta and the UI renders the auto-cron state.
- The webapp-side wave scheduler (M10c) keeps running in parallel —
  when the owner tab is open, both can fire. Idempotent through the
  lastSentAt check (weekly/monthly buckets), but in theory a
  double-fire could happen if the calls land within 1 ms of each
  other. Ignorable in the real world; future patch: the scheduler
  reads internal_meta.lastSentAt from the server-side state.

apps/api builds (1776 modules). mana-mail builds (523 modules).
svelte-check 0 errors in forms/. Forms tests 70/70 unchanged.

DB migration 0001_internal_meta.sql must be applied manually (see
feedback memory: hand-authored SQL migrations are not part of
pnpm setup:db).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 17:18:05 +02:00
Till JS
b297f68ee4 fix(articles, mana-ai): rollout-block hardening for sync_changes projections
Four cross-cutting fixes that make the bulk-import worker safe to run
under real production load. All four were called out as live-rollout
risks in the post-ship review of docs/plans/articles-bulk-import.md.

#1 — Same fieldMetaTime bug fixed in mana-ai
   The articles fix in 054b9e5be hoists the helper to its own file
   `apps/api/src/modules/articles/field-meta.ts`. The same naive
   `rowFM[k] >= localTime` LWW comparison existed in three more
   projections under services/mana-ai (missions-projection,
   snapshot-refresh, agents-projection). Once any F3 stamp lands
   beside a legacy-string stamp, the comparison evaluates
   `'[object Object]' >= 'ISO-…'` (false) and the older value wins.
   New `services/mana-ai/src/db/field-meta.ts` — same helper,
   deliberately duplicated (each service treats sync_changes as a
   read-only event log; sharing infra across services is out of
   scope here). All 61 mana-ai bun tests still pass.
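The bug class: once field-meta stamps can be either a legacy ISO string or an object, a naive `>=` coerces the object to '[object Object]' and the lexicographic comparison becomes meaningless. A hedged sketch of the normalize-then-compare fix; the object stamp shape `{ t: iso }` is an assumption, the real shape lives in field-meta.ts:

```typescript
// Legacy stamps are plain ISO strings; newer stamps are objects carrying
// the timestamp in a field (hypothetical shape for illustration).
type FieldStamp = string | { t: string };

// Normalize both shapes to epoch millis before comparing.
function stampTime(stamp: FieldStamp | undefined): number {
  if (stamp === undefined) return -Infinity;
  const iso = typeof stamp === "string" ? stamp : stamp.t;
  const ms = Date.parse(iso);
  return Number.isNaN(ms) ? -Infinity : ms;
}

// Last-write-wins: the row's value is kept when its stamp is newer or
// equal. The naive version compared the raw stamps with >= directly.
function rowWins(rowStamp: FieldStamp | undefined, localStamp: FieldStamp | undefined): boolean {
  return stampTime(rowStamp) >= stampTime(localStamp);
}
```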

#2 — Stale 'extracting' items recycle
   If the worker dies mid-fetch (OOM, pod restart), items stay in
   state='extracting' forever and the job never completes. New sweep
   at the start of `processOneJob`: items whose lastAttemptAt is
   older than 5 minutes get bounced back to 'pending' so the next
   tick re-claims them. STALE_EXTRACTING_MS tuned for the 15s
   shared-rss fetch + JSDOM-parse worst case.

#3 — Pickup-row GC
   Every 30 ticks (~once per minute) the worker hard-deletes
   articleExtractPickup rows older than 24h. Without this a stuck
   pickup-consumer (all tabs closed, Web-Lock mismatch) would let
   sync_changes accumulate without bound. Logs the row count when
   non-zero so we can spot stuck consumers in the wild.

#4 — DRY consent-wall heuristic
   Identical CONSENT_KEYWORDS + threshold lived in routes.ts AND
   import-extractor.ts. Hoisted to
   `apps/api/src/modules/articles/consent-wall.ts`; both call sites
   now share one heuristic.
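For illustration, a keyword-threshold consent-wall heuristic of the kind that was hoisted; the keywords and threshold here are invented, the real list lives in consent-wall.ts:

```typescript
// Illustrative subset only; the real CONSENT_KEYWORDS list is larger.
const CONSENT_KEYWORDS = ["accept all cookies", "consent", "privacy settings", "zustimmen"];

// A page whose extracted text is short and dominated by consent phrasing
// is likely the cookie wall, not the article body.
function looksLikeConsentWall(text: string, threshold = 2): boolean {
  const lower = text.toLowerCase();
  const hits = CONSENT_KEYWORDS.filter((k) => lower.includes(k)).length;
  return text.length < 2000 && hits >= threshold;
}
```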

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:53:39 +02:00
Till JS
8fbdc6db77 feat(notes): isSpaceContext flag replaces kontext module (Option B)
Retire the kontext module entirely; the per-Space standing-context
document is now a regular Note flagged with `isSpaceContext: true`.
Daily use ("URL → Notiz") moves to the notes module as a first-class
action; the same primitive is reused by the (planned) Brand/Firma-Space
onboarding wizard to seed a Space-context Note from a URL.

Why: kontext was inconsistent — its UI was a URL-crawler that wrote
to userContext.freeform (profile module), while its kontextDoc table
+ AI-Mission-Runner auto-injection was a write-only shell with no
real editor. One concept (Notes) now carries both ad-hoc noting and
Space-context, with mutex (max 1 flagged Note per Space).

Notes module:
- types: add `isSpaceContext?: boolean` to LocalNote + Note
- queries: add `useSpaceContextNote()` (the active Space's flagged note)
- store: `markAsSpaceContext(id | null)` with mutex sweep across Space
- ListView: "Aus URL importieren" inline form (URL + crawl-mode +
  KI-Zusammenfassung toggle); "Als Space-Kontext markieren" /
  "Space-Kontext lösen" context-menu item; ★-Badge on flagged notes
- new api.ts: `crawlUrl()` client for POST /api/v1/notes/import-url
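The mutex ("max 1 flagged Note per Space") can be sketched as a sweep that clears the flag everywhere else in the Space before setting it; the shapes below are illustrative, not the actual store code:

```typescript
interface NoteRow {
  id: string;
  spaceId: string;
  isSpaceContext?: boolean;
}

// Flag `id` as the Space-context note and clear the flag on every other
// note in the same Space; passing null just clears the current flag.
function markAsSpaceContext(notes: NoteRow[], spaceId: string, id: string | null): void {
  for (const n of notes) {
    if (n.spaceId !== spaceId) continue; // other Spaces keep their own flag
    n.isSpaceContext = id !== null && n.id === id;
  }
}
```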

Notes API (apps/api):
- new modules/notes/routes.ts with /import-url (ported from kontext;
  same crawler + LLM summary pipeline, NOTES_IMPORT_URL credit op)
- mount at /api/v1/notes; add 'notes' to RESOURCE_MODULES (beta+ tier)
- delete modules/context (UI-less /ai/generate + /ai/estimate had no
  consumers; /import-url moved to notes)
- packages/credits: rename AI_CONTEXT_GENERATION → NOTES_IMPORT_URL

AI Mission Runner:
- default-resolvers: drop kontextResolver + kontextIndexer; the
  notesIndexer flags `isSpaceContext` notes with "★ " prefix and
  bubbles them to the top of the picker
- writing reference-resolver: `kind: 'kontext'` now reads the flagged
  Note via scope-scan instead of the kontextDoc table; tests updated
- writing ReferencePicker: useSpaceContextNote replaces useKontextDoc
- AiDebugBlock + MissionGrantDialog + ai-missions ListView: drop
  'kontextDoc' from ENCRYPTED_SERVER_TABLES set
- ai-agents ListView: drop 'kontext' from POLICY_MODULES

Profile module:
- ContextFreeform.svelte: switch import from kontext/api to notes/api
  (the URL-crawl is the same primitive; it still writes to
  userContext.freeform — only the import path changed)

Dexie:
- v58: notes index gains `isSpaceContext`; kontextDoc table dropped

Kontext module deletion:
- delete apps/mana/apps/web/src/lib/modules/kontext/ entirely
- delete (app)/kontext/ route
- drop registerApp + Scroll icon from app-registry/apps.ts
- drop kontext entry from help-content
- drop kontextModuleConfig from data/module-registry.ts
- drop kontextDoc from crypto registry

mana-auth:
- bootstrap-singletons: drop bootstrapSpaceSingletons function entirely
  (kontextDoc was the only per-Space singleton); userContext bootstrap
  unchanged
- better-auth.config: drop kontextDoc bootstrap call from personal-space
  hook + organizationHooks.afterCreateOrganization
- me-bootstrap: drop per-space bootstrap loop; response shape kept
  (always-empty `spaces: {}`) for backwards-compat with older clients

Note: the still-existing legacy `context` module (CMS-style docs/spaces,
unrelated to kontext) is left in place; its cleanup landed on the
articles-bulk-import branch and is out of scope for this PR.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 00:14:32 +02:00
Till JS
230dfd5dad chore: extract arcade into standalone repo
Arcade lives as its own pnpm workspace at ~/Documents/Code/arcade
now, with no @mana/* coupling. This drops every reference and the
games/ directory from the monorepo.

Removes:
- games/ directory (89 files: web + server + 22 HTML games + screenshots)
- @arcade/web, @arcade/server pnpm workspace entries (games/* globs)
- arcade scripts in root package.json (4 scripts)
- arcade.mana.how from mana-auth trusted origins + CORS_ORIGINS
- arcade entries in mana-apps registry, app-icons, URL overrides
- arcade.mana.how from cloudflared tunnel + prometheus blackbox probes
- arcade-web service block in docker-compose.macmini.yml
- generate-env.mjs entries for arcade server + web
- BRANDING_ONLY 'arcade' entry in registry consistency spec
- dead arcade translation keys in GuestWelcomeModal (DE+EN)
- arcade mention in CLAUDE.md, authentication guideline, MODULE_REGISTRY

Verified:
- services/mana-auth/src/auth/sso-config.spec.ts: 8/8 pass
- pnpm install regenerates lockfile cleanly (-536 lines)
- no remaining 'arcade' refs outside historical snapshot docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:40:01 +02:00
Till JS
8a5fad34df fix(geocoding): bump PROVIDER_TIMEOUT_MS to 20s for cold cross-LAN
Cold-start fetches from the mana-geocoding container to photon-self
on mana-gpu (over WSL2 mirrored networking) consistently take >10s on
the first probe and ~2s once warm. The previous 8s default caused the
chain to false-mark photon-self unhealthy on every cold path, leaking
to public photon for the next 30s health-cache window — and pinning
the public-photon answer in the 7d cache (now shortened to 1h).

Also wires the docker-compose macmini env to honor PROVIDER_TIMEOUT_MS
and CACHE_PUBLIC_TTL_MS overrides so production picks up the new
values without a code rebuild.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:19:21 +02:00
Till JS
2bbcf14aba chore(geocoding): remove Pelias + close 3 bypass paths to public Nominatim
Pelias was retired from the Mac mini on 2026-04-28; photon-self
(self-hosted Photon on mana-gpu) has been the live primary since then.
This removes the now-dead Pelias adapter, config, tests, and the
services/mana-geocoding/pelias/ stack — the entire compose file, the
geojsonify_place_details.js patch, the setup.sh import script.

Provider chain is now `photon-self → photon → nominatim`. The chain
keeps its `privacy: 'local' | 'public'` split, sensitive-query
blocking, coord quantization, and aggressive caching unchanged.

Three direct calls to nominatim.openstreetmap.org that bypassed
mana-geocoding now route through the wrapper:

- citycorners/add-city + citycorners/cities/[slug]/add use the shared
  searchAddress() client (browser → same-origin proxy → mana-geocoding
  → photon-self).
- memoro mobile drops its OSM reverse-geocoding fallback entirely;
  Expo's on-device reverse-geocoding stays as the sole path. Routing
  through the wrapper would require a memoro-server proxy endpoint —
  a follow-up if Expo's quality proves insufficient.

Other behavioral changes:

- CACHE_PUBLIC_TTL_MS dropped from 7d → 1h. The long TTL was a
  privacy-amplification trick from the Pelias era; with photon-self
  serving the bulk of traffic, a transient cross-LAN blip was pinning
  cached fallback answers for days. 1h gives quick recovery.
- /health/pelias renamed to /health/photon-self; prometheus blackbox
  config + status-page generator updated.
- mana-geocoding container no longer needs `extra_hosts:
  host.docker.internal:host-gateway` (was only there for the
  Pelias-on-host-network era).

113 tests passing. CLAUDE.md rewritten to reflect the post-Pelias
architecture.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:12:26 +02:00
Till JS
fc49198992 docs(geocoding): post-migration log + Photon weekly-refresh operator scripts
- Decision report: status flipped to MIGRATED; added migration log with
  five WSL2 gotchas (bzip2 missing, no official Photon image,
  firewall=true blocks cross-LAN, vmIdleTimeout=-1 ineffective,
  PowerShell pre-expansion of bash $(...)) and resource snapshot.
- mana-geocoding CLAUDE.md: PHOTON_SELF_API_URL note now reflects live
  primary status on mana-gpu since 2026-04-28.
- photon-self/: operator scripts for the weekly DB refresh — update.sh
  (atomic-swap with rollback), systemd unit + timer (Sun 03:30 +30min
  jitter, Persistent=true), README with re-installation instructions
  for DR. Currently installed and enabled on mana-gpu.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 21:31:37 +02:00
Till JS
248549b15a fix(feedback): no duplicate display of title + body
For short posts (or when mana-llm failed), the auto-title fallback
`feedbackText.slice(0, 80)` stored the body 1:1 as the title — the
card then showed the same text twice.

Two layers of protection:

1. **Server (mana-analytics)**: the catch branch drops the prefix
   fallback (title stays null). Additionally, a new isRedundantTitle()
   heuristic also discards auto-titles that are merely a truncated
   prefix of the body (whitespace collapse + ellipsis strip).
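A sketch of the redundancy heuristic as described (whitespace collapse + ellipsis strip, then a prefix check); the actual implementation in mana-analytics may differ in detail:

```typescript
// Collapse runs of whitespace and strip a trailing ellipsis so that a
// truncated auto-title compares cleanly against the body.
function normalize(s: string): string {
  return s.replace(/\s+/g, " ").replace(/(\.\.\.|…)\s*$/, "").trim().toLowerCase();
}

// True when the title carries no information beyond a prefix of the body.
function isRedundantTitle(title: string, body: string): boolean {
  const t = normalize(title);
  const b = normalize(body);
  return t.length === 0 || b.startsWith(t);
}
```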

2. **Frontend (ItemCard)**: a defensive showTitle computed — older DB
   items with a redundant title automatically render only the body,
   with no database cleanup needed.

The title slot stays visible for real auto-summaries and manual titles.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:37:51 +02:00
Till JS
153ad8049c feat(geocoding): support dual-Photon (self-hosted + public) for GPU
migration

The chain now distinguishes two Photon instances:
  photon-self  privacy: 'local'   (self-hosted on mana-gpu)
  photon       privacy: 'public'  (komoot.io, last-resort fallback)

Both wrap the same `PhotonProvider` class with different config — only
the URL, name, and privacy stance differ. The new ProviderName variant
'photon-self' lets the chain track per-provider health for them
independently (a single 'photon' slot would collide in the health
Map).

Opt-in registration: `photon-self` is only built when
PHOTON_SELF_API_URL is set in the env. When unset (current state),
the chain has the same shape as before — full backward compat. After
the GPU migration, flipping the env-var on is the only deploy step
needed:
  PHOTON_SELF_API_URL=http://192.168.178.11:2322

Default chain order updated to:
  photon-self,pelias,photon,nominatim
  ^^^^^^^^^^^ silently skipped if not registered (env unset)

The privacy guarantee is structural: photon-self carries privacy:
'local', so the existing sensitive-query block from the previous
hardening commit now has a real local backend post-migration —
medical/crisis-service queries get real results instead of the
"sensitive_local_unavailable" notice.

Tests: 148 (was 141). New coverage:
- src/__tests__/app.test.ts: createChain registration logic — verifies
  photon-self appears iff PHOTON_SELF_API_URL is set, ordering
  honored, GEOCODING_PROVIDERS env-var filter respected
- providers/__tests__/photon-normalizer.test.ts: provider field
  carries 'photon' or 'photon-self' based on the call argument

Recon of mana-gpu (2026-04-28): Windows 11 Pro Build 26200, 64 GB
RAM (56 GB free), 739 GB disk free, no WSL2/Docker yet, no native
GPU services running. Setup plan documented in
docs/runbooks/photon-on-mana-gpu.md (3–4 h, ~1 h of which is
download/unpack waiting).
2026-04-28 17:19:04 +02:00
Till JS
941df57f77 feat(feedback): rename community-identity columns + settings-section
The last "community" remnant in the feedback hub gets cleaned up — DB
columns, settings search index, section name, and i18n keys are now
uniformly "feedback":

- DB: auth.users.community_show_real_name → feedback_show_real_name,
  community_karma → feedback_karma. Migration at
  services/mana-auth/sql/009_rename_community_to_feedback.sql (applied
  manually via psql, mirrored in the Drizzle schema of both services).
- mana-auth/me.ts: PATCH /api/v1/me/profile now accepts
  feedbackShowRealName and returns it in the response.
- mana-analytics: feedback.ts reads authUsers.feedbackShowRealName /
  feedbackKarma; redact() + karma increment + tests adjusted accordingly.
- Frontend: CommunitySection.svelte → FeedbackIdentitySection.svelte
  (file renamed, property names + toast texts updated, HeartHalf icon,
  "Feedback-Identität" as the title).
- searchIndex.ts: CategoryId 'community' → 'feedback', anchor
  'community-identity' → 'feedback-identity'.
- i18n (5 locales): settings.categories.community → .feedback,
  settings.search.community_* → feedback_*. Labels in DE/EN/FR/IT/ES
  adjusted per locale to "Feedback" + "in the feedback feed".

38/38 integration tests green, validate:i18n-parity clean, svelte-check 0.

BREAKING (internal, not live): any frontend reading the old column /
property names from the PATCH response now breaks. No production risk,
since the hub isn't public yet.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:09:58 +02:00
Till JS
112e2cc1b4 feat(feedback): rename community → feedback (module + routes + domain)
The module, routes, and public domain are now uniformly named "feedback":

- App registry: id 'community' → 'feedback', name 'Community' → 'Feedback',
  icon Megaphone → HeartHalf (matching the already-global heart-half icon
  on the module header and in the PillNav user menu)
- Module config: communityModuleConfig → feedbackModuleConfig
- Route refs: all href/goto calls in module views, MyWishesView,
  the onboarding wish, and profile MyWishes switched to /feedback
- /feedback/+layout: brand "Mana Community" → "Mana Feedback", Megaphone
  → HeartHalf, the "In Mana öffnen" CTA now points at /?app=feedback
- Public-mirror domain: community.mana.how → feedback.mana.how
  (cloudflared-config.yml + docker-compose.macmini.yml CORS_ORIGINS +
  PUBLIC_MANA_ANALYTICS_URL_CLIENT). DNS must be created separately.
- Settings section: the help text now mentions feedback.mana.how

Internal: the community_show_real_name + community_karma DB columns stay
(migration out of scope for this rename). The settings search-index
category 'community' also stays — it mirrors the DB schema, not the
user-facing term.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:18:45 +02:00
Till JS
bcc21ca785 feat(geocoding): privacy hardening — sensitive-query block + coord
quantization + extended cache TTL for public answers

Three independent defenses limit what public geocoding APIs (Photon,
Nominatim) can learn from our outbound traffic:

1. **Sensitive-query block** (`lib/sensitive-query.ts`)
   Queries matching the medical/mental-health/crisis-service keyword
   list (Hausarzt, Psychiater, Klinikum, HIV, Frauenhaus, …) are
   never forwarded to public APIs. The chain detects sensitivity at
   the route layer and runs the search in localOnly mode — providers
   with `privacy: 'public'` are filtered out before iteration begins.
   When no local provider is available (Pelias stopped), a sensitive
   query returns ok:true with results:[] and notice:
   'sensitive_local_unavailable' so the UI can show a sensible
   message instead of "no results".

   The keyword list is documented inline. False negatives are the
   risk; false positives just produce a 0-result UX hit (better
   trade-off).
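The route-layer sensitivity check plus localOnly provider filtering can be sketched like this (keyword list abbreviated; the provider shape mirrors the described `privacy` field):

```typescript
// Abbreviated subset; the real list in lib/sensitive-query.ts is documented inline.
const SENSITIVE_KEYWORDS = ["hausarzt", "psychiater", "klinikum", "hiv", "frauenhaus"];

function isSensitiveQuery(q: string): boolean {
  const lower = q.toLowerCase();
  return SENSITIVE_KEYWORDS.some((k) => lower.includes(k));
}

interface Provider {
  name: string;
  privacy: "local" | "public";
}

// Sensitive queries only ever see local providers; when none is available
// the caller answers ok:true, results:[], notice:'sensitive_local_unavailable'.
function providersFor(query: string, chain: Provider[]): Provider[] {
  return isSensitiveQuery(query) ? chain.filter((p) => p.privacy === "local") : chain;
}
```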

2. **Coordinate quantization** (`lib/privacy.ts`)
   Forward-search focus.lat/lon: rounded to 2 decimals (~1.1km).
     Enough for the bias to work, hides exact GPS.
   Reverse-geocoding lat/lon: rounded to 3 decimals (~110m).
     City-block resolution — sufficient for "what's near me?",
     avoids reverse-geocoding the user's exact front door.
   Pelias always gets full precision; quantization only on the way
   out to public APIs. New `privacy: 'local' | 'public'` field on
   the GeocodingProvider interface drives this.
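The quantization rules can be sketched as a rounding helper applied only on the way out to public providers; a direct reading of the decimals stated above:

```typescript
// Round a coordinate to `decimals` places; 2 ≈ 1.1 km, 3 ≈ 110 m at the equator.
function quantize(coord: number, decimals: number): number {
  const f = 10 ** decimals;
  return Math.round(coord * f) / f;
}

// Forward-search bias gets ~1.1 km resolution, reverse lookups ~110 m.
const quantizeFocus = (lat: number, lon: number) => ({ lat: quantize(lat, 2), lon: quantize(lon, 2) });
const quantizeReverse = (lat: number, lon: number) => ({ lat: quantize(lat, 3), lon: quantize(lon, 3) });
```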

3. **Extended cache TTL for public answers**
   New `cache.publicTtlMs` config option, default 7 days (vs. 24h
   for local-provider answers). LRU cache extended with optional
   `ttlOverrideMs` per entry. Same query from N users → 1 outbound
   request to Photon/Nominatim. Strongest privacy lever we have
   over public providers (we can't change their logging, only the
   rate at which we feed them queries).
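The per-entry TTL override can be sketched on top of a plain map cache (the real LRU also evicts by size; omitted here):

```typescript
interface Entry<V> {
  value: V;
  expiresAt: number;
}

class TtlCache<V> {
  private entries = new Map<string, Entry<V>>();
  constructor(private defaultTtlMs: number) {}

  // ttlOverrideMs lets public-provider answers live longer than local ones.
  set(key: string, value: V, now: number, ttlOverrideMs?: number): void {
    this.entries.set(key, { value, expiresAt: now + (ttlOverrideMs ?? this.defaultTtlMs) });
  }

  get(key: string, now: number): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (e.expiresAt <= now) {
      this.entries.delete(key); // lazy expiry on read
      return undefined;
    }
    return e.value;
  }
}
```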

Threat coverage:
   ✓ User IP / identity hidden (already true — wrapper is the proxy)
   ✓ Exact GPS hidden (quantization)
   ✓ Sensitive query content protected (block)
   ~ Non-sensitive query content visible (acceptable trade-off)
   ~ Aggregate profiling reduced ~10–100× (cache)
   ✗ TLS-level traffic analysis, compelled disclosure (out of scope)

Tests: 141 (was 115). New coverage:
- privacy.test.ts: quantization rules (locks the privacy claim)
- sensitive-query.test.ts: positive matches across categories +
  documented false positives we accept
- chain.test.ts: localOnly mode end-to-end including the load-
  bearing assertion that public providers' search() must NEVER be
  called when the chain is in localOnly mode (no race window)
- cache.test.ts: per-entry ttlOverride longer + shorter than default

Live smoke verified end-to-end:
- "Hausarzt Konstanz" with Pelias down → no public API call,
  notice: 'sensitive_local_unavailable'
- "Konstanz" → falls through to Photon, notice: 'fallback_used'
- Reverse with high-precision GPS → Photon receives quantized
  coords, returns city-block-level result
2026-04-28 16:04:56 +02:00
Till JS
164d5dab8b fix(mana-llm): copy aliases.yaml into Docker image
main.py's lifespan handler loads `Path(__file__).parent.parent /
'aliases.yaml'` (= /app/aliases.yaml) on startup. The Dockerfile only
copied `src/`, so prod containers always crashlooped on first start
with `AliasConfigError: alias config not found at /app/aliases.yaml`
— which is why mana-llm has been silently absent from prod. Surfaced
today after a manual `gh workflow run cd-macmini.yml -f service=
mana-llm` actually attempted to start the container instead of
relying on a long-stale image.

Tested locally: container now starts cleanly, /health returns 200,
and `/v1/aliases` lists the configured chains.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:47:48 +02:00
Till JS
34b1690ea4 fix(mana-ai): copy missing workspace deps into Docker installer stage
The Dockerfile copied only @mana/shared-{hono,ai,logger}, but
services/mana-ai/package.json also depends on @mana/shared-research and
@mana/tool-registry (and @mana/tool-registry pulls in
@mana/shared-crypto transitively). Without those, pnpm couldn't
resolve the workspace symlinks and the container crashlooped on every
restart with:

  error: ENOENT reading "/app/services/mana-ai/node_modules/@mana/tool-registry"

Surfaced today after a manual `gh workflow run cd-macmini.yml -f
service=mana-ai` — mana-ai had never been deployed because no commit
since the CD pipeline started had touched its path. The first real
deploy hit the missing-COPY immediately.

Header comment in the Dockerfile now spells out direct + transitive
workspace deps so future additions don't drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:42:20 +02:00
Till JS
9a0cf5b676 fix(geocoding): bump PROVIDER_TIMEOUT_MS default 5s → 8s
First-probe DNS+TLS handshake against Nominatim can take >5s on a
cold start (verified locally: 642ms warm, sometimes 5-8s cold). The
old 5s default false-marked Nominatim unhealthy and the 30s health-
cache then locked us into a fallback-of-fallback gap. 8s gives
enough headroom for cold-start while still cutting off actually-
stuck connections.

Photon and Pelias don't hit this — Photon's CDN is consistently
sub-second and Pelias is on localhost / LAN. Only the public
Nominatim path warranted the bump, but the timeout is shared across
all providers, so we adjust it globally.

Existing PROVIDER_TIMEOUT_MS env override still wins.
2026-04-28 15:39:09 +02:00
Till JS
f1e4a39644 feat(geocoding): provider chain with Photon + Nominatim fallbacks
mana-geocoding now tries Pelias first, falls back to public Photon
(komoot.io) and finally to public Nominatim (OSM) when Pelias is
unhealthy or unreachable. The Places module's address lookup keeps
working even when the Pelias container is stopped — which it currently
is on the Mac mini, freeing 3 GB of RAM until Pelias gets migrated to
the GPU server.

Architecture:

  ProviderChain ─ tries providers in priority order, stops on first
                  success. A clean empty-results answer is definitive
                  (don't burn through public-API budget on a query that
                  legitimately has no match). Only network errors / 5xx
                  / 429 trigger fallthrough.

  HealthCache  ─ per-provider, 30s TTL. A failed health probe or a
                  failed search marks the provider unhealthy and skips
                  it for the rest of the cache window. Lazy refresh —
                  no background pinger.

  RateLimiter  ─ single-token FIFO queue, 1100ms gap by default.
                  Used to enforce Nominatim's 1 req/sec policy. Handles
                  abort during inter-task wait by releasing the busy
                  flag so later tasks aren't blocked.
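The first two components above can be sketched compactly: which failures fall through, and how the lazy health window works. Names and shapes here are assumptions; only the behavior (empty results are definitive, 30s TTL, no background pinger) is from this commit.

```typescript
// Fallthrough rule: only network errors (no HTTP status at all), 5xx, and
// 429 move the chain to the next provider. A 2xx with zero results is a
// definitive answer and stops the chain.
function shouldFallThrough(status: number | null): boolean {
  if (status === null) return true; // network error — never reached the API
  return status === 429 || status >= 500;
}

// Lazy per-provider health cache, 30s TTL, no background pinger.
// The clock is injectable so the cache window is testable.
class HealthCache {
  private state = new Map<string, { healthy: boolean; at: number }>();
  constructor(
    private readonly ttlMs = 30_000,
    private readonly now: () => number = Date.now,
  ) {}

  // Record the outcome of a health probe or a real search.
  mark(provider: string, healthy: boolean): void {
    this.state.set(provider, { healthy, at: this.now() });
  }

  // true = skip this provider for the rest of the cache window.
  isKnownUnhealthy(provider: string): boolean {
    const entry = this.state.get(provider);
    if (!entry) return false; // never probed: try it
    if (this.now() - entry.at > this.ttlMs) return false; // stale: retry lazily
    return !entry.healthy;
  }
}
```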

Provider details:

  pelias    — primary, self-hosted DACH index, full OSM taxonomy in
              `peliasCategories`, no rate limit
  photon    — public komoot endpoint, GeoJSON shape, raw `osm_key:
              osm_value` mapped via lib/osm-category-map.ts. Faster
              than Nominatim, no advertised rate limit but be polite.
  nominatim — public OSM endpoint, strict 1 req/sec via the limiter,
              custom User-Agent required (otherwise 403). Last
              resort — fallback for when Photon is also down.
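The 1 req/sec discipline for Nominatim can be sketched as a promise-chained minimum-gap queue. The class name and shape are assumptions (the abort handling of the real RateLimiter is omitted here); only the FIFO-with-gap behavior is from the commit.

```typescript
// Single-lane FIFO: each task waits for the previous one, then sleeps until
// at least gapMs has passed since the previous task *started*.
class MinGapLimiter {
  private last = 0;
  private tail: Promise<unknown> = Promise.resolve();
  constructor(private readonly gapMs: number) {}

  schedule<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(async () => {
      const wait = this.last + this.gapMs - Date.now();
      if (wait > 0) await new Promise((r) => setTimeout(r, wait));
      this.last = Date.now();
      return task();
    });
    // Swallow rejections on the chain so a failed task never blocks later ones.
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```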

Response shape changes (additive only — existing callers keep
working):

  - results[].provider: 'pelias' | 'photon' | 'nominatim'
  - results[].peliasCategories: only present when Pelias served the
    request (was already absent on Pelias-API patch failures)
  - top-level provider: <name> + tried: <name[]> on success/error
  - new endpoint: GET /health/providers — per-provider snapshot
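Written out as TypeScript types, the additive shape above might look like this. Field names come from the list; the type names, the optionality details, and the example object are sketches, not the service's actual declarations.

```typescript
type ProviderName = 'pelias' | 'photon' | 'nominatim';

type GeocodeResult = {
  provider: ProviderName;
  peliasCategories?: string[]; // only present when Pelias served the request
  // ...pre-existing result fields unchanged
};

type GeocodeResponse = {
  provider: ProviderName; // who actually answered
  tried: ProviderName[];  // fallthrough trail, also reported on errors
  results: GeocodeResult[];
};

// Example of a Photon-served answer after Pelias fell through:
const example: GeocodeResponse = {
  provider: 'photon',
  tried: ['pelias', 'photon'],
  results: [{ provider: 'photon' }],
};
```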

Configuration via env (defaults shipped):

  GEOCODING_PROVIDERS=pelias,photon,nominatim   # order matters
  PROVIDER_TIMEOUT_MS=5000
  PROVIDER_HEALTH_CACHE_MS=30000
  PHOTON_API_URL=https://photon.komoot.io
  NOMINATIM_API_URL=https://nominatim.openstreetmap.org
  NOMINATIM_USER_AGENT=mana-geocoding/1.0 (+https://mana.how; ...)
  NOMINATIM_INTERVAL_MS=1100

Testing: 115 tests green (was 42). New coverage:
  - osm-category-map.test.ts (47 cases over food/transit/shopping/
    leisure/work/other priority resolution)
  - rate-limiter.test.ts (FIFO, abort-during-wait, abort-during-sleep)
  - chain.test.ts (failover, empty-results-stops, health-cache,
    snapshot)
  - photon-normalizer.test.ts and nominatim-normalizer.test.ts (lock
    the wire-format mapping for both fallback providers)

Live smoke against public Photon verified — both /search and /reverse
return correctly normalized results with provider="photon" when Pelias
is unreachable.
2026-04-28 15:21:11 +02:00
Till JS
ff823bff60 fix(feedback): POST /api/v1/feedback reads appId from the X-App-Id header
The submit handler passed the body 1:1 to feedbackService.createFeedback.
Since CreateFeedbackInput does not include appId (the client sends it as
the X-App-Id header), every INSERT failed with "null value in column
app_id violates not-null constraint".

Also: added the lightbulb icon to the phosphor-icon-map; otherwise the
"Share an idea" entry in the barMode variant of the user menu shows no
icon (only the label).
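The fix boils down to merging the header into the service input before the insert. A stdlib-only sketch (the helper name is an assumption; the real code lives in the feedback route handler):

```typescript
// Merge the X-App-Id header into the body before calling
// feedbackService.createFeedback — without it, app_id arrives as NULL
// and the INSERT violates the not-null constraint.
function withAppId<T extends object>(
  body: T,
  headers: Headers,
): T & { appId: string } {
  const appId = headers.get('X-App-Id');
  if (!appId) throw new Error('missing X-App-Id header'); // reject early, not as a DB error
  return { ...body, appId };
}
```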

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:16:11 +02:00
Till JS
0c30a16eb5 fix: 4 boot-time noise + correctness bugs surfaced by post-deploy smoke
All four were pre-existing; the audit smoke-test made them visible. Fixed
together because they share a "boot console-warn cleanup" theme.

1. streaks ensureSeeded race (DexieError2 ×2)
   - Two boot-time liveQuery callers passed the `count > 0` check before
     either had written, then the second's `.add()` hit a ConstraintError.
   - Fix: cache the seed promise per module, run the existence check +
     bulkAdd inside one Dexie RW transaction, and only insert MISSING
     defs (preserves existing currentStreak/longestStreak counts).
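The pattern in that fix — one shared seed promise, check + insert inside a single transaction, insert only what's missing — might look like this. The store shape, def ids, and names are all assumptions standing in for the Dexie specifics.

```typescript
type StreakDef = { id: string; currentStreak: number; longestStreak: number };

// Minimal stand-in for the Dexie table; the real code runs the callback
// inside db.transaction('rw', ...) so the check and bulkAdd are atomic.
interface StreakStore {
  transaction(fn: () => Promise<void>): Promise<void>;
  listIds(): Promise<string[]>;
  bulkAdd(defs: StreakDef[]): Promise<void>;
}

const DEFAULT_DEFS: StreakDef[] = [
  { id: 'daily-checkin', currentStreak: 0, longestStreak: 0 }, // hypothetical ids
  { id: 'weekly-review', currentStreak: 0, longestStreak: 0 },
];

let seedPromise: Promise<void> | null = null;

function ensureSeeded(store: StreakStore): Promise<void> {
  // Every boot-time caller shares one promise, so the race disappears.
  seedPromise ??= store.transaction(async () => {
    const existing = new Set(await store.listIds());
    const missing = DEFAULT_DEFS.filter((d) => !existing.has(d.id));
    // Insert only missing defs — existing streak counts stay untouched.
    if (missing.length > 0) await store.bulkAdd(missing);
  });
  return seedPromise;
}
```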

2. encryptRecord('agents', …) "wrong table name?" warning
   - The DEV-only check fired whenever a record carried none of the
     registered encrypted fields, regardless of whether anything could
     actually leak. `ensureDefaultAgent` writes a fresh agent row before
     `systemPrompt` / `memory` exist — pure noise.
   - Fix: drop the "no fields at all" branch. Keep the case-mismatch
     branch (the branch that actually catches silent plaintext leaks).

3. Passkey signInWithPasskey "Cannot read properties of undefined
   (reading 'allowCredentials')"
   - Client destructured `{ options, challengeId }` from the server's
     options response, but Better-Auth's `@better-auth/passkey` plugin
     returns the raw PublicKeyCredentialRequestOptionsJSON (no
     envelope) and tracks the challenge in a signed cookie. Both
     `options` and `challengeId` came back undefined; SimpleWebAuthn
     blew up the moment it tried to read the request shape. Verify body
     `{ challengeId, credential }` was likewise wrong — Better-Auth
     wants `{ response }`.
   - Fix: align both register and authenticate flows with Better-Auth's
     native shape on options + verify, and add `credentials: 'include'`
     on every fetch so the challenge cookie actually round-trips.
     Server's verify proxy now reads `parsed?.response?.id` for
     credentialID rate-limiting.

4. /api/v1/me/onboarding/ → 404
   - Hono's nested router (`app.route(prefix, sub)` + inner
     `app.get('/')`) matches the prefix-without-slash form only. The
     onboarding-status store sent the request with a trailing slash, so
     every login produced a 404 + a console warn.
   - Fix: client sends the path without trailing slash; mana-auth picks
     up `hono/trailing-slash` middleware as defense-in-depth so a future
     accidental trailing slash on any /me/* route 301-redirects instead
     of 404-ing.
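The client half of that fix is a one-liner; a stdlib-only sketch (function name assumed — the server half, per the commit, mounts Hono's trailing-slash middleware as defense-in-depth):

```typescript
// Strip a single trailing slash before fetch()ing, so
// GET /api/v1/me/onboarding matches the nested router instead of 404-ing.
function normalizeApiPath(path: string): string {
  return path.length > 1 && path.endsWith('/') ? path.slice(0, -1) : path;
}
```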

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:56:24 +02:00