Commit graph

362 commits

Author SHA1 Message Date
Till JS
b1b9bbc269 chore: rename repo mana-monorepo → managarten
Phase-3 rename of the former multi-app monorepo into the standalone
product repo. The association is called mana e.V., the platform domain
stays mana.how, and apps/mana/ is unchanged; only the repo container
gets the new name "managarten" (garden of the mana apps).

Changed:
- package.json#name + #description
- README.md (title + first paragraph)
- TROUBLESHOOTING.md
- all Mac Mini scripts (paths ~/projects/mana-monorepo → ~/projects/managarten)
- COMPOSE_PROJECT_NAME default in scripts/mac-mini/status.sh
- .github/workflows/cd-macmini.yml + mirror-to-forgejo.yml
- apps/docs (astro.config.mjs + content)
- .claude/settings.local.json (Bash permission paths)
- all docs/*.md path references
- launchd plists, .env.macmini.example, infrastructure/

Forgejo repo + GitHub repo already renamed via the API. Local directory
rename + Mac Mini cutover follow separately.
2026-05-09 01:16:02 +02:00
Till JS
61f2772789 chore(brand): rename Cards → Cardecky (display, infra, license-IDs)
- App display name → Cardecky in mana-apps.ts, MODULE_REGISTRY, all docs
- Domains: cardecky.mana.how (app), cardecky-api.mana.how (marketplace
  API), cardecky.com (marketing landing; cloudflared route + nginx block
  prepared, DNS still needs to be set)
- 301 redirect cards.mana.how → cardecky.mana.how (nginx + cloudflared)
  for old bookmarks; can be removed after 6–12 months
- SPDX license IDs Cards-Personal-Use/Pro-Only-1.0 → Cardecky-* via a
  Drizzle 0001 migration (DROP CHECK → UPDATE rows → SET DEFAULT → ADD
  CHECK), incl. the _journal and 0001_snapshot update
- In-mana cards module: subtle banner pointing at the standalone app
  (GUIDELINES §12), dismissible once via localStorage
- Docker CORS lists, sso-origins.ts, Prometheus target updated

Technical IDs stay on purpose: appId 'cards', schema
mana_platform.cards.*, directory apps/cards/, package @cards/web,
services/cards-server, env vars CARDS_*, UMAMI_WEBSITE_ID_CARDS*, class
CardsEvents. Mana convention: brand ≠ technical identifier.
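The four-step constraint swap (DROP CHECK → UPDATE rows → SET DEFAULT → ADD CHECK) could be sketched as below; table, column, and constraint names are hypothetical, not the repo's actual migration:

```typescript
// Hypothetical sketch of the four-step SPDX-ID migration described above.
// "licenses" / "spdx_id" / the constraint name are assumptions.
const OLD_TO_NEW: Record<string, string> = {
  "Cards-Personal-Use-1.0": "Cardecky-Personal-Use-1.0",
  "Cards-Pro-Only-1.0": "Cardecky-Pro-Only-1.0",
};

function buildMigrationSql(table: string, column: string): string[] {
  const stmts: string[] = [];
  // 1. Drop the CHECK so the UPDATEs below are not rejected mid-flight.
  stmts.push(`ALTER TABLE ${table} DROP CONSTRAINT ${column}_check;`);
  // 2. Rewrite existing rows old → new.
  for (const [oldId, newId] of Object.entries(OLD_TO_NEW)) {
    stmts.push(
      `UPDATE ${table} SET ${column} = '${newId}' WHERE ${column} = '${oldId}';`
    );
  }
  // 3. Point the column default at the new ID.
  stmts.push(
    `ALTER TABLE ${table} ALTER COLUMN ${column} SET DEFAULT 'Cardecky-Personal-Use-1.0';`
  );
  // 4. Re-add the CHECK against the new ID set only.
  const allowed = Object.values(OLD_TO_NEW).map((id) => `'${id}'`).join(", ");
  stmts.push(
    `ALTER TABLE ${table} ADD CONSTRAINT ${column}_check CHECK (${column} IN (${allowed}));`
  );
  return stmts;
}
```

The ordering matters: the UPDATE must run while no CHECK is active, since neither the old nor the new constraint accepts both ID sets at once.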

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 13:49:47 +02:00
Till JS
b185ee2473 doc(mac-mini): update the container balance after Phase 2c-2g
- Status line set to 2026-05-07, covering all migration phases
  2c+2d+2e+2f+2g plus an explanation of the Verdaccio rollback
- Memory-budget table: news-ingester moved out of "Other" (on the GPU
  box since Phase 2f-2), standalone-compose row for verdaccio added,
  total ~45 → ~42 containers
- GPU hostname list completed: status, mana-ai, research, photon

Pre-existing drift; not caused by Phase 2g or the rollback, but noticed
during the audit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 23:30:06 +02:00
Till JS
aeaefaf675 infra(phase 2f-1 rollback): verdaccio stays on the Mac Mini
Phase 2f-1 had moved verdaccio from the Mini to the GPU box, but the
storage volume never arrived there. The GPU container was empty (no
htpasswd, no @mana/* packages), and an external `npm install @mana/foo`
hit a 404. Rolling back instead of finishing the storage migration,
because:

- The Mini's standalone verdaccio (~/projects/verdaccio/) has all the
  data, including the claudebot service account and 9 published packages
- npm reads are low anyway (CI builds), and the Mini's disk has room
- It simplifies the user/token lifecycle (one source, no sync
  choreography)

Cleanup:
- DNS npm.mana.how back to the Mini tunnel via the Cloudflare API
- Mini cloudflared-config.yml: the npm.mana.how ingress re-added
- GPU box: verdaccio container + 3 volumes removed (mana_verdaccio-storage,
  mana_verdaccio-plugins, verdaccio-storage)
- infrastructure/docker-compose.gpu-box.yml: verdaccio service block removed
- infrastructure/verdaccio/config.yaml: deleted (it was a GPU-specific
  bundle; Code/mana holds the canonical copy for the Mini)
- docs/PLAN_OPTION_C.md: Phase 2f marked as ⚠️ partially rolled back

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 23:12:11 +02:00
Till JS
c84742005b infra(phase 2g): mana-research → GPU-Box
Moved the web-research orchestrator (16+ search/LLM providers) to the
GPU box. Cross-LAN for mana-auth/mana-credits/mana-llm/mana-search/
postgres/redis (192.168.178.131). research.mana.how now routes to the
mana-gpu-server tunnel (CF config v29). Mini container count 42 → 41.

PUBLIC_MANA_RESEARCH_URL in mana-app-web switched to the https URL;
Mini containers cannot reach 192.168.178.11 directly (Colima NAT), so
the cross-LAN bridge goes via the Cloudflare tunnel, as with mana-ai.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 20:26:10 +02:00
Till JS
e77134bd8b docs(infra): Phase 2f added to PLAN_OPTION_C + hostname table updated to v28
- PLAN_OPTION_C.md: new row covers verdaccio + news-ingester + mana-ai
  with the cross-arch + workspace-deps gotchas
- infrastructure/README.md: hostname table catches up to npm.mana.how
  (Phase 2f-1) and mana-ai.mana.how (Phase 2f-3); config v26 → v28
- infrastructure/.env.gpu-box.example: MANA_SERVICE_KEY +
  MANA_AI_PRIVATE_KEY_PEM block added with note that the values mirror
  Mini's .env.macmini (the latter's matching public-half stays on
  mana-auth, that's what makes Mission-Grant decryption work)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 17:09:28 +02:00
Till JS
6f1b0329f0 docs(infra): document photon.mana.how + cross-LAN workaround pattern
Phase 2c had 3 cross-LAN-routing pain points; Phase 2e + the photon
fix solved 2 of them, so the doc was misleading. Refactored the
"Bekannte Limits" block in PLAN_OPTION_C.md into a proper
cross-LAN-pattern table that lists each known case + its current
status. Phase-2c-original gpu-* and Mini-Promtail entries kept as
the remaining open items, with the same Cloudflare-Tunnel-as-LAN-bridge
workaround spelled out (Loki-HTTP-Push via loki.mana.how would be the
next obvious move).

Plus infrastructure/README.md now lists every active public-hostname
the mana-gpu-server tunnel exposes (v26).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 14:45:34 +02:00
Till JS
d8a35afd99 infra(gpu-box): commit GPU-Box compose to repo + Phase 2e docs
The GPU-Box stack has been carrying real production workload since
Phase 2c (monitoring) but only existed as a /srv/mana/docker-compose.gpu-box.yml
on the box itself. If the WSL filesystem dies, none of it is
reproducible. Bring the file into infrastructure/ as the source of
truth (the live file on the box must be kept in sync; manual rsync
for now since there's no CD into the GPU box).

Plus:
- infrastructure/.env.gpu-box.example as the secrets template
- infrastructure/README.md describing what runs there + how the
  Cloudflare-tunnel ingress is API-managed (not config.yml)
- .gitignore for the live infrastructure/.env.gpu-box copy
- MAC_MINI_SERVER.md status-page section now points at the GPU-Box
  setup instead of the long-stopped Mini container
- PLAN_OPTION_C.md: Phase 2e row + GPU-Box service tree update

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 13:28:49 +02:00
Till JS
585bee42be docs(mac-mini): refresh container counts + memory budget after Phase 2c+2d
53→45 running, 9.6 GiB nominal → ~6 GiB nominal, build-app.sh's
monitoring-stop trigger no longer fires. Cross-link to PLAN_OPTION_C.md
for the GPU-box-side picture (grafana/git/stats/glitchtip live there
via the mana-gpu-server tunnel).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:44:22 +02:00
Till JS
dd2e609545 fix(docker): COPY packages/cards-core in SvelteKit Dockerfiles
The cards-spinoff commit (0a544ac41) added @mana/cards-core as a
workspace dependency for apps/mana/apps/web but didn't update the
two Dockerfiles that COPY-and-pnpm-install the workspace into the
image. CD's --no-cache build for mana-web therefore failed at
`pnpm install` with ERR_PNPM_WORKSPACE_PKG_NOT_FOUND, leaving the
container on a stale pre-cleanup image whose ListView28 chunk still
referenced the dropped contextSpaces Dexie table — every mana.how
route 500'd.

Adding the COPY line to both files (the shared sveltekit-base layer
and the per-app layer that does a second pnpm install) makes the
package available to the workspace resolver and lets the build go
through.

Plus the Phase 2c-d doc updates that piled up today (Glitchtip
on dedicated GPU-box stack, gitignore for *_CREDENTIALS.md files).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 01:47:07 +02:00
Till JS
c14aef9f85 docs(infra): Mac-Mini ↔ Windows-GPU-Box workload-split — Plan Option C
Auxiliary services (monitoring, Forgejo, Glitchtip, Umami) move from
the load-critical Mac Mini box to the Windows GPU box, which has 95%
of its system RAM sitting idle anyway. The production hot path stays
on the Mini: no money spent, and the single point of failure at the
site is reduced.

As of 2026-05-06: Phases 0–2b shipped (WSL2 Docker, Grafana cross-box,
Forgejo, Umami healthy). Phase 2c (Loki+VM+alerts) and Phase 4
(Cloudflare cutover for grafana.mana.how) need their own sessions;
both are cleanup of pre-existing misconfiguration, not an architecture
risk.

Hardware inventory added to WINDOWS_GPU_SERVER_SETUP.md: Ryzen 9 5950X,
64 GB DDR4, RTX 3090, 660 GB free on C:. WSL2 capped at 24 GB / 12
vCPUs so that AI scheduled tasks keep > 30 GB in reserve.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:39:01 +02:00
Till JS
1b579ab0b0 chore(mana-events): move from port 3065 to 3115 — collision with platform mana-media
The platform repo (Code/mana/) reserves 3065 for mana-media; to avoid
the double allocation, mana-events (public RSVP / event sharing) moves
to 3115. The new 311x port block is unused and structurally belongs
next to mana-mail (3042) and the other 30xx service ports.

Touches every hard-coded 3065 default: server config, webapp config,
SSR routes (rsvp/[token], status), Playwright webserver setup, e2e spec.
PUBLIC_MANA_EVENTS_URL in .env.development carries both variables along.

PORT_SCHEMA.md now records the switch with date + rationale, so that
future me doesn't have to guess why this port breaks out of the 30xx
series.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 20:38:46 +02:00
Till JS
e37c008a7a chore(articles): polish pass — schema cleanup, MAX cap, filters, docs (#8,#9,#13,#15,#18,#20)
Polish-pass on top of the bulk-import rollout. Five contained items.

#8 + #9 — Dexie v60 schema cleanup
   - Drop articleImportJobs.leasedBy + .leasedUntil. They were defined
     on the original v57 schema as a soft-lease handshake, but the
     worker uses pg_try_advisory_xact_lock and never wrote them.
     Local-* type + projection row stripped.
   - Drop the standalone `state` index on articleImportItems.
     [jobId+state] covers the worker's hot query; the state-solo
     index had no call site.
   Both changes lossless — Dexie just removes the column declarations
   from new rows; existing rows still carry the dead nulls (zombies)
   until the next full row-rewrite. Not worth a hard migration for
   two never-written columns.

#15 — MAX_URLS_PER_JOB hard cap (200)
   articleImportsStore.createJob() throws if the URL list exceeds the
   cap. BulkImportForm surfaces the limit in the live counter chip
   and disables the submit when over. The worker can chew through any
   N, but at high counts the UI gets unwieldy (no virtualisation) and
   wall-clock duration climbs into multi-hour. 200 is a pragmatic
   ceiling — Pocket-export dumps average 50–150.

#13 — Filter-Tabs in JobsList
   Pill-style tabs above the list: Alle / Aktiv / Fertig / Mit Fehlern,
   each with the row count. Disabled when the bucket is empty so the
   user only sees actionable filters. The "Mit Fehlern" filter
   (errorCount > 0) is the most valuable for triage.
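The four buckets could be computed roughly like this; the `status` and `errorCount` field names are assumptions based on the description above:

```typescript
// Sketch of the filter-tab bucket counts (Alle / Aktiv / Fertig /
// Mit Fehlern); job-row field names are assumed.
type JobStatus = "active" | "completed";
interface JobRow {
  status: JobStatus;
  errorCount: number;
}

function bucketCounts(jobs: JobRow[]) {
  return {
    alle: jobs.length,
    aktiv: jobs.filter((j) => j.status === "active").length,
    fertig: jobs.filter((j) => j.status === "completed").length,
    // The triage filter: any job with at least one failed item.
    mitFehlern: jobs.filter((j) => j.errorCount > 0).length,
  };
}
```

A tab is rendered disabled whenever its count is 0, so the user only sees actionable filters.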

#18 — apps/mana/CLAUDE.md
   - Articles row added to the Tool Coverage table (5 propose +
     1 auto, including the new auto-policy import_articles_from_urls).
   - New "Articles bulk-import" section after the AI Workbench part:
     pipeline diagram, table list, actor + metrics + cap pointers.

#20 — ARTICLES_IMPORT_WORKER_DISABLED env var documented
   New row under "Mana API — Articles Bulk-Import Worker" in
   docs/ENVIRONMENT_VARIABLES.md.

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 02:42:46 +02:00
Till JS
907a3add49 Create forms-module.md 2026-04-28 22:41:45 +02:00
Till JS
230dfd5dad chore: extract arcade into standalone repo
Arcade lives as its own pnpm workspace at ~/Documents/Code/arcade
now, with no @mana/* coupling. This drops every reference and the
games/ directory from the monorepo.

Removes:
- games/ directory (89 files: web + server + 22 HTML games + screenshots)
- @arcade/web, @arcade/server pnpm workspace entries (games/* globs)
- arcade scripts in root package.json (4 scripts)
- arcade.mana.how from mana-auth trusted origins + CORS_ORIGINS
- arcade entries in mana-apps registry, app-icons, URL overrides
- arcade.mana.how from cloudflared tunnel + prometheus blackbox probes
- arcade-web service block in docker-compose.macmini.yml
- generate-env.mjs entries for arcade server + web
- BRANDING_ONLY 'arcade' entry in registry consistency spec
- dead arcade translation keys in GuestWelcomeModal (DE+EN)
- arcade mention in CLAUDE.md, authentication guideline, MODULE_REGISTRY

Verified:
- services/mana-auth/src/auth/sso-config.spec.ts: 8/8 pass
- pnpm install regenerates lockfile cleanly (-536 lines)
- no remaining 'arcade' refs outside historical snapshot docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:40:01 +02:00
Till JS
962606b961 feat(demo-personas): Chor Tägerwilen — research + seed (118 records)
First demo persona live on prod: chor-taegerwilen@mana.how.

Contents:
- Research brief with sources, IDs, module mapping, pitch hooks
- data.ts: 54 members (S/A/T/B complete), board, choir director,
  dates April–June 2026, 5 concerts in 2026, concert archive 2015–2025,
  kontextDoc Markdown
- seed.ts: idempotent Bun script that writes directly into
  mana_sync.sync_changes via an SSH tunnel (5433). Sets the RLS context,
  cleans up prior demo-seed rows, writes 118 records across
  kontext / contacts / calendar+timeblocks / events / library /
  notes / website / ai-missions.

Pitch hook: the club was already a ClubDesk customer, which makes the
Mana replacement a direct migration story.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:17:32 +02:00
Till JS
19627f18b8 docs(demo-personas): runbook for the real-account demo workflow
Documented the per-demo-persona procedure: research → live account on
Mac Mini prod → club Space → idempotent seed script → smoke test.
Includes the module mapping (appId/tableName), common pitfalls (prod
schema drift field_timestamps vs field_meta, forced RLS on
sync_changes), and lessons from persona 1 (Chor Tägerwilen).

The discarded fork-system plan is not kept in the repo; see the memory
pointer project_demo_personas_workflow.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:17:18 +02:00
Till JS
7bca16dfa7 feat(articles): bulk-import schema + plan (Phase 1)
Three new sync-tracked Dexie tables under the articles appId:

  articleImportJobs     — job header (counters, status, lease metadata).
  articleImportItems    — one row per URL in a job, state-machine driven.
  articleExtractPickup  — short-lived server→client handoff inbox.

URL stays plaintext on items by necessity — the server-worker reads it
without master-key access, same rationale as articles.originalUrl. The
extracted article eventually lands encrypted in the existing `articles`
table; bulk-import rows hold only pointers.

Plan: docs/plans/articles-bulk-import.md (full architecture, 7 phases,
test matrix, edge-cases). Phase 2 already shipped in 5535f2da4 (worker);
this commit lays the schema underneath it.

Originally committed as b2f4e8314, lost during a parallel reset, here
restored via cherry-pick.
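The commit only names the per-item state machine; the states and transitions below are hypothetical, purely to illustrate the "one row per URL, state-machine driven" idea:

```typescript
// Hypothetical item lifecycle for articleImportItems; the actual state
// names live in the plan doc, not in this sketch.
type ItemState = "queued" | "processing" | "done" | "failed";

const TRANSITIONS: Record<ItemState, ItemState[]> = {
  queued: ["processing"],
  processing: ["done", "failed"],
  done: [],            // terminal
  failed: ["queued"],  // retry path
};

function advance(current: ItemState, next: ItemState): ItemState {
  if (!TRANSITIONS[current].includes(next)) {
    throw new Error(`illegal transition ${current} → ${next}`);
  }
  return next;
}
```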

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:11:51 +02:00
Till JS
fc49198992 docs(geocoding): post-migration log + Photon weekly-refresh operator scripts
- Decision report: status flipped to MIGRATED; added migration log with
  five WSL2 gotchas (bzip2 missing, no official Photon image,
  firewall=true blocks cross-LAN, vmIdleTimeout=-1 ineffective,
  PowerShell pre-expansion of bash $(...)) and resource snapshot.
- mana-geocoding CLAUDE.md: PHOTON_SELF_API_URL note now reflects live
  primary status on mana-gpu since 2026-04-28.
- photon-self/: operator scripts for the weekly DB refresh — update.sh
  (atomic-swap with rollback), systemd unit + timer (Sun 03:30 +30min
  jitter, Persistent=true), README with re-installation instructions
  for DR. Currently installed and enabled on mana-gpu.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 21:31:37 +02:00
Till JS
153ad8049c feat(geocoding): support dual-Photon (self-hosted + public) for GPU migration

The chain now distinguishes two Photon instances:
  photon-self  privacy: 'local'   (self-hosted on mana-gpu)
  photon       privacy: 'public'  (komoot.io, last-resort fallback)

Both wrap the same `PhotonProvider` class with different config — only
the URL, name, and privacy stance differ. The new ProviderName variant
'photon-self' lets the chain track per-provider health for them
independently (a single 'photon' slot would collide in the health
Map).

Opt-in registration: `photon-self` is only built when
PHOTON_SELF_API_URL is set in the env. When unset (current state),
the chain has the same shape as before — full backward compat. After
the GPU migration, flipping the env-var on is the only deploy step
needed:
  PHOTON_SELF_API_URL=http://192.168.178.11:2322

Default chain order updated to:
  photon-self,pelias,photon,nominatim
  ^^^^^^^^^^^ silently skipped if not registered (env unset)
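The env-gated registration can be sketched as follows; provider objects are reduced to the fields that matter here, and the public URLs are illustrative:

```typescript
// Sketch of the opt-in chain construction: photon-self is only
// registered when PHOTON_SELF_API_URL is set, and it leads the chain.
interface Provider {
  name: string;
  privacy: "local" | "public";
  url: string;
}

function buildChain(env: Record<string, string | undefined>): Provider[] {
  const chain: Provider[] = [];
  if (env.PHOTON_SELF_API_URL) {
    chain.push({
      name: "photon-self",
      privacy: "local",
      url: env.PHOTON_SELF_API_URL,
    });
  }
  // Illustrative URLs for the rest of the default order.
  chain.push({ name: "pelias", privacy: "public", url: "https://pelias.example" });
  chain.push({ name: "photon", privacy: "public", url: "https://photon.komoot.io" });
  chain.push({ name: "nominatim", privacy: "public", url: "https://nominatim.openstreetmap.org" });
  return chain;
}
```

With the env unset the chain has exactly the pre-migration shape, which is what makes flipping the env var the only deploy step.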

The privacy guarantee is structural: photon-self carries privacy:
'local', so the existing sensitive-query block from the previous
hardening commit now has a real local backend post-migration —
medical/crisis-service queries get real results instead of the
"sensitive_local_unavailable" notice.

Tests: 148 (was 141). New coverage:
- src/__tests__/app.test.ts: createChain registration logic — verifies
  photon-self appears iff PHOTON_SELF_API_URL is set, ordering
  honored, GEOCODING_PROVIDERS env-var filter respected
- providers/__tests__/photon-normalizer.test.ts: provider field
  carries 'photon' or 'photon-self' based on the call argument

Recon of mana-gpu (2026-04-28): Windows 11 Pro Build 26200, 64 GB
RAM (56 GB free), 739 GB disk free, no WSL2/Docker yet, no native
GPU services running. Setup plan documented in
docs/runbooks/photon-on-mana-gpu.md (3–4 h, ~1 h of which is
download/unpack waiting).
2026-04-28 17:19:04 +02:00
Till JS
6f83fba66a docs(reports): geocoding self-hosting decision — recommend Photon on mana-gpu
Compares Pelias / Nominatim / Photon for self-hosting on the GPU
server, with current (2026-04-28) numbers from upstream docs +
GraphHopper's Photon-data downloads:

  Photon Europe pre-built dump: 30.6 GB, weekly refresh
  Photon Germany pre-built dump: 5.8 GB, weekly refresh
  Nominatim Germany import:     ~100 GB disk, 8–12 h, 12 GB RAM
  Pelias DACH (current):         3 GB RAM, 4 services, JS patch hack

Recommendation: Photon Europe-wide on mana-gpu. Single Java process,
embedded OpenSearch, no PBF import (download a tarball, restart),
weekly auto-updates from GraphHopper, integrates with the wrapper's
existing PhotonProvider via just an env-var change.

Once self-hosted, Photon registers as `privacy: 'local'` — the
sensitive-query block (Hausarzt, Klinikum, …) gets a real local
backend and no longer has to return empty results when Pelias is
down. Public Photon stays in the chain as a `privacy: 'public'`
last-resort fallback.

Migration plan included (~3–4 h total, ~1 h waiting), with
phase-by-phase risk assessment.

Pelias does not return — the 3 GB RAM + multi-container + patched
JS combination has no operational case once we have a self-hosted
Photon that already matches our wrapper's wire format.
2026-04-28 17:04:30 +02:00
Till JS
112e2cc1b4 feat(feedback): rename community → feedback (module + routes + domain)
Module, routes, and the public domain are now consistently named
"feedback":

- App registry: id 'community' → 'feedback', name 'Community' →
  'Feedback', icon Megaphone → HeartHalf (matches the already-global
  heart-half icon on the module header and in the PillNav user menu)
- Module config: communityModuleConfig → feedbackModuleConfig
- Route refs: all href/goto calls in the module views, MyWishesView,
  onboarding wish, and profile MyWishes switched to /feedback
- /feedback/+layout: brand "Mana Community" → "Mana Feedback", Megaphone
  → HeartHalf, the "Open in Mana" CTA now points at /?app=feedback
- Public-mirror domain: community.mana.how → feedback.mana.how
  (cloudflared-config.yml + docker-compose.macmini.yml CORS_ORIGINS +
  PUBLIC_MANA_ANALYTICS_URL_CLIENT). DNS must be created separately.
- Settings section: the help text now mentions feedback.mana.how

Internal: the community_show_real_name + community_karma DB columns stay
(a migration is out of scope for this rename). The settings search-index
category 'community' also stays; it mirrors the DB schema, not the
user-facing term.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:18:45 +02:00
Till JS
44f9155ed3 chore(dev): pnpm dev:analytics script + test-checklist mentions local-dev startup
Wasn't documented in the setup: for local web dev (5173), mana-analytics
must be running on 3064, otherwise FeedbackHook + toast poll +
/community throw ERR_CONNECTION_REFUSED. A convenience script + a note
in the test checklist prevent the stumble.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:54:32 +02:00
Till JS
0986d07a7d docs: feedback-hub manual-test-checklist
15 sections covering the Phase 3 end-to-end browser test flow:
onboarding wish, FeedbackHook + modal, public feed (logged in +
incognito), reactions + karma, status flow + loop closure,
AdminResponse, real-name toggle, owl profile (SSR + 404), threading,
Phase 3.F cleanup verification, founder whitelist, rate limit, voting
score, mobile responsiveness, quick DB-sanity SQLs.

Plus a "when something is broken" debug path and known gaps (email
notifications, voice submit, trending, karma decay) as roadmap markers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 22:33:28 +02:00
Till JS
246c94374f test(feedback): pixel-avatar + redact privacy-boundary; mark plan SHIPPED
Tests:
- packages/feedback/src/avatar.test.ts — 10 unit tests (determinism,
  mirror-symmetry, color contrast, padding-resilience, pseudonym-
  integration, density-sanity).
- services/mana-analytics/src/services/feedback-redact.test.ts —
  9 privacy-boundary tests verifying:
    * anonymous path NEVER includes realName, even when author opted in
    * auth path NEVER includes realName when author opted OUT
    * realName only when (opted-in AND auth-path) — both gates required
    * userId / deviceInfo / voteCount stripped from output

Plan-Doc:
- docs/plans/feedback-rewards-and-identity.md status → shipped (3.A,
  3.B, 3.C, 3.F live; 3.D, 3.E open), with commit hashes.

Service-layer minor: the REWARD const + redact exposed as __TEST__
exports (testing only, no behavior change).
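The two-gate realName rule these tests pin down can be sketched like this; the field names (`showRealName`, `deviceInfo`) are assumptions, not the service's actual schema:

```typescript
// Sketch of the privacy boundary: realName passes through ONLY when the
// author opted in AND the request is on the authenticated path.
interface FeedbackRow {
  text: string;
  realName: string | null;
  showRealName: boolean; // author's opt-in flag (assumed name)
  userId: string;
  deviceInfo: string;
}

function redact(
  row: FeedbackRow,
  authPath: boolean
): { text: string; realName?: string } {
  const realName =
    authPath && row.showRealName ? row.realName ?? undefined : undefined;
  // userId / deviceInfo / vote internals never leave the service.
  const out: { text: string; realName?: string } = { text: row.text };
  if (realName) out.realName = realName;
  return out;
}
```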

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 18:11:17 +02:00
Till JS
dbe24acfc4 feat(feedback,credits): community-credit grants — +5 submit / +500 ship / +25 reaction-match
Phase 3.A des feedback-rewards-and-identity-Plans. Direkter Reziprozitäts-
Loop: User kriegt sofort etwas zurück fürs Mitwirken, Originalwunsch-
Eulen werden beim Ship belohnt, Reagierer kriegen einen Anteil.

mana-credits:
- Neuer Endpoint POST /api/v1/internal/credits/grant + grantCredits()
  Service-Methode mit Idempotency via metadata.referenceId.
- transaction_type-Enum erweitert um 'grant' (eigener Typ statt
  Mismatch mit 'refund').
- Migration 0001_grant_transaction_type.sql + partial-Index auf
  metadata->>'referenceId' für O(log n) Idempotency-Lookup.

mana-analytics:
- FeedbackService stempelt sofort +5 Credits beim createFeedback (top-
  level only, Replies bekommen nichts), wenn Mindest-20-Zeichen erfüllt
  und Rate-Limit (10/User/24h via feedback_grant_log) nicht überschritten.
- adminUpdate triggert beim FRISCHEN Übergang nach 'completed':
  +500 Credits an Original-Wisher + +25 an alle, die mit 👍 oder 🚀
  reagiert haben. Doppel-Pay strukturell unmöglich via referenceId
  (`<id>_shipped`, `<id>_reaction_<userId>`).
- Founder-Whitelist via FEEDBACK_FOUNDER_USER_IDS env (verhindert
  Self-Reward).
- Drop voteCount-Spalte (durch reactions/score seit 0002 ersetzt).
- Migration 0003_grant_log_drop_vote_count.sql idempotent, lokal +
  prod eingespielt.

Plan: docs/plans/feedback-rewards-and-identity.md (Phase 3.A-3.F).
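A sketch of the referenceId-based idempotency, with an in-memory Map standing in for the partial index on metadata->>'referenceId'; only the key patterns come from the commit, the rest is illustrative:

```typescript
// Idempotent grant sketch: a grant with an already-seen referenceId is
// a no-op, which is what makes double pay structurally impossible.
const granted = new Map<string, { userId: string; amount: number }>();

function grantCredits(userId: string, amount: number, referenceId: string): boolean {
  if (granted.has(referenceId)) return false; // already paid, skip
  granted.set(referenceId, { userId, amount }); // 'grant' transaction row
  return true;
}

// referenceId patterns from the commit message: one per ship event,
// one per (feedback, reacting user) pair.
const shipRef = (feedbackId: string) => `${feedbackId}_shipped`;
const reactionRef = (feedbackId: string, userId: string) =>
  `${feedbackId}_reaction_${userId}`;
```

Retrying a crashed adminUpdate is then safe: re-running the grants simply hits the already-present keys.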

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:13:46 +02:00
Till JS
d5d2b6fcf8 docs(sync): document F1 + F3 commit-log corrections (Punkt 9 + 11)
Two commit artifacts from the multi-terminal sprint phase that can't be
fixed without destructive rebase + force-push of 28+ already-pushed
commits on origin/main:

- Punkt 9: F1 commit (7766ea502) shipped under the misleading title
  'docs(plans): mark llm-fallback-aliases SHIPPED' due to a parallel
  terminal session. The real F1 implementation (27 files: shared-ai/
  field-meta.ts, database.ts hooks, sync.ts wire format, mana-sync Go
  schema reset, mana-ai projections, MCP sync-db.ts) is intact in that
  commit despite the title.

- Punkt 11: F3 commit (6bb9d77be) accidentally bundled a one-line
  DragType addition (`| 'last'` in packages/shared-ui/src/dnd/types.ts)
  that belongs to the Lasts module, not to F3's updatedAt sweep. The
  regex codemod ran across the same files and pulled the change in.
  Functionally harmless (DragType union is additive) but semantically
  two unrelated changes share one commit.

Both commits are now annotated via local Git tags
(`sync-field-meta-overhaul-F1`, `sync-field-meta-overhaul-F3`) so a
future reader can find them by name regardless of the commit titles.
Tags are local — `git push --tags` makes them visible upstream.

Plan: docs/plans/sync-field-meta-overhaul.md "Commit-Log Corrections"
section.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:56:38 +02:00
Till JS
275130f8a6 test(sync): cross-cutting integration tests for field-meta overhaul (Punkt 12)
Six new tests in sync.test.ts under the "field-meta overhaul (F1-F4-fu)"
block, verifying the architectural promises of the 2026-04-26 sync
field-meta overhaul end-to-end:

- deriveUpdatedAt returns max(__fieldMeta[*].at)
- deriveUpdatedAt gracefully handles legacy / null records
- Dexie creating-hook stamps __fieldMeta + _updatedAtIndex on every
  local write
- Dexie updating-hook bumps __fieldMeta only for changed fields and
  syncs _updatedAtIndex with the latest at
- SYSTEM_BOOTSTRAP-stamped local insert produces origin='system' (the
  fallback path in userContextStore + kontextStore)
- Bootstrap-twin race scenario: local SYSTEM_BOOTSTRAP row + later
  server insert collapses via field-LWW with no conflict surface

Also re-exports SYSTEM_BOOTSTRAP from $lib/data/events/actor for
parity with the other SYSTEM_* sentinels.

35/35 sync.test.ts pass (29 prior + 6 new).
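The derivation the first two tests cover can be sketched as follows; the zero fallback for legacy rows is an assumption of this sketch, not necessarily the webapp's exact choice:

```typescript
// Sketch of deriveUpdatedAt: "last modified" is the max per-field
// timestamp in __fieldMeta; legacy/null records fall back to 0 here.
interface FieldMeta {
  at: number;      // epoch millis of the last write to this field
  origin: string;  // 'user' | 'system' | ...
}
interface SyncRecord {
  __fieldMeta?: Record<string, FieldMeta> | null;
}

function deriveUpdatedAt(record: SyncRecord): number {
  const meta = record.__fieldMeta;
  if (!meta) return 0; // legacy / null record: no per-field stamps yet
  let max = 0;
  for (const fm of Object.values(meta)) {
    if (fm.at > max) max = fm.at;
  }
  return max;
}
```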

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:54:35 +02:00
Till JS
421a49a2a8 docs(sync): close Punkt 5 audit — backend updated_at columns are not sync orphans
Survey of all 17 backend Drizzle schemas (mana-mail/-media/-auth/
-analytics/-research/-events/-subscriptions/-credits + apps/api/
{unlisted,website,traces,presi,todo}):

- 3 columns are actively read by service code:
  - research.providerConfigs.updatedAt — explicit write + DTO field
  - unlisted.snapshots.updatedAt — read in public response
  - website.customDomains.updatedAt — read in DNS-status response
- 14 columns are AUTO-ONLY: Drizzle stamps them via defaultNow() /
  $onUpdate(), no service code reads them.

But the AUTO-ONLY columns are NOT sync-orphans — they're standard
Drizzle audit-timestamp convention, useful for Postgres-level forensics
(`ORDER BY updated_at DESC` to find recently-modified rows during
debugging). F3's plan note ("pure server-internal columns, not touched")
correctly identified them. No cleanup is needed.

Closing the audit item with rationale documented in
docs/plans/sync-field-meta-overhaul.md and DATA_LAYER_AUDIT.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:47:41 +02:00
Till JS
ae6a14fb76 feat(shared-ai): SYSTEM_BOOTSTRAP system source — fallback inserts now stamp origin='system'
The race-window `getOrCreateLocalDoc()` fallback in userContextStore +
kontextStore stays (without it, a write that lands between "endpoint
provisioned the singleton in mana_sync" and "first pull landed it in
IndexedDB" would hit `update(missing-id, diff)` — a Dexie no-op that
silently swallows the user's edit). But it was semantically lying: the
insert stamped `origin='user'` even though the row is logically a
client-side replica of the server-side bootstrap.

This commit adds `SYSTEM_BOOTSTRAP = 'system:bootstrap'` to
`@mana/shared-ai` and wraps the two fallback inserts in
`runAsAsync(makeSystemActor(SYSTEM_BOOTSTRAP), ...)`. The Dexie hook
now stamps `origin: 'system'` on the empty-row insert — structurally
identical to the row mana-auth's bootstrap-singletons.ts writes. When
the server's pull arrives later both sides carry the same origin and
the conflict-gate stays quiet. The user's subsequent writes still
stamp `origin: 'user'` on the changed fields.

Plan: docs/plans/sync-field-meta-overhaul.md (F4-fu Fallback-Origin row).
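A sketch of the actor-scoped origin stamping; the real helper is async (`runAsAsync`) and the hook internals differ, so treat the names and shapes here as approximations:

```typescript
// Sketch: scoping an actor so that writes inside fn() are stamped with
// its origin. A synchronous runAs keeps the sketch short.
const SYSTEM_BOOTSTRAP = "system:bootstrap";

interface Actor {
  origin: "user" | "system";
  source: string;
}
const makeSystemActor = (source: string): Actor => ({ origin: "system", source });

let currentActor: Actor = { origin: "user", source: "ui" };

function runAs<T>(actor: Actor, fn: () => T): T {
  const prev = currentActor;
  currentActor = actor; // the creating-hook reads this when stamping
  try {
    return fn();
  } finally {
    currentActor = prev; // restore so later writes stamp 'user' again
  }
}

// What a creating-hook could stamp into __fieldMeta on insert:
function stampFieldMeta(fields: string[]) {
  const at = Date.now();
  return Object.fromEntries(
    fields.map((f) => [f, { at, origin: currentActor.origin }])
  );
}
```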

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:44:30 +02:00
Till JS
099cac4a01 feat(auth): explicit bootstrap-singletons endpoint + idempotent functions (F4 robust)
The F4 server-side singleton bootstrap was fire-and-forget at signup
time — a transient mana_sync outage during registration would leave the
user with no singleton and only the in-store `getOrCreateLocalDoc()`
fallback to race on the first write. The signup-hook is still the
happy-path zero-latency bootstrap; this commit adds a deliberate
reconciliation path that converges on every boot.

- Idempotent `bootstrapUserSingletons` / `bootstrapSpaceSingletons`:
  both functions now existence-check sync_changes before INSERT and
  return boolean (true=inserted, false=skipped).
- New endpoint `POST /api/v1/me/bootstrap-singletons` — JWT-gated under
  the existing `/api/v1/me/*` prefix. Provisions the caller's
  userContext and the kontextDoc for every Space they're a member of.
  Returns `{ ok, bootstrapped: { userContext, spaces: { id: bool } } }`.
- Webapp `(app)/+layout.svelte` calls the endpoint once per
  authenticated boot, after `restoreClientIdFromDexie()` and before
  `createUnifiedSync.startAll()`. Best-effort; failures are swallowed
  into a console warning, and the in-store fallback still covers the
  rare race window.

Plan: docs/plans/sync-field-meta-overhaul.md (F4-robust row).
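
The idempotency contract can be sketched as a pure check-then-insert. An in-memory `Set` stands in for the `sync_changes` existence check; the function and store names are illustrative, not the real code.

```typescript
// Minimal sketch of the idempotent bootstrap contract. The in-memory store
// stands in for the sync_changes existence check; names mirror the commit
// message but are assumptions, not the real implementation.
interface SingletonStore {
  has(key: string): boolean;
  insert(key: string): void;
}

// Returns true if the singleton was inserted, false if it already existed.
function bootstrapSingleton(store: SingletonStore, key: string): boolean {
  if (store.has(key)) return false;
  store.insert(key);
  return true;
}

function makeMemoryStore(): SingletonStore {
  const rows = new Set<string>();
  return { has: (k) => rows.has(k), insert: (k) => void rows.add(k) };
}

// Shape of the endpoint response described above:
// { ok, bootstrapped: { userContext, spaces: { id: bool } } }
function bootstrapAll(store: SingletonStore, userId: string, spaceIds: string[]) {
  const spaces: Record<string, boolean> = {};
  for (const id of spaceIds) spaces[id] = bootstrapSingleton(store, `kontext:${id}`);
  return {
    ok: true,
    bootstrapped: {
      userContext: bootstrapSingleton(store, `userContext:${userId}`),
      spaces,
    },
  };
}
```

Calling it twice returns `true` flags on the first boot and all-`false` on every later boot, which is what makes the per-boot reconciliation safe.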

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:38:14 +02:00
Till JS
53fecbf4a7 chore(dexie): v55 — sweep orphan updatedAt field from existing rows (F3 cleanup)
After F3 of the sync field-meta overhaul, every read of "last modified"
goes through `deriveUpdatedAt(record)` over `__fieldMeta`. The legacy
`updatedAt` field on existing IndexedDB rows was deliberately left in
place by v53 (its comment explicitly defers the row-rewrite to a later
cleanup) so the cut-over could proceed without a full DB rewrite.

This v55 upgrade walks every sync-relevant table (`Object.keys(TABLE_TO_APP)`)
and deletes `row.updatedAt`. It is idempotent (rows without the field
are a no-op) and best-effort (a try/catch per table guards against a
registry entry that doesn't yet have a Dexie store).

Local-only tables (_pendingChanges, _activity, _clientIdentity,
_aiDebugLog) never carried `updatedAt`, so they stay out of the sweep.

Plan: docs/plans/sync-field-meta-overhaul.md (F3-fu row in Shipping Log).
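
The per-row sweep reduces to a pure function, sketched here over a plain array standing in for a Dexie table; the real upgrade iterates Dexie stores instead.

```typescript
// Sketch of the per-row sweep: delete the orphan field, leave everything
// else intact. Idempotent by construction: rows without the field are a
// no-op. A plain array stands in for the Dexie table here.
interface Row {
  id: string;
  updatedAt?: string;
  [key: string]: unknown;
}

function sweepUpdatedAt(rows: Row[]): number {
  let touched = 0;
  for (const row of rows) {
    if ('updatedAt' in row) {
      delete row.updatedAt;
      touched++;
    }
  }
  return touched; // number of rows actually modified
}
```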

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:26:21 +02:00
Till JS
3df7391905 feat(auth): bootstrap per-Space kontextDoc on Space-creation (F4 follow-up)
Symmetrically extends the F4 server-side singleton bootstrap to the
per-Space `kontextDoc`. Every Space-creation — Personal at signup and
brand/club/family/team/practice via the org plugin — now writes an empty
kontextDoc row straight into mana_sync.sync_changes with origin='system',
client_id='system:bootstrap'. Fresh clients pull the row instead of
racing on a local insert that the next pull would clobber.

- New `bootstrapSpaceSingletons(spaceId, ownerUserId, syncSql)` in
  services/mana-auth/src/services/bootstrap-singletons.ts; shared
  `buildFieldMeta` helper extracted.
- `createBetterAuth(databaseUrl, syncDatabaseUrl, webauthn)` now takes
  the sync-DB URL and lazy-creates a module-scoped postgres pool for
  the bootstrap inserts.
- Hook into `databaseHooks.user.create.after` (only on `created: true`
  from createPersonalSpaceFor) and `organizationHooks.afterCreateOrganization`.
- Webapp `kontextStore.ensureDoc()` made private as `getOrCreateLocalDoc()` —
  same fallback role as userContextStore's after F5. Public API is now just
  setContent + appendContent.

Plan: docs/plans/sync-field-meta-overhaul.md (F4-fu row in Shipping Log).
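
The extracted `buildFieldMeta` helper can be sketched as below; the real signature in bootstrap-singletons.ts may differ, so treat the shapes as assumptions.

```typescript
// Hypothetical sketch of the shared buildFieldMeta helper: one meta entry
// per data field, all stamped with the bootstrap origin and client id.
interface FieldMeta {
  at: string;
  origin: 'system';
  client_id: string;
}

function buildFieldMeta(
  data: Record<string, unknown>,
  at: string
): Record<string, FieldMeta> {
  const meta: Record<string, FieldMeta> = {};
  for (const field of Object.keys(data)) {
    meta[field] = { at, origin: 'system', client_id: 'system:bootstrap' };
  }
  return meta;
}
```

Because every field carries the same origin and client id as the client-side fallback insert, a later pull of the server row matches field-for-field and the conflict gate stays quiet.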

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 01:21:31 +02:00
Till JS
8804a20a7f feat(community): public anon hub — module + inline + admin + onboarding
Makes @mana/feedback omnipresent and public. Phase 2 of the
public community hub plan (docs/plans/feedback-hub-public.md).

Inline touchpoints:
- FeedbackHook: lightbulb button that opens FeedbackQuickModal
  pre-filled with module context. Auto-injected into every
  ModuleShell header (window-actions row), opt-out via the
  hideFeedback prop.
- GlobalFeedbackPill: floating "Idea?" pill bottom-right, self-hides
  on /onboarding, /feedback, /community, and for guests. Auto-detects
  module context from the URL or the ?app= param.
- FeedbackQuickModal: 3-click submit with category dropdown, public
  toggle, and a "Visible as {pseudonym}" confirm state.

Community module (its own module, in the Workbench drop-bar):
- module.config.ts (server-only, no sync tables)
- queries.ts: useCommunityFeed + useCommunityItem with an auth-aware
  switch between public and auth-enriched endpoints
- ListView/DetailView/RoadmapView with an ItemCard component
- app-registry entry (Megaphone icon, #F59E0B)

Public mirror routes (no AuthGate):
- /community            — feed with SSR pre-render via the public endpoint
- /community/[id]       — single item + replies, SSR
- /community/roadmap    — Kanban Submitted/Planned/InProgress/Completed
- /community/admin      — founder-only triage (status, admin response,
                         visibility toggle); client-side role-gate
                         redirect to /community.
SEO: <svelte:head> with title/description, a <noscript> fallback,
and stale-while-revalidate cache headers.

API:
- The web app's lib/api/feedback.ts now points at the real
  mana-analytics URL (3064 in dev) instead of mana-auth. New
  publicFeedbackService for unauthenticated SSR.
- getManaAnalyticsUrl() in lib/api/config.ts.

Onboarding wish public by default:
- Disclosure text: "Appears on the community page under your animal
  pseudonym".
- Toggle "Share publicly" / "Admins only", defaulting to on.
- The submitted-confirm state shows the generated display name.

Plan-doc updates:
- feedback-hub.md Phase 2 slimmed down to a pointer at
  feedback-hub-public.md
- feedback-hub-public.md completed: architecture options A-E, Phase 2.x,
  Phase 3 roadmap (16 future features), risks.
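
The pill's context auto-detection and hide rules can be sketched as two pure functions; the hide list and helper names are taken from this message, everything else is an assumption about the real component.

```typescript
// Hedged sketch of GlobalFeedbackPill's module-context detection from the
// URL path or the ?app= param, plus its self-hide rule. Illustrative only.
const HIDDEN_PREFIXES = ['/onboarding', '/feedback', '/community'];

function detectModuleContext(url: URL): string | null {
  // an explicit ?app= param wins over the path
  const fromParam = url.searchParams.get('app');
  if (fromParam) return fromParam;
  const [first] = url.pathname.split('/').filter(Boolean);
  return first ?? null;
}

function pillVisible(url: URL, isGuest: boolean): boolean {
  if (isGuest) return false;
  return !HIDDEN_PREFIXES.some((p) => url.pathname.startsWith(p));
}
```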

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 00:02:25 +02:00
Till JS
fd11481d94 docs(plans): mark sync-field-meta-overhaul F1-F7 SHIPPED
All seven phases of docs/plans/sync-field-meta-overhaul.md landed.
Final shipping log:

  F1 7766ea502  __fieldMeta replaces __fieldTimestamps trio
  F2 ad5e04a55  origin-gated conflict detection
  F3 6bb9d77be  drop updatedAt as a synced data field
  F4 c07db300b  server-side singleton bootstrap (mana-auth)
  F5 d78f57c04  drop public userContextStore.ensureDoc()
  F6 a031493fe  stable client_id in Dexie
  F7 2a8e8ff98  drop repair-silent-twin + legacy-avatar migrations

Structural outcome: the four conflict-toast root-causes diagnosed
on 2026-04-26 (updatedAt as synced field, history-replay false-
positives, ensureDoc race, localStorage-bound client_id) are all
closed. The conflict surface fires only when a real user edit
genuinely loses to a server overwrite — anything else is silent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 23:54:09 +02:00
Till JS
6bb9d77be9 feat(sync): F3 — drop updatedAt as a synced data field
Removes `updatedAt` from the wire protocol and from every Local-prefixed
record type. Replaced by two orthogonal mechanisms — deriveUpdatedAt()
for read-side public-facing values, _updatedAtIndex shadow for indexed
sorts.

Local-side:
- New `_updatedAtIndex` shadow column. Stamped by the Dexie creating /
  updating hook on every write. Stripped from the pending-change payload
  so it never travels to mana-sync. Indexed in Dexie v53 on the 22 tables
  that previously indexed `updatedAt`.
- `deriveUpdatedAt(record)` in sync.ts returns max(__fieldMeta[*].at) so
  the public-facing Task / Note / etc. shape keeps an `updatedAt: string`
  property without holding it as data.
- Type-converters across ~60 module/queries.ts and types.ts files now
  call `deriveUpdatedAt(local)` instead of reading `local.updatedAt`.

Module-store sweep:
- Regex codemod removed `updatedAt: new Date().toISOString()` /
  `: now` / `: now()` / `: nowIso()` stamping from 121 store files
  (~382 call sites total). Single-property update calls
  (`{ updatedAt: now }`) collapsed to `{}`; touch-only patterns
  (writing/drafts, writing/generations) kept the call as a no-op
  because the hook now stamps `_updatedAtIndex` automatically on
  any Dexie modification.
- Local* interfaces stripped of `updatedAt: string` (43 types.ts files).
  Public-facing types (Task, Note, Mission, Agent, …) keep
  `updatedAt: string` as a computed read-side property.
- Companion's chat conversation now sorts on a real
  `lastMessageAt` data field instead of touching `updatedAt`.
- Session-only stores (times/session-alarms, session-countdown-timers)
  stamp `updatedAt: now` directly because they're not in Dexie and
  have no field-meta layer to derive from.

Sync engine:
- applyServerChanges sets `_updatedAtIndex` itself when applying
  server changes (max of server-field times for updates, recordTime
  for inserts) so server-replays land orderable.
- Dropped the legacy `localUpdatedAt` fallback — every record now has
  `__fieldMeta`, the per-field at is the canonical source.
- Soft-delete tombstone path stops stamping `updatedAt: serverTime`,
  uses `_updatedAtIndex` instead.

Server-side:
- mana-ai iteration-writer no longer emits `updatedAt` in
  sync_changes.data; receivers derive it from the field-meta map.
- mana-sync types: no change (the wire format already uses
  `field_meta` / `at` from F1).

Out of scope: backend Drizzle schemas (mana-credits, mana-events, …)
keep their `updated_at` columns. Those are pure server-internal — not
part of the sync_changes / __fieldMeta mechanism F3 cleans up.

Tests + checks:
- 0 svelte-check errors over 7652 files.
- 29/29 sync.test.ts (vitest).
- 61 mana-ai bun tests.
- mana-sync go test ./... cached green.

Plan: docs/plans/sync-field-meta-overhaul.md F3.
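
The read-side derivation can be sketched as a pure max over the field-meta timestamps; the `__fieldMeta` shape is approximated from this and the F1 commit messages.

```typescript
// Sketch of deriveUpdatedAt: max over the per-field `at` timestamps in
// __fieldMeta, so "last modified" is derived instead of stored as data.
interface WithFieldMeta {
  __fieldMeta?: Record<string, { at: string }>;
}

function deriveUpdatedAt(record: WithFieldMeta): string | undefined {
  const metas = Object.values(record.__fieldMeta ?? {});
  if (metas.length === 0) return undefined;
  // ISO-8601 strings in the same zone sort lexicographically, so a plain
  // string max is enough here.
  return metas.map((m) => m.at).reduce((a, b) => (a > b ? a : b));
}
```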

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 23:12:22 +02:00
Till JS
ba6274edbe refactor(feedback): align package + DB enums, plan central hub
Makes @mana/feedback the SSOT for all user-feedback categories and
statuses, a prerequisite for onboarding wishes, NPS, churn feedback,
etc. landing there in the future.

- Status enum: DB values renamed new/reviewed/done/rejected →
  submitted/under_review/completed/declined (the package wins). On
  PG >= 10, ALTER TYPE … RENAME VALUE is non-destructive.
- Category 'praise' adopted into the package (it existed only in the DB).
- New category 'onboarding-wish' in package + DB for the wish step.
- Default status in the DB: 'new' → 'submitted'.
- CreateFeedbackInput.isPublic is now optional; the service passes it
  through and the default stays true. Private categories such as
  onboarding-wish set it to false.
- The schema file now carries an SSOT comment that prevents future drift.

Hand-authored migration under services/mana-analytics/drizzle/0001_*.sql
because drizzle-kit push does not rename enum values reliably. Apply it
manually before the next db:push:

  psql "$DATABASE_URL" -f services/mana-analytics/drizzle/0001_align-feedback-enums.sql

Plan in docs/plans/feedback-hub.md (Phases 0-4); Phases 0 + 1 now, 2-4
deferred.
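
For illustration only, the rename can be expressed as a client-side normalization map for any legacy values still in flight during the cut-over; the actual migration happens entirely in SQL, and this helper is hypothetical.

```typescript
// Hypothetical normalization of the renamed status values. Not part of the
// shipped migration (which is ALTER TYPE … RENAME VALUE in Postgres).
type FeedbackStatus = 'submitted' | 'under_review' | 'completed' | 'declined';

const LEGACY_STATUS: Record<string, FeedbackStatus> = {
  new: 'submitted',
  reviewed: 'under_review',
  done: 'completed',
  rejected: 'declined',
};

const CURRENT: readonly FeedbackStatus[] = ['submitted', 'under_review', 'completed', 'declined'];

function normalizeStatus(value: string): FeedbackStatus | null {
  if (value in LEGACY_STATUS) return LEGACY_STATUS[value];
  return (CURRENT as readonly string[]).includes(value) ? (value as FeedbackStatus) : null;
}
```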

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 21:52:25 +02:00
Till JS
bf3bca268a feat(lasts): M1-M7 — module ship + Meilensteine-Aggregator
Mirror sibling to firsts: the *last* time you did something, either
marked explicitly or detected retroactively. Plan:
docs/plans/lasts-module.md.

M1 skeleton: Dexie v51 lasts table, encryption registry, per-Space
welcome seed, empty-state ListView. Categories extracted from
firsts/types.ts into $lib/data/milestones/categories.ts (re-exports
keep the firsts API stable).

M2 CRUD + DetailView: StatusTabs (Suspected/Confirmed/Reclaimed),
quick-add with mode toggle, always-editable DetailView with lifecycle
buttons (Confirm, Reclaim with inline note), 44 i18n keys × 5 locales.

M3 inbox + inference: Dexie v52 lastsCooldown (12-month cooldown,
deterministic ID), source-registry pattern in inference/, places
source with the heuristic visitCount>=5, span>=180d, silence>=365d.
InboxView with accept/dismiss + manual scan. contacts/habits move to
M3.b once their respective frequency fields exist.

M4 AI tools: 5 tools in the AI_TOOL_CATALOG (create_last, confirm_last,
reclaim_last, list_lasts, suggest_lasts), webapp executor with
vault-locked handling. Server drift test 4/4, schema test 6/6.

M5 reminders + settings: pivot to an in-app DueBanner instead of OS
push (no PWA push system in the repo). Pure date math (12 Vitest
cases), settings store with 4 toggles, DueBanner with max-N rendering,
test-banner button.

M6 visibility + unlisted sharing: VisibilityPicker + SharedLinkControls
in DetailView, buildLastBlob with a reflective-core whitelist
(reclaimed lasts hard-blocked), SharedLastView public render, the
share dispatcher knows 'lasts'.

M7 milestones aggregator: cross-module timeline + year recap uniting
firsts with lasts. Pure aggregators (mergeMilestones,
buildMilestonesRecap), 12 Vitest cases. /milestones and
/milestones/recap/[year] routes, cross-link in lasts/ListView.

Validation: 0 errors / 0 warnings (svelte-check, 7645 files), 24/24
tests, i18n parity 39×5 aligned (+2 namespaces), i18n keys
baseline-equal, crypto 211 tables.

LOCAL TIER PATCH: lasts is 'guest' for testing; set it to 'beta'
before release (packages/shared-branding/src/mana-apps.ts).
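
The M3 places heuristic reduces to a pure predicate. The thresholds are the ones named above; the function name and stats shape are illustrative.

```typescript
// Sketch of the places-source heuristic: a place suggests a "last" once it
// was visited often enough, over a long enough span, and has since gone
// silent. Thresholds from M3; shape is an assumption.
interface PlaceStats {
  visitCount: number;
  spanDays: number;    // days between first and most recent visit
  silenceDays: number; // days since the most recent visit
}

function suggestsLast(p: PlaceStats): boolean {
  return p.visitCount >= 5 && p.spanDays >= 180 && p.silenceDays >= 365;
}
```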

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 21:40:29 +02:00
Till JS
ad5e04a554 feat(sync): F2 — origin-gated conflict-detection
Closes the false-positive conflict-toast loop on history-replay. Conflict
notifications now fire only when the local field meta records origin='user'
AND the pull is not an initial hydration round.

Origin source-of-truth:
- shared-ai/field-meta.ts → originFromActor(actor) maps actor.kind onto
  the FieldOrigin enum: user→'user', ai→'agent', system+SYSTEM_MIGRATION
  →'migration', any other system source→'system'.
- Dexie creating/updating hooks call it once per write so every persisted
  field carries the right pipeline tag.
- repair-silent-twin + legacy-avatar wrap their writes in
  runAsAsync(makeSystemActor(SYSTEM_MIGRATION, ...)) so the hook stamps
  origin='migration'. Future replays of those rows from another device
  will not surface as conflicts.

applyServerChanges options:
- New ApplyServerChangesOptions { isInitialHydration?: boolean }.
- Push-response and pull-paged-loop callers compute it from the cursor
  state (`!oldestCursor` / `!cursor`). Pagination resets the flag after
  the first page.
- Conflict-trigger gates on `!options.isInitialHydration && localMeta[k]
  ?.origin === 'user'` in addition to the prior tests.

Tests (sync.test.ts):
- New: replay-burst (10 sequential server updates → 0 conflicts)
- New: agent-origin local write + server overwrite → 0 conflicts
- New: isInitialHydration suppresses everything → 0 conflicts
- New: real user edit + server overwrite → 1 conflict
- All 25 prior tests still pass.

29/29 vitest sync.test.ts cases green; svelte-check 0 errors over 7647
files.

Plan: docs/plans/sync-field-meta-overhaul.md F2 done-criteria met.
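
The gate described above can be sketched as one pure predicate; the option and meta shapes approximate sync.ts as described in this message.

```typescript
// Sketch of the origin gate: a server overwrite surfaces as a conflict only
// outside initial hydration AND when the local field meta says a real user
// authored the value. Shapes are approximations of the real sync.ts types.
type FieldOrigin = 'user' | 'agent' | 'system' | 'migration';

interface ApplyServerChangesOptions {
  isInitialHydration?: boolean;
}

function shouldRaiseConflict(
  localMeta: Record<string, { origin: FieldOrigin }>,
  field: string,
  options: ApplyServerChangesOptions
): boolean {
  if (options.isInitialHydration) return false;
  return localMeta[field]?.origin === 'user';
}
```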

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 21:38:56 +02:00
Till JS
7766ea5021 docs(plans): mark llm-fallback-aliases SHIPPED, add M-by-M commit table
All 5 milestones landed today in one continuous session: registry,
health cache, fallback router, observability, and consumer migration.
115 service-side tests, validator covers 2538 files.
2026-04-26 21:27:57 +02:00
Till JS
e1860234d6 docs(plans): LLM-fallback via model-aliases — spec
Centralized resilience-layer in mana-llm: callers send semantic aliases
(`mana/long-form`, `mana/structured`, …), the router resolves to a
provider chain and falls back through unhealthy providers via a 30s
health-probe loop. Triggered by today's GPU-server outage, which hung
the writing-generation endpoint for 75s before returning a 500.

5 milestones, ~3 dev-days, big-bang migration (not live yet, so no legacy).
All hardcoded `ollama/...` strings move into a single aliases.yaml SSOT,
new validate-llm-strings.mjs gate prevents regression.
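
The resolution step can be sketched as below. The two alias names come from the spec; the chain contents and the health predicate are invented for illustration.

```typescript
// Sketch of alias → provider-chain resolution with health-gated fallback.
// Chain entries here are placeholders, not the real aliases.yaml contents.
const ALIASES: Record<string, string[]> = {
  'mana/long-form': ['ollama/model-a', 'openai/model-b'],
  'mana/structured': ['ollama/model-c', 'openai/model-d'],
};

function resolveModel(
  alias: string,
  healthy: (model: string) => boolean // fed by the 30s health-probe cache
): string | null {
  const chain = ALIASES[alias];
  if (!chain) return null;
  for (const model of chain) {
    if (healthy(model)) return model; // first healthy provider wins
  }
  return null; // whole chain down
}
```

A caller then sends `mana/long-form` and never hard-codes a provider string, which is what the validate-llm-strings.mjs gate enforces.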

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 20:19:34 +02:00
Till JS
507532c367 docs(workbench-seeding-cleanup): record polish-pass commits
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 19:34:25 +02:00
Till JS
3d30e39ae7 feat(comic): Mc5 — Wardrobe-Hook "Als Comic-Character"
Bridge from Wardrobe to Comic: the user clicks "As comic character"
on an outfit or a single garment, lands in the character builder with
a pre-filled add-prompt ("wearing the Bühnenoutfit"), picks a style,
and renders the first 4 variants.

Wardrobe buttons:
- DetailOutfitView: below the TryOnButton, an outline link navigates
  to `/comic/character/new?title=…&prompt=wearing+the+
  OUTFITNAME+outfit`.
- DetailGarmentView: analogous with `prompt=wearing+GARMENTNAME` for
  a single garment. Both are only visible when the outfit/garment is
  not archived.
- Sparkle icon plus a subdued neutral border style (not primary, that
  is the try-on CTA); hover switches to primary/40.

The comic CharacterBuilder gains three optional props:
`initialName?`, `initialAddPrompt?`, `initialStyle?`. Ignored in
extend mode (the source is then the existing character); in create
mode they serve as $state initial values. The one-time read is
intentional: mounting happens fresh per route visit, so a single
capture is correct.

`/comic/character/new/+page.svelte` now parses
`page.url.searchParams` for `title`, `prompt`, and `style` and passes
them through as props. style is validated against the VALID_STYLES
list; broken URL params fall back to "unset/default" without a crash.

Deliberately NOT done: using the try-on output directly as
sourceBodyMediaId. The try-on image is tagged in mana-media with
`app='picture'`; `verifyMediaOwnership` on
`/picture/generate-with-reference` accepts only
`['me','wardrobe','comic']`, so the comic generate would abort with
HTTP 404. The fix would be a server route that re-tags picture output
as a comic asset, but that is its own spec. The current path is
cleaner: raw meImages refs stay the source, and the add-prompt drives
the outfit look.

Plan doc §11 Mc5 documents the path and why there is no try-on reuse.

Comic files type-check cleanly.
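
The query-param handoff can be sketched as a pure mapping from search params to builder props. The VALID_STYLES entries below are invented placeholders; the real list lives in the comic module.

```typescript
// Sketch of the searchParams → props handoff with style validation.
// Style names are placeholders; only the fallback behavior is the point.
const VALID_STYLES = ['manga', 'noir', 'watercolor'] as const;
type Style = (typeof VALID_STYLES)[number];

interface BuilderProps {
  initialName?: string;
  initialAddPrompt?: string;
  initialStyle?: Style;
}

function propsFromSearchParams(params: URLSearchParams): BuilderProps {
  const style = params.get('style');
  const validStyle =
    style !== null && (VALID_STYLES as readonly string[]).includes(style);
  return {
    initialName: params.get('title') ?? undefined,
    initialAddPrompt: params.get('prompt') ?? undefined,
    // a broken style param falls back to undefined instead of crashing
    initialStyle: validStyle ? (style as Style) : undefined,
  };
}
```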

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 19:32:29 +02:00
Till JS
313809bc95 feat(comic): Mc1 — Character-Datenschicht (Iteration + Pinning)
The comic module so far used raw meImages directly as story refs:
gpt-image-2 / Nano Banana vary between calls, panel 1 looked
different from panel 4, and the user had no iteration step before
the story. Solution: the comic character as its own entity, built
once, iterated, and pinned, then used as the story anchor.

Data layer:
- Dexie v49 `comicCharacters` (space-scoped, indices createdAt /
  style / isFavorite / isArchived).
- types.ts: LocalComicCharacter with name + style + addPrompt +
  sourceFaceMediaId + sourceBodyMediaId? + variantMediaIds[] +
  pinnedVariantId?, plus toCharacter + characterCoverVariantId
  helpers (pinned > first variant > null).
- crypto/registry.ts: comicCharacters entry; name + description
  + addPrompt + tags encrypted; style + IDs + variant list +
  booleans plaintext.
- collections.ts: comicCharactersTable.
- queries.ts: useAllCharacters, useCharactersByStyle, useCharacter
  via scopedForModule (all space-scoped).
- stores/characters.svelte.ts: createCharacter (auto-pins the first
  variant as a fallback), appendVariant (auto-pins if none yet),
  pinVariant, removeVariant (with a pin fallback to the first
  remaining variant), updateCharacter, toggleFavorite,
  archiveCharacter, deleteCharacter. Arrays are de-proxied via
  [...arr] (Svelte 5 $state defense).
- module.config.ts: comicCharacters in the tables list.
- picture/types.ts + queries.ts: comicCharacterId back-ref on
  LocalImage + Image, mutually exclusive with comicStoryId.
- 3 new encryption round-trip tests (8 green in total) covering a
  character row, a build in progress (no variants), and a round-trip.

Architecture decisions (documented in plan-doc §11):
- **space-scoped**, not user-global: the source meImages are
  themselves space-scoped post-v40; anything else leaves orphan
  refs after a Space switch.
- **Snapshot at story-create**, no live lookup: stories store the
  mediaId of the pinned variant at creation time, so re-pinning a
  character leaves existing stories unchanged.
- **n=4 fixed variant count**: generated in parallel in a single
  gpt-image-2 call; the sweet spot for choice without decision
  fatigue.
- **Mutually exclusive back-refs** on picture.images:
  comicStoryId XOR comicCharacterId; an image is a panel OR a
  variant, never both.

Mc2 (UI: builder + variant grid + routes), Mc3 (story-create update
+ soft migration), Mc4 (MCP/catalog), and Mc5 (Wardrobe hook) follow
separately.
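
The cover-selection rule (pinned > first variant > null) can be sketched as below; the shape mirrors LocalComicCharacter as described, and the membership check on the pinned id is a defensive addition of this sketch.

```typescript
// Sketch of the characterCoverVariantId helper: the pinned variant wins,
// else the first variant, else null. The pinned-id membership check is a
// defensive assumption, not confirmed from the real code.
interface CharacterVariants {
  variantMediaIds: string[];
  pinnedVariantId?: string;
}

function characterCoverVariantId(c: CharacterVariants): string | null {
  if (c.pinnedVariantId && c.variantMediaIds.includes(c.pinnedVariantId)) {
    return c.pinnedVariantId;
  }
  return c.variantMediaIds[0] ?? null;
}
```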

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 15:52:58 +02:00
Till JS
547f643a6f docs(workbench-seeding-cleanup): record final architecture, all shipped
The plan ended up simpler than the four-layer sequence I originally
sketched: making the hook smart (use `getEffectiveSpaceId()` instead of
the literal sentinel) replaced both Schicht-A Etappe-2 (throw on
missing) and the per-call-site stamp migration. With that, the
transitional legacy-Home check + post-reconcile dedup pass also
became dead code and got removed in the same cleanup commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 15:51:23 +02:00
Till JS
21c64e2616 docs(workbench-seeding-cleanup): record shipped status, sequence Schicht A
Plan now reflects what's actually merged: D-soft, B+C, and Schicht A
Etappe 1 are in. Etappe 2 (creating-hook flip to throw) is queued
post-soak; D-hard (deterministic-id rename) follows after that.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 15:12:25 +02:00
Till JS
faa16fa898 feat(augur): new module — signs collected, patterns read
Introduces the Augur module: capture omens, fortunes, and hunches in
a poetic Witness mode and read them back empirically in Oracle mode.
Same data, two lenses; the killer mechanic is the Living Oracle that
materialises empirical reflections from the user's own resolved
history at capture time.

Why now: docs/future/MODULE_IDEAS.md captured the brainstorm, then
the spec landed at docs/plans/augur-module.md as a Witness+Oracle
hybrid. Built end-to-end through M6 in one go.

Highlights:
- Witness gallery + DueBanner + DetailView + Resolve flow
- Oracle stats: calibration-per-source, vibe-hit-rate, cross-module
  correlation engine (mood/sleep/duration after-windows)
- Living Oracle: deterministic fingerprint+match against user's own
  resolved history; cold-start-gated at 50 resolved entries
- Year-Recap view at /augur/recap/[year]
- 5 MCP tools: capture_sign, resolve_sign, list_open_signs,
  consult_oracle, augur_year_recap (in AI_TOOL_CATALOG)
- Visibility integration: default 'private', VisibilityPicker in
  DetailView. Server-side unlisted-snapshot-publish stays follow-up
- v47 Dexie schema; encrypted: source/claim/feltMeaning/
  expectedOutcome/outcomeNote/tags/livingOracleSnapshot
- LOCAL TIER PATCH: requiredTier 'guest' for testing

Strings interpolated through `T` constants so the i18n-hardcoded
baseline stays at 0 for augur — real $_('augur.*') keys land later.
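
The Oracle-mode mechanics above can be sketched as two pure pieces: the cold-start gate (the 50-entry threshold is from this message) and a per-source calibration. The record shape is invented for illustration.

```typescript
// Sketch of the cold-start gate plus calibration-per-source. Only the
// threshold (50 resolved entries) comes from the commit; shapes are assumed.
interface ResolvedSign {
  source: string;
  hit: boolean; // did the omen pan out
}

const COLD_START_MIN = 50;

function oracleReady(resolved: ResolvedSign[]): boolean {
  return resolved.length >= COLD_START_MIN;
}

function calibrationPerSource(resolved: ResolvedSign[]): Record<string, number> {
  const totals: Record<string, { hits: number; n: number }> = {};
  for (const s of resolved) {
    const t = (totals[s.source] ??= { hits: 0, n: 0 });
    t.n++;
    if (s.hit) t.hits++;
  }
  const out: Record<string, number> = {};
  for (const [source, t] of Object.entries(totals)) out[source] = t.hits / t.n;
  return out; // hit rate per source, 0..1
}
```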

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 15:02:15 +02:00
Till JS
d62ae8f1e3 fix(workbench): dedup duplicate Home scenes accumulated by seeding race
The Home-seeder in workbench-scenes.svelte.ts writes new scenes without
spaceId, so the creating-hook stamps them with the _personal:<userId>
sentinel. The per-space dedup check filters by the real space UUID and
never finds them — every login adds another Home row, and every visit
to a non-personal Space (Brand/Family/Team) drops yet another seed
into the personal Space.

This is Schicht D-soft of the broader cleanup plan
(docs/plans/workbench-seeding-cleanup.md): a one-shot dedup pass that
collapses duplicate "Home" rows per spaceId, merging openApps from the
losers into the survivor (most apps wins, ties by most-recent
updatedAt) and soft-deleting the rest so mana-sync propagates the
cleanup to other devices. Touches only rows that look like fresh
default seeds — anything customized (description, wallpaper, agent
binding, scope tags, non-Home name) is left alone.

Wired in two places: a Dexie v48 upgrade so it runs once per device on
schema bump, and a belt-and-suspenders pass in (app)/+layout.svelte
right after reconcileSentinels() to catch the edge case where
sentinel-stamped rows just collapsed into the same UUID group as
already-reconciled rows.

The structural fix that prevents new duplicates from ever forming
(per-space-seeds registry + deterministic seed ids +
creating-hook hardening) ships in follow-up commits per the plan.
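
The survivor rule of the dedup pass can be sketched as a pure merge; the scene shape is simplified from the workbench scenes described above.

```typescript
// Sketch of the dedup survivor rule: most openApps wins, ties break by
// most-recent updatedAt, and the losers' openApps merge into the survivor.
// Assumes at least one candidate scene; shapes are simplified.
interface HomeScene {
  id: string;
  openApps: string[];
  updatedAt: string; // ISO date, lexicographically comparable
}

function dedupHomeScenes(scenes: HomeScene[]): HomeScene {
  const survivor = [...scenes].sort(
    (a, b) =>
      b.openApps.length - a.openApps.length ||
      b.updatedAt.localeCompare(a.updatedAt)
  )[0];
  const merged = new Set(survivor.openApps);
  for (const s of scenes) {
    if (s.id !== survivor.id) for (const app of s.openApps) merged.add(app);
  }
  return { ...survivor, openApps: [...merged] };
}
```

The real pass additionally soft-deletes the losers so mana-sync propagates the cleanup; that side effect is omitted here.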

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 14:08:32 +02:00
Till JS
d924895de0 docs(unlisted-sharing): park M8.6-readiness check as 2026-05-09 plan-TODO
The CronCreate scheduler ignored the durable flag so the original
in-session reminder won't survive a Claude restart. Captures the
exact `psql` query to run and the decision criteria (expired-not-
cleaned > 0 → start M8.6; total = 0 → keep waiting) directly in the
plan doc instead.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 12:54:51 +02:00
Till JS
364522db87 feat(comic): image-model picker — OpenAI + Nano Banana wählbar
Comic previously hardcoded 'openai/gpt-image-2' at three levels
(generate-panel.ts, the comic.generatePanel MCP tool, the
generate_comic_panel AI tool). Since the Nano Banana commit, Wardrobe
has had a TryOnModelPicker with three options; Comic now mirrors it 1:1.

Selectable in all three editors (PanelEditor, BatchPanelEditor,
StoryboardSuggester):
- openai/gpt-image-2 (default): OpenAI GPT-image standard
- google/gemini-3-pro-image-preview: Nano Banana Pro, high
  consistency, pricier
- google/gemini-3.1-flash-image-preview: Nano Banana 2, newest,
  fast, cheap

Implementation:
- api/generate-panel.ts: PanelModel union + DEFAULT_PANEL_MODEL +
  an optional model param on RunPanelGenerateParams, forwarded in
  the HTTP body (previously hardcoded to 'openai/gpt-image-2').
- components/PanelModelPicker.svelte: new component, style/markup
  identical to TryOnModelPicker for muscle memory across both flows.
- components/PanelEditor.svelte: `let model = $state(DEFAULT_PANEL_MODEL)`
  + the picker above the quality/format bar + model in the
  runPanelGenerate call.
- components/BatchPanelEditor.svelte: same change; one model per
  batch (not per row) so the batch renders consistently.
- components/StoryboardSuggester.svelte: same pattern; the picker
  lands between the "add panel manually" button and the
  quality/format block.
- packages/mana-tool-registry/src/modules/comic.ts: the generatePanel
  input schema gains model with zod.enum() + default; input.model is
  forwarded in the body.
- packages/shared-ai/src/tools/schemas.ts: generate_comic_panel gains
  an optional 'model' parameter with the same enum list.
- apps/mana/apps/web/src/lib/modules/comic/tools.ts: isValidModel
  guard + parameter validation; model passed to runPanelGenerate.

No story-level persistence: model stays local state per editor mount.
A model column on comicStories would need a migration, and the choice
is ad-hoc per panel/batch anyway.

The plan doc (§2.1) documents the decision and the three options.

107 shared-ai tests still green. check + validate:all clean.
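
The isValidModel guard over the three selectable models can be sketched as below; the model ids are the ones listed above, while the guard and fallback shapes are assumptions about the real tools.ts.

```typescript
// Sketch of the isValidModel guard plus default fallback. The three model
// ids come from the commit; the function shapes are illustrative.
const PANEL_MODELS = [
  'openai/gpt-image-2',
  'google/gemini-3-pro-image-preview',
  'google/gemini-3.1-flash-image-preview',
] as const;
type PanelModel = (typeof PANEL_MODELS)[number];
const DEFAULT_PANEL_MODEL: PanelModel = 'openai/gpt-image-2';

function isValidModel(value: unknown): value is PanelModel {
  return (
    typeof value === 'string' &&
    (PANEL_MODELS as readonly string[]).includes(value)
  );
}

// Invalid or missing input falls back to the default rather than failing.
function resolvePanelModel(input: unknown): PanelModel {
  return isValidModel(input) ? input : DEFAULT_PANEL_MODEL;
}
```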

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 17:19:40 +02:00