Commit graph

3806 commits

Till JS
75d9207ff2 feat(forms): M1 skeleton — module + Dexie v57 + welcome-seed
First delivery of the forms module (docs/plans/forms-module.md M1):

- Module structure under src/lib/modules/forms/: types (LocalForm,
  LocalFormResponse, 11 field types, BranchingRule, FormSettings),
  module.config, collections, queries (live + type converters),
  stores (forms CRUD incl. add/update/remove/reorderFields,
  responses submit with a denormalized responseCount bump),
  ListView with quick-create + search + a three-way empty state.
- Dexie v57: forms (id, status, _updatedAtIndex) + formResponses
  (id, formId, status, submittedAt, _updatedAtIndex, [formId+status]);
  see the sketch after this list.
- Encryption-registry typed entries: title/description/fields/branching/
  settings on forms; answers/submitterEmail/submitterName/submitterMeta
  on formResponses. status, formId, submittedAt, responseCount,
  visibility, and unlistedToken stay plaintext (routing + sort fields).
- Per-space welcome seed with a sample form (3 fields), wired into
  data/seeds/index.ts.
- Route /forms via RoutePage (appId='forms').
- i18n namespace forms/ × 5 locales (de/en/es/fr/it).
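A minimal sketch of what the v57 schema declaration amounts to, using
Dexie's stores() syntax with the index lists from the bullet above (the
database name and surrounding migration code are placeholders):

    import Dexie from 'dexie';

    const db = new Dexie('mana'); // hypothetical database name

    db.version(57).stores({
      forms: 'id, status, _updatedAtIndex',
      formResponses:
        'id, formId, status, submittedAt, _updatedAtIndex, [formId+status]',
    });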

The app-registry entry (APP_ICONS.forms + MANA_APPS) already landed in
6d193a9fa (parallel app-registry-polish commit).

Validators green: validate:turbo, validate:pg-schema,
validate:i18n-parity (77 namespaces × 5), validate:theme-{parity,
utilities,variables}, audit:encrypted-tools (23 tools, M1 adds none),
svelte-check 0/0/0 across 7680 files. check:crypto: 213 tables (+2
are mine); the 3 violations are pre-existing dead entries.

LOCAL TIER PATCH: requiredTier='guest', with a revert marker to undo before release.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 23:01:05 +02:00
Till JS
6d193a9fa7 chore(app-registry): polish 4 small wins — TOC + AppId-derive + route-drift test + 3 MANA_APPS
§1 AppId derivation (shared-branding):
- `AppId` is now `keyof typeof APP_BRANDING` (config.ts) instead of a
  hand-maintained union in types.ts. Adding/removing an entry in
  `APP_BRANDING` automatically updates the union — eliminates the
  drift class that produced the ContextLogo type-error.
- `AppBranding.id` relaxed to `string` to break the circular type
  reference (key in `APP_BRANDING` is the authoritative id).
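A condensed sketch of the pattern (entries abbreviated; the real
APP_BRANDING has roughly 80):

    // config.ts: the object literal is the single source of truth.
    export const APP_BRANDING = {
      forms: { id: 'forms', name: 'Forms' },
      agents: { id: 'agents', name: 'Agents' },
      // … many more entries
    } as const;

    // types.ts: derived, so it can never drift from the config.
    export type AppId = keyof typeof APP_BRANDING; // 'forms' | 'agents' | …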

§2 Route-drift smoke test (registry.spec.ts):
- New 4th test: parses every `routes/(app)/*+page.svelte`, extracts
  the `<RoutePage appId="…">` literal, asserts the id is registered
  in the workbench app-registry. Catches drift like the earlier
  `appId="broadcasts"` vs id `'broadcast'` bug structurally.
- ROUTE_ONLY_APP_IDS allowlist for routes that intentionally don't
  back a workbench module (gifts, llm-test, milestones, organizations,
  teams, tags).
- Caught two real drifts in the process and fixed them:
    /agents/+page.svelte → appId="ai-agents" → "agents"
    /agents/templates/+page.svelte → same
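The test reduces to roughly this shape; the helper names and the exact
regex are assumptions, not the actual spec code:

    import { readFileSync } from 'node:fs';
    import { expect, it } from 'vitest';
    // hypothetical helpers: walk routes/(app)/**/+page.svelte, read the registry
    import { getRegisteredAppIds, listRoutePages } from './test-helpers';

    const ROUTE_ONLY_APP_IDS = new Set([
      'gifts', 'llm-test', 'milestones', 'organizations', 'teams', 'tags',
    ]);

    it('every <RoutePage appId> is registered in the workbench', () => {
      const registered = getRegisteredAppIds();
      for (const file of listRoutePages()) {
        const m = readFileSync(file, 'utf8').match(/<RoutePage\s+appId="([^"]+)"/);
        if (!m || ROUTE_ONLY_APP_IDS.has(m[1])) continue;
        expect(registered, `${file} references unregistered appId "${m[1]}"`)
          .toContain(m[1]);
      }
    });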

§3 Promoted to MANA_APPS (kontext, wishes):
- kontext (Web-Context URL crawler) + wishes (wishlist) had module
  + workbench card but no MANA_APPS branding entry. Both got proper
  description, longDescription and a fresh APP_ICONS entry (globe-
  with-text-lines for kontext, shooting-star for wishes).
- Removed both from WORKBENCH_ONLY in spec — they're full apps now.
- Note: `myday` was already in MANA_APPS, the WORKBENCH_ONLY entry
  was redundant and had been silently double-counting.

§4 apps.ts — top-level INDEX comment:
- 80 registerApp() calls were chronological-by-when-added — basically
  unsearchable. Added an §1–§4 navigation comment near the top
  grouping apps by role (entity / module surface / AI Workbench /
  System) so devs can jump to a section. Physical reordering of
  the 80 blocks deferred to avoid disturbing the active multi-
  terminal session — the TOC delivers ~80% of the navigation win.

Bonus: register the `forms` module that the parallel session added but
hadn't wired into the workbench yet — the new route-drift test caught
this immediately on first run.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:59:26 +02:00
Till JS
907a3add49 Create forms-module.md 2026-04-28 22:41:45 +02:00
Till JS
230dfd5dad chore: extract arcade into standalone repo
Arcade lives as its own pnpm workspace at ~/Documents/Code/arcade
now, with no @mana/* coupling. This drops every reference and the
games/ directory from the monorepo.

Removes:
- games/ directory (89 files: web + server + 22 HTML games + screenshots)
- @arcade/web, @arcade/server pnpm workspace entries (games/* globs)
- arcade scripts in root package.json (4 scripts)
- arcade.mana.how from mana-auth trusted origins + CORS_ORIGINS
- arcade entries in mana-apps registry, app-icons, URL overrides
- arcade.mana.how from cloudflared tunnel + prometheus blackbox probes
- arcade-web service block in docker-compose.macmini.yml
- generate-env.mjs entries for arcade server + web
- BRANDING_ONLY 'arcade' entry in registry consistency spec
- dead arcade translation keys in GuestWelcomeModal (DE+EN)
- arcade mention in CLAUDE.md, authentication guideline, MODULE_REGISTRY

Verified:
- services/mana-auth/src/auth/sso-config.spec.ts: 8/8 pass
- pnpm install regenerates lockfile cleanly (-536 lines)
- no remaining 'arcade' refs outside historical snapshot docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:40:01 +02:00
Till JS
33b3f656fd test(articles): parseUrls unit tests + extract pure module (Phase 7)
Move `parseUrls` out of stores/imports.svelte.ts (which transitively
imports Dexie via collections.ts) into a standalone parse-urls.ts so
the test file can exercise it without booting Dexie. The store re-
exports parseUrls so existing call sites (BulkImportForm, tools.ts)
keep working unchanged.
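A sketch of the extracted module's shape, under the behavior the tests
below pin down (the result shape and exact normalization are
assumptions):

    // parse-urls.ts: pure, nothing in its import graph touches Dexie.
    export interface ParsedUrls {
      valid: string[];      // canonicalized, first occurrence, input order
      invalid: string[];    // non-URL tokens and non-http(s) schemes
      duplicates: string[]; // later occurrences of an already-seen URL
    }

    export function parseUrls(input: string): ParsedUrls {
      const out: ParsedUrls = { valid: [], invalid: [], duplicates: [] };
      const seen = new Set<string>();
      for (const token of input.split(/[\s,]+/).filter(Boolean)) {
        let url: URL;
        try {
          url = new URL(token); // bare domains throw: no explicit scheme
        } catch {
          out.invalid.push(token);
          continue;
        }
        if (url.protocol !== 'http:' && url.protocol !== 'https:') {
          out.invalid.push(token); // ftp:, mailto:, javascript:, file:, …
          continue;
        }
        const canonical = url.toString(); // adds the trailing slash on root,
        if (seen.has(canonical)) {        // keeps query + fragment
          out.duplicates.push(canonical);
        } else {
          seen.add(canonical);
          out.valid.push(canonical);
        }
      }
      return out;
    }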

11 unit tests covering:
  - empty + whitespace-only inputs
  - newline / whitespace / comma / tab separator handling
  - http + https accepted, ftp / mailto / javascript / file rejected
  - bare domains rejected (URL accepts them as opaque, our parser
    requires explicit scheme)
  - duplicate detection preserves first-occurrence order
  - canonicalisation (trailing slash on root, query+fragment kept)
  - mixed valid / invalid / duplicate token ordering
  - title-prefixed-paste behaviour (strict — surfaces non-URL words
    as invalid for the user to see)
  - 50-URL stress check

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:39:17 +02:00
Till JS
0fc16d1bfd feat(articles): bulk-import AI tool wiring (Phase 6)
Adds import_articles_from_urls tool to the articles module so the AI
Workbench can kick off a bulk-import job in one call. Auto-policy: the
job itself is the unit of approval, no per-article propose card.

- shared-ai schemas: declare the tool name + propose/auto policy
- articles/tools.ts: implement parseUrls + articleImportsStore.createJob
- consume-pickup.ts: handle the new event type
- events/catalog.ts: register article-import lifecycle events
- imports.svelte.ts: minor polish

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:33:31 +02:00
Till JS
5f0a1b5053 feat(articles): bulk-import UI (Phase 5)
apps/mana/apps/web/src/lib/modules/articles/components/:
  - BulkImportForm.svelte: <textarea> + live-validating $derived parser,
    counter chips for valid/duplicate/invalid, expandable invalid-list,
    submit creates a job + navigates to /articles/import/[jobId].
  - JobsList.svelte: index of past + active jobs (newest first), status
    pill + progress + per-counter chips. Click row → detail.
  - JobDetailView.svelte: live header (status, progress bar, counters),
    action bar (pause/resume/cancel/retry-failed/delete), per-item rows
    with state pill + URL + open-link or error tooltip.

apps/mana/apps/web/src/routes/(app)/articles/import/:
  - +page.svelte: hosts BulkImportForm + JobsList.
  - [jobId]/+page.svelte: hosts JobDetailView.

AddUrlForm.svelte: small "Mehrere URLs auf einmal? → Bulk-Import" link
under the single-URL input so the existing flow surfaces the new path.

The whole UI is a pure liveQuery view — JobDetailView re-renders as
the server-worker writes counter updates and item-state transitions
through sync_changes. Worker tick + pickup-consumer (already shipped
in 5535f2da4 + a9bcd4183) close the loop end-to-end.

Phase 6 (Domain-Events + AI-Tool) and Phase 7 (Tests) follow.

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:30:36 +02:00
Till JS
29cbaf30f5 feat(articles): bulk-import store + queries (Phase 4)
apps/mana/apps/web/src/lib/modules/articles/:
  - stores/imports.svelte.ts: new file. articleImportsStore with
    createJob (bulkAdd N items + 1 job), pauseJob, resumeJob,
    cancelJob, retryFailed, deleteJob. parseUrls exported as a pure
    function — splits on whitespace+comma, validates http(s) scheme,
    deduplicates while preserving input order; used by both the store
    and the UI's $derived live-validation in Phase 5.
  - queries.ts: toImportJob/toImportItem converters + useImportJobs
    (index list), useImportJob (detail header), useImportItems (per-
    job item list). All scope-aware via scopedForModule / scopedGet.

Job creation: createJob(urls) → jobId. Items written first so a worker
tick that races the job-write doesn't see a job with totalUrls=N but
fewer items reachable. Server-worker picks up state='pending' items
on its 2s tick.
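The ordering in a sketch (table handles and field names assumed from
this message):

    import type { Table } from 'dexie';

    declare const itemsTable: Table<Record<string, unknown>, string>;
    declare const jobsTable: Table<Record<string, unknown>, string>;

    async function createJob(urls: string[]): Promise<string> {
      const jobId = crypto.randomUUID();
      // Items first, job header second: a worker tick that lands between
      // the two writes finds no claimable job yet, never a job whose
      // totalUrls exceeds the items actually reachable.
      await itemsTable.bulkAdd(
        urls.map((url) => ({ id: crypto.randomUUID(), jobId, url, state: 'pending' })),
      );
      await jobsTable.add({ id: jobId, status: 'running', totalUrls: urls.length });
      return jobId;
    }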

retryFailed re-arms the job to status='running' if it was 'done',
because all-terminal-items had triggered the auto-completion in the
worker's counter-rollup pass.

deleteJob is soft (deletedAt stamp) on both job + items; already-
landed Article rows are NOT touched.

Phase 5 (UI) follows.

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:23:45 +02:00
Till JS
fa299e3bf9 feat(app-registry): wire up 4 modules + 7 routes + tier-patch validator
Resolves the cross-cutting drift that the app-registry sanity-test was
silently catching but BRANDING_ONLY exceptions papered over.

App-registry wiring:
- Register augur, broadcasts, invoices, timeline as workbench cards.
- Resolve agents↔ai-agents naming drift: workbench id is now `agents`
  (matches MANA_APPS + the /agents route URL); folder stays `ai-agents`
  for grouping with other ai-* modules.

Broadcast→broadcasts unification:
- module.config appId, MANA_APPS id, APP_ICONS key, all route appIds,
  and the redundant APP_URL_OVERRIDES entry — all aligned with the
  earlier folder rename so nothing diverges anymore.

Top-level routes for workbench-only modules:
- /goals, /myday, /kontext, /rituals, /automations, /activity — thin
  RoutePage wrappers around the existing module ListViews.
- /timeline becomes a real module (ListView extracted from the route),
  route shrinks to a 12-line wrapper.

Food unarchive:
- packages/shared-branding/src/mana-apps.ts: remove `archived: true`
  from food entry. The module is fully wired (registered, synced,
  routed, with AI tools); the flag was outdated.

i18n cleanup:
- Rename ai-agents → agents key in all 5 apps locales.
- Drop dead "observatory" key from all 5 nav locales (route folder was
  removed in 7bca16dfa).

New CI guard — scripts/validate-tier-patches.mjs:
- Scans for `LOCAL TIER PATCH — revert before release` markers.
- Default: informational list (does not fail).
- Strict mode (MANA_TIER_PATCH_STRICT=1) for release/RC pipeline.
- Wired into validate:all.

Spec update:
- registry.spec.ts WORKBENCH_ONLY/BRANDING_ONLY: documented Settings
  family + AI Studio surfaces + intentionally-internal modules so the
  drift guard fires only on real drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:21:41 +02:00
Till JS
8a5fad34df fix(geocoding): bump PROVIDER_TIMEOUT_MS to 20s for cold cross-LAN
Cold-start fetches from the mana-geocoding container to photon-self
on mana-gpu (over WSL2 mirrored networking) consistently take >10s on
the first probe and ~2s once warm. The previous 8s default caused the
chain to false-mark photon-self unhealthy on every cold path, leaking
to public photon for the next 30s health-cache window — and pinning
the public-photon answer in the 7d cache (now shortened to 1h).

Also wires the docker-compose macmini env to honor PROVIDER_TIMEOUT_MS
and CACHE_PUBLIC_TTL_MS overrides so production picks up the new
values without a code rebuild.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:19:21 +02:00
Till JS
962606b961 feat(demo-personas): Chor Tägerwilen — research + seed (118 records)
First demo persona live on prod: chor-taegerwilen@mana.how.

Contents:
- Research brief with sources, IDs, module mapping, pitch hooks
- data.ts: 54 members (S/A/T/B complete), board, choir director,
  appointments April–June 2026, 5 concerts in 2026, concert archive
  2015–2025, kontextDoc Markdown
- seed.ts: idempotent Bun script that writes directly into
  mana_sync.sync_changes via an SSH tunnel (5433). Sets the RLS
  context, cleans up prior demo-seed rows, writes 118 records across
  kontext / contacts / calendar+timeblocks / events / library /
  notes / website / ai-missions.

Pitch hook: the club was already a ClubDesk customer, so the Mana
replacement is a direct migration story.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:17:32 +02:00
Till JS
19627f18b8 docs(demo-personas): runbook for the real-account demo workflow
Documents the per-persona procedure: research → live account on the
Mac Mini prod → club space → idempotent seed script → smoke test.
Includes the module mapping (appId/tableName), common pitfalls (prod
schema drift field_timestamps vs field_meta, forced RLS on
sync_changes), and lessons from persona 1 (Chor Tägerwilen).

The discarded fork-system plan intentionally stays out of the repo;
see the memory pointer project_demo_personas_workflow.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:17:18 +02:00
Till JS
a9bcd4183a feat(articles): client-side pickup consumer (Phase 3)
Watches `articleExtractPickup` via liveQuery. For each row the server-
worker drops into the inbox:

  1. Look up the matching `articleImportItems` row. Stale → just clean
     the inbox.
  2. Dedupe race: if the URL has been single-saved meanwhile, point
     the import item at the existing article (state='duplicate'),
     don't create a second row.
  3. Happy path: call existing articlesStore.saveFromExtracted (which
     runs encryptRecord + articleTable.add and emits ArticleSaved)
     → flip item to 'saved' (or 'consent-wall' on warning).
  4. Delete the pickup row so the inbox stays empty in steady state.

Multi-tab coordination via `navigator.locks.request('mana:articles:pickup')`
with `ifAvailable: true` — only the lock-holder consumes; other tabs
just observe the liveQuery and exit. Falls back to per-row in-memory
dedupe when the Locks API isn't available; the field-LWW server merge
forgives the rare double-process.
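The coordination pattern, reduced to its core (the lock key is from
this message; everything else is a hypothetical sketch):

    type PickupRow = { id: string; itemId: string; url: string }; // assumed shape

    async function drainPickupInbox(rows: PickupRow[]): Promise<void> {
      /* steps 1–4 above */
    }

    async function onPickupRows(rows: PickupRow[]): Promise<void> {
      if (!('locks' in navigator)) {
        // Fallback: per-row in-memory dedupe; the field-LWW server merge
        // forgives the rare double-process.
        return drainPickupInbox(rows);
      }
      // ifAvailable: true returns immediately with lock === null instead
      // of queueing, so non-holder tabs skip the drain and keep observing.
      await navigator.locks.request(
        'mana:articles:pickup',
        { ifAvailable: true },
        async (lock) => {
          if (lock === null) return; // another tab holds the lock
          await drainPickupInbox(rows);
        },
      );
    }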

Wired from data-layer-listeners.ts so it boots once with the rest of
the data layer and disposes on layout unmount.

End-to-end pipeline now live:
  Client write items(state='pending')
    → sync_changes
    → server-worker tick (Phase 2)
    → Pickup row + state='extracted'
    → sync pull → liveQuery
    → saveFromExtracted (encrypt) → flip 'saved' / 'duplicate' / 'consent-wall'
    → delete pickup row

What's still needed for first user-visible test: Phase 4 (store
methods to create a job) + Phase 5 (UI). Without those there's no
way yet to inject items.

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:16:10 +02:00
Till JS
2bbcf14aba chore(geocoding): remove Pelias + close 3 bypass paths to public Nominatim
Pelias was retired from the Mac mini on 2026-04-28; photon-self
(self-hosted Photon on mana-gpu) has been the live primary since then.
This removes the now-dead Pelias adapter, config, tests, and the
services/mana-geocoding/pelias/ stack — the entire compose file, the
geojsonify_place_details.js patch, the setup.sh import script.

Provider chain is now `photon-self → photon → nominatim`. The chain
keeps its `privacy: 'local' | 'public'` split, sensitive-query
blocking, coord quantization, and aggressive caching unchanged.

Three direct calls to nominatim.openstreetmap.org that bypassed
mana-geocoding now route through the wrapper:

- citycorners/add-city + citycorners/cities/[slug]/add use the shared
  searchAddress() client (browser → same-origin proxy → mana-geocoding
  → photon-self).
- memoro mobile drops its OSM reverse-geocoding fallback entirely;
  Expo's on-device reverse-geocoding stays as the sole path. Routing
  through the wrapper would require a memoro-server proxy endpoint —
  a follow-up if Expo's quality proves insufficient.

Other behavioral changes:

- CACHE_PUBLIC_TTL_MS dropped from 7d → 1h. The long TTL was a
  privacy-amplification trick from the Pelias era; with photon-self
  serving the bulk of traffic, a transient cross-LAN blip was pinning
  cached fallback answers for days. 1h gives quick recovery.
- /health/pelias renamed to /health/photon-self; prometheus blackbox
  config + status-page generator updated.
- mana-geocoding container no longer needs `extra_hosts:
  host.docker.internal:host-gateway` (was only there for the
  Pelias-on-host-network era).

113 tests passing. CLAUDE.md rewritten to reflect the post-Pelias
architecture.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:12:26 +02:00
Till JS
7bca16dfa7 feat(articles): bulk-import schema + plan (Phase 1)
Three new sync-tracked Dexie tables under the articles appId:

  articleImportJobs     — job header (counters, status, lease metadata).
  articleImportItems    — one row per URL in a job, state-machine driven.
  articleExtractPickup  — short-lived server→client handoff inbox.

URL stays plaintext on items by necessity — the server-worker reads it
without master-key access, same rationale as articles.originalUrl. The
extracted article eventually lands encrypted in the existing `articles`
table; bulk-import rows hold only pointers.

Plan: docs/plans/articles-bulk-import.md (full architecture, 7 phases,
test matrix, edge-cases). Phase 2 already shipped in 5535f2da4 (worker);
this commit lays the schema underneath it.

Originally committed as b2f4e8314, lost during a parallel reset, and
restored here via cherry-pick.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 22:11:51 +02:00
Till JS
5535f2da48 feat(articles): server-side bulk-import worker (Phase 2)
apps/api/src/modules/articles/:
  - import-projection.ts: sync_changes → live LWW projection of jobs
    + items. Cross-user scan for claimable jobs, per-job item scan.
  - import-extractor.ts: per-item state-machine. Claim → fetch → write
    pickup + flip extracted, OR retry up to 3x then 'error'. All writes
    attributed to system:articles-import-worker actor (built inline so
    no shared-ai SystemSource extension needed for now).
  - import-worker.ts: 2s tick, pg_try_advisory_xact_lock keyed on 'ARTI'
    so multi-instance apps/api never double-processes (see the sketch
    after this list). Concurrency 3 pending items per job per tick.
    Job-counter rollups + status flips derived from current item states.
  - apps/api/src/index.ts: start the worker at boot.
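Roughly what the tick's claim step looks like with node-postgres; the
key derivation and query wrapper are a sketch, only the
pg_try_advisory_xact_lock keyed on 'ARTI' is from this message:

    import { Pool } from 'pg';

    const pool = new Pool(); // reads PG* env vars

    // 'ARTI' packed into a 32-bit int. The xact-scoped advisory lock is
    // released automatically at COMMIT/ROLLBACK; a second apps/api
    // instance gets `false` and skips the tick, so no double-processing.
    const ARTI_KEY = Buffer.from('ARTI').readInt32BE(0);

    async function tick(): Promise<void> {
      const client = await pool.connect();
      try {
        await client.query('BEGIN');
        const { rows } = await client.query(
          'SELECT pg_try_advisory_xact_lock($1) AS got',
          [ARTI_KEY],
        );
        if (rows[0].got) {
          // project pending items, claim up to 3 per job, roll up counters …
        }
        await client.query('COMMIT');
      } catch (err) {
        await client.query('ROLLBACK');
        throw err;
      } finally {
        client.release();
      }
    }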

Pipeline (server side):
  Client write articleImportItems(state='pending')
    → sync push → mana_sync.sync_changes
    → server-worker tick projects 'pending' items
    → extractFromUrl (shared-rss / Readability)
    → write articleExtractPickup row + flip item → 'extracted'

Phase 3 (client-side pickup consumer) and Phase 4+ (store + UI) follow.

Plan: docs/plans/articles-bulk-import.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 21:33:49 +02:00
Till JS
fc49198992 docs(geocoding): post-migration log + Photon weekly-refresh operator scripts
- Decision report: status flipped to MIGRATED; added migration log with
  five WSL2 gotchas (bzip2 missing, no official Photon image,
  firewall=true blocks cross-LAN, vmIdleTimeout=-1 ineffective,
  PowerShell pre-expansion of bash $(...)) and resource snapshot.
- mana-geocoding CLAUDE.md: PHOTON_SELF_API_URL note now reflects live
  primary status on mana-gpu since 2026-04-28.
- photon-self/: operator scripts for the weekly DB refresh — update.sh
  (atomic-swap with rollback), systemd unit + timer (Sun 03:30 +30min
  jitter, Persistent=true), README with re-installation instructions
  for DR. Currently installed and enabled on mana-gpu.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 21:31:37 +02:00
Till JS
7ebbf064ce feat(macmini): pass PHOTON_SELF_API_URL through to mana-geocoding
The wrapper supports a `photon-self` provider when PHOTON_SELF_API_URL
is set, but the compose file wasn't forwarding the env-var into the
container. Add it as an env-substitution from .env.macmini so flipping
the GPU-server-hosted Photon on/off is one line in the env file.

Empty string = slot disabled (back-compat with the old config).

Required for the 2026-04-28 Photon-on-mana-gpu migration to take effect.
The wrapper code that consumes this env-var landed in 153ad8049
(dual-Photon support).
2026-04-28 21:15:54 +02:00
Till JS
248549b15a fix(feedback): no duplicate display of title + body
For short posts (or when mana-llm failed), the auto-title fallback
`feedbackText.slice(0, 80)` stored the body 1:1 as the title, so the
card showed the same text twice.

Two layers of protection:

1. **Server (mana-analytics)**: the catch branch drops the prefix
   fallback (title stays null). A new isRedundantTitle() heuristic
   additionally discards auto-titles that are just a truncated prefix
   of the body (whitespace collapse + ellipsis strip).

2. **Frontend (ItemCard)**: a defensive showTitle computed; older DB
   items with a redundant title automatically render only the body,
   no database cleanup needed.

The title slot stays visible for real auto-summaries and manual titles.
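A sketch of the server-side heuristic as described (the exact
normalization details are assumptions):

    // Redundant = after collapsing whitespace and stripping a trailing
    // ellipsis, the title is merely a prefix of the body.
    export function isRedundantTitle(title: string, body: string): boolean {
      const norm = (s: string) => s.replace(/\s+/g, ' ').trim();
      const t = norm(title).replace(/(\.{3}|…)\s*$/, '').trim();
      const b = norm(body);
      return t.length > 0 && b.startsWith(t);
    }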

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:37:51 +02:00
Till JS
f754d4ecbb chore(infra): provision 2 GiB swap inside Colima VM as OOM safety net
Colima starts its Linux VM with no swap configured. Without swap the
kernel responds to memory pressure by invoking the OOM-killer instead
of paging out cold pages — meaning a transient peak (mana-web Vite
build with 8 GiB heap landing on top of the running container set)
takes down a container instead of just stalling for a few seconds.

The 2026-04-28 Mac Mini RAM audit found:
  - VM allocated:       12 GiB (1 GiB kernel overhead → 11 GiB user)
  - Container RSS:      ~4 GiB pinned
  - Available headroom: ~7.6 GiB
  - mana-web Vite peak: ~8 GiB

That's 400 MiB over the limit during builds, which is why we previously
needed the build-memory-headroom.sh wrapper to pause monitoring (frees
~700 MiB temporarily). Swap is the safer second backstop — Linux only
swaps under actual pressure (used=0 right after creation, confirmed
free -h), and the kernel can fall back to paging cold container memory
to give a build the burst it needs without killing anything.

The new step in migrate-to-colima.sh:
- creates /swap (2 GiB, root-only)
- mkswap + swapon
- persists in /etc/fstab so the VM remounts it on every restart
- idempotent — re-runs are no-ops

Already provisioned on the live VM via:
  ssh mana-server 'colima ssh -- "sudo fallocate -l 2G /swap && \
    sudo chmod 600 /swap && sudo mkswap /swap && sudo swapon /swap"'

Verified: free -h shows Swap: 2.0Gi total / 0B used. Currently dormant.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:31:52 +02:00
Till JS
f41ca5405a fix(deploy): safe-db-push cleanup trap also removes snapshot + journal
The previous version of the cleanup trap only deleted SQL files left
by drizzle-kit's probe-generate, but not the matching `_snapshot.json`
(56 KB per service) or the journal entry. Each deploy then leaked one
snapshot file into the runner's working tree.

Surfaced after my own local smoke-test: ran the script against
mana-auth, found a 56 KB \`drizzle/meta/0000_snapshot.json\` left
behind that I had to clean up manually.

The trap now:
- Computes the full set of files added under \`drizzle/\` during this
  run (not just SQL) and removes every one of them.
- Strips the probe's journal entry via jq.
- If the \`drizzle/\` dir didn't exist before the run, removes it
  entirely. Otherwise sweeps empty meta/ subdirs the run created.

Smoke-tested locally: working tree is clean after each run regardless
of whether drizzle/ existed beforehand.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:25:46 +02:00
Till JS
c7094207da fix(feedback): ReactionBar stops click bubbling
Setting a reaction in the feed/workbench used to land you straight in
the detail view: the button click bubbled up to the card's onclick.

Fixed at the source: ReactionBar.handleClick now calls
e.stopPropagation() before firing onToggle. That makes it work
everywhere reactions sit inside a clickable shell (feed cards,
MyReactedView, detail page, future surfaces).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:25:22 +02:00
Till JS
f851f15a47 feat(lasts): tidy ListView header — single-row quick-add + scrollable icon-tabs
Two layout fixes for the Lasts ListView:

1. Tab bar: status filters (Alle/Vermutet/Bestätigt/Aufgehoben) get inline
   Phosphor icons + parenthesized counters. Inbox/Meilensteine/Einstellungen
   now render as full icon+label tabs in a `border-left`-separated cluster
   instead of icon-only links. The whole bar is `overflow-x: auto` with
   hidden scrollbars (matches calendar/DateStrip pattern), so narrow
   workbench cards scroll horizontally instead of wrapping.

2. Quick-add: collapses two rows (input + Vermutet/Bestätigt pill toggle)
   into one. Mode is a `<select>` styled like the category select, sitting
   to the right of the title input. Removes the visual duplication where
   the toggle pills mimicked the status tabs above.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:22:40 +02:00
Till JS
153ad8049c feat(geocoding): support dual-Photon (self-hosted + public) for GPU migration
The chain now distinguishes two Photon instances:
  photon-self  privacy: 'local'   (self-hosted on mana-gpu)
  photon       privacy: 'public'  (komoot.io, last-resort fallback)

Both wrap the same `PhotonProvider` class with different config — only
the URL, name, and privacy stance differ. The new ProviderName variant
'photon-self' lets the chain track per-provider health for them
independently (a single 'photon' slot would collide in the health
Map).

Opt-in registration: `photon-self` is only built when
PHOTON_SELF_API_URL is set in the env. When unset (current state),
the chain has the same shape as before — full backward compat. After
the GPU migration, flipping the env-var on is the only deploy step
needed:
  PHOTON_SELF_API_URL=http://192.168.178.11:2322

Default chain order updated to:
  photon-self,pelias,photon,nominatim
  ^^^^^^^^^^^ silently skipped if not registered (env unset)
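A sketch of the opt-in construction (the provider class and option
names are condensed assumptions; the env gate is the point):

    declare class PhotonProvider {
      constructor(opts: { name: string; url: string; privacy: 'local' | 'public' });
    }

    function buildPhotonSlots(env: NodeJS.ProcessEnv): PhotonProvider[] {
      const slots: PhotonProvider[] = [];
      // Built only when the env-var is set; with it unset the chain keeps
      // its previous shape, so the change is fully backward compatible.
      if (env.PHOTON_SELF_API_URL) {
        slots.push(new PhotonProvider({
          name: 'photon-self',
          url: env.PHOTON_SELF_API_URL,
          privacy: 'local',
        }));
      }
      slots.push(new PhotonProvider({
        name: 'photon',
        url: 'https://photon.komoot.io',
        privacy: 'public',
      }));
      return slots;
    }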

The privacy guarantee is structural: photon-self carries privacy:
'local', so the existing sensitive-query block from the previous
hardening commit now has a real local backend post-migration —
medical/crisis-service queries get real results instead of the
"sensitive_local_unavailable" notice.

Tests: 148 (was 141). New coverage:
- src/__tests__/app.test.ts: createChain registration logic — verifies
  photon-self appears iff PHOTON_SELF_API_URL is set, ordering
  honored, GEOCODING_PROVIDERS env-var filter respected
- providers/__tests__/photon-normalizer.test.ts: provider field
  carries 'photon' or 'photon-self' based on the call argument

Recon of mana-gpu (2026-04-28): Windows 11 Pro Build 26200, 64 GB
RAM (56 GB free), 739 GB disk free, no WSL2/Docker yet, no native
GPU services running. Setup plan documented in
docs/runbooks/photon-on-mana-gpu.md (3–4 h, ~1 h of which is
download/unpack waiting).
2026-04-28 17:19:04 +02:00
Till JS
104a5a46a0 fix(deploy): pnpm install workspace deps before running safe-db-push
Three follow-up fixes after the first migration-step deploy revealed
gaps:

1. \`pnpm dlx drizzle-kit\` doesn't work — the drizzle.config.ts file
   itself does \`import { defineConfig } from 'drizzle-kit'\`, and
   Node's resolver only finds that import via local node_modules,
   not pnpm's dlx cache. Reverted to plain \`pnpm exec drizzle-kit\`
   and require the workspace to be installed.

2. CD now runs \`pnpm install --filter ./services/<svc>... --frozen-
   lockfile --ignore-scripts\` once at the start of the migration
   step for every Drizzle service in the deploy. Path-based filter
   (not name-based) because our service package names follow no
   uniform convention (\`@mana/auth\` vs \`@mana/credits-service\` vs
   \`@mana/events\`). pnpm's lockfile cache makes second-and-later
   runs near-instant.

3. Dropped the \`--silent\` flag from \`pnpm exec drizzle-kit --version\`
   — it isn't a recognised pnpm-exec flag and causes a 254 exit code,
   making the script's "is drizzle-kit available?" probe always fail.

Smoke-tested locally — script now runs cleanly against mana-auth's
schema, reports "no changes detected", cleans up the probe SQL file.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:10:08 +02:00
Till JS
941df57f77 feat(feedback): rename community-identity columns + settings-section
Cleans up the last "community" remnants in the feedback hub: DB
columns, settings search index, section name, and i18n keys are now
uniformly "feedback":

- DB: auth.users.community_show_real_name → feedback_show_real_name,
  community_karma → feedback_karma. Migration at
  services/mana-auth/sql/009_rename_community_to_feedback.sql (applied
  manually via psql, mirrored in both services' Drizzle schemas).
- mana-auth/me.ts: PATCH /api/v1/me/profile now accepts
  feedbackShowRealName and returns it in the response.
- mana-analytics: feedback.ts reads authUsers.feedbackShowRealName /
  feedbackKarma; redact() + the karma increment + tests updated
  accordingly.
- Frontend: CommunitySection.svelte → FeedbackIdentitySection.svelte
  (file renamed, property names + toast texts updated, HeartHalf icon,
  "Feedback-Identität" as the title).
- searchIndex.ts: CategoryId 'community' → 'feedback', anchor
  'community-identity' → 'feedback-identity'.
- i18n (5 locales): settings.categories.community → .feedback,
  settings.search.community_* → feedback_*. DE/EN/FR/IT/ES labels
  adjusted to "Feedback" + "im Feedback-Feed".

38/38 integration tests green, validate:i18n-parity clean, svelte-check 0.

BREAKING (internal, not live): any frontend reading the old column /
property names from the PATCH response now breaks. No production risk,
since the hub isn't public yet.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 17:09:58 +02:00
Till JS
6f83fba66a docs(reports): geocoding self-hosting decision — recommend Photon on mana-gpu
Compares Pelias / Nominatim / Photon for self-hosting on the GPU
server, with current (2026-04-28) numbers from upstream docs +
GraphHopper's Photon-data downloads:

  Photon Europe pre-built dump: 30.6 GB, weekly refresh
  Photon Germany pre-built dump: 5.8 GB, weekly refresh
  Nominatim Germany import:     ~100 GB disk, 8–12 h, 12 GB RAM
  Pelias DACH (current):         3 GB RAM, 4 services, JS patch hack

Recommendation: Photon Europe-wide on mana-gpu. Single Java process,
embedded OpenSearch, no PBF import (download a tarball, restart),
weekly auto-updates from GraphHopper, integrates with the wrapper's
existing PhotonProvider via just an env-var change.

Once self-hosted, Photon registers as `privacy: 'local'` — the
sensitive-query block (Hausarzt, Klinikum, …) gets a real local
backend and no longer has to return empty results when Pelias is
down. Public Photon stays in the chain as a `privacy: 'public'`
last-resort fallback.

Migration plan included (~3–4 h total, ~1 h waiting), with
phase-by-phase risk assessment.

Pelias does not return — the 3 GB RAM + multi-container + patched
JS combination has no operational case once we have a self-hosted
Photon that already matches our wrapper's wire format.
2026-04-28 17:04:30 +02:00
Till JS
e4d9dc5b2e fix(deploy): safe-db-push uses pnpm dlx when local drizzle-kit is missing
The Mac Mini runner doesn't run \`pnpm install\` (every service builds
inside Docker), so per-service node_modules/.bin/drizzle-kit isn't
present. The first deploy with the new migration step printed
\`ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL Command "drizzle-kit" not found\`
and silently treated every service as "no schema changes — clean".

Pick the invocation mode at runtime: \`pnpm exec drizzle-kit\` if a local
binary exists, otherwise \`pnpm dlx drizzle-kit\`. dlx caches the package
in the global pnpm store after the first fetch, so subsequent calls
are fast. drizzle-kit reads its config from cwd, so it still picks up
each service's drizzle.config.ts correctly.

Smoke-tested locally against services/mana-auth — script reports
"no schema changes — clean" instead of failing silently.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:57:35 +02:00
Till JS
b1fa55dbca feat(places): surface geocoding privacy notices in autocomplete UI
The mana-geocoding wrapper now returns `notice: 'fallback_used' |
'sensitive_local_unavailable'` alongside results so the UI can show
the user *why* a query had unusual behavior. This commit wires that
all the way through the Places module's address-autocomplete inputs.

Geocoding client (lib/geocoding/index.ts):
- Add `GeocodingNotice` and `SearchOutcome` types
- Add `searchAddressDetailed` and `reverseGeocodeDetailed` — same
  semantics as the existing functions but return the wrapper's
  provider/notice metadata. Existing `searchAddress`/`reverseGeocode`
  stay backward-compatible (they call the detailed variants under
  the hood and discard the metadata).
- Extend GeocodingResult with optional `provider` field.

Places ListView (the only current consumer that exposes typed
addresses to users):
- Both autocomplete inputs (tracking-edit + main address-search)
  now use searchAddressDetailed and surface notices inline.
- 'sensitive_local_unavailable' renders an amber explainer block in
  the dropdown — title + body — so the user knows why their medical
  query returned 0 hits without leaking the search to a public API.
- 'fallback_used' renders a small "≈ ungefähr" footer badge so users
  understand the result came from public OSM (less precise but
  still valid).
- The dropdown opens when EITHER results exist OR a notice is
  present — sensitive blocked queries with empty results still
  surface their explainer.

i18n: new `places.geocoding_notice.*` sub-namespace in all 5 locales
(de/en/es/fr/it) — 4 strings each. All validators green.

Other consumers (places DetailView, events, photos, contacts) keep
the existing searchAddress/reverseGeocode calls — they don't need
the privacy notices today and would just add noise. They can adopt
the detailed variant if/when the use case warrants it.
2026-04-28 16:24:15 +02:00
Till JS
f20a411fd8 chore(infra): right-size mem_limits based on observed RSS (Tier-3 sweep)
The compose mem_limits hadn't been revisited in months. Today's
live `docker stats` snapshot revealed:
  - 5 services using <25% of their limit (waste)
  - 3 services using >70% of their limit (OOM risk during spikes)

Adjusted both directions, no container removal, no behaviour change.
Each tweak carries a 1-line rationale in the file with the observed
RSS that motivated it.

Bumped (tight → comfortable):
  mana-mon-cadvisor       128m → 160m  (was 76% — bursts during stat collection)
  mana-mon-alert-notifier  32m →  48m  (was 79% — alert-bursts queue up)
  mana-core-media         128m → 160m  (was 63% — image-thumb spikes)

Trimmed (over-provisioned):
  mana-research           256m → 128m  (live ~57m, 22%)
  mana-mail               256m → 128m  (live ~11m bootstrap; legitimate growth headroom)
  mana-app-uload-server   256m → 128m  (live ~51m, 20%)
  mana-service-llm        256m → 128m  (live ~46m, 18%; thin proxy to upstream Ollama)
  mana-app-llm-playground 128m →  64m  (live ~22m, 17%; static-export demo)

Net delta: -496 MiB in compose limits — direct headroom for the
mana-web Vite build that previously OOM'd on the same VM. Combined
with the build-memory-headroom.sh wrapper (which still pauses the
monitoring stack during heavy builds), the Vite OOM risk is gone
on paper.

Containers will be recreated on next CD pass through `docker compose
up -d` (touched env or recipe). For the trimmed services, the new
limit is well above current RSS so nothing should OOM. For the bumped
services, the old limit was the tight one, so this only relaxes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:18:58 +02:00
Till JS
112e2cc1b4 feat(feedback): rename community → feedback (module + routes + domain)
Modul, Routen und Public-Domain heißen jetzt einheitlich "feedback":

- App-Registry: id 'community' → 'feedback', name 'Community' → 'Feedback',
  Icon Megaphone → HeartHalf (passt zum bereits-globalen heart-half-Icon
  am Module-Header und im PillNav-Usermenü)
- Modul-Config: communityModuleConfig → feedbackModuleConfig
- Routen-Refs: alle href/goto-Aufrufe in Modul-Views, MyWishesView,
  Onboarding-Wish, Profile-MyWishes auf /feedback umgestellt
- /feedback/+layout: Brand "Mana Community" → "Mana Feedback", Megaphone
  → HeartHalf, "In Mana öffnen"-CTA zeigt jetzt auf /?app=feedback
- Public-Mirror Domain: community.mana.how → feedback.mana.how
  (cloudflared-config.yml + docker-compose.macmini.yml CORS_ORIGINS +
  PUBLIC_MANA_ANALYTICS_URL_CLIENT). DNS muss separat angelegt werden.
- Settings-Section: Hilfe-Text nennt jetzt feedback.mana.how

Internal: community_show_real_name + community_karma DB-Spalten bleiben
(Migration nicht im Scope dieses Renames). Settings-Search-Index-Kategorie
'community' bleibt ebenfalls — sie spiegelt das DB-Schema, nicht den
User-Begriff.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:18:45 +02:00
Till JS
698e09b88c chore(deploy): auto-apply additive Drizzle schema migrations + RAM headroom for mana-web build
Two CD-pipeline ergonomics fixes that surfaced during the 2026-04-28
schema-drift sweep.

(C) Auto-apply additive Drizzle migrations
========================================
8 services use Drizzle (mana-auth/-credits/-events/-research/-mail/
-subscriptions/-user/-analytics) but the CD pipeline never ran their
`db:push` script, so 4 schema additions stayed undeployed for days
(auth.users.kind, credits.{sync_subscriptions,reservations},
event_discovery.*) until live PostgresErrors surfaced them.

New `scripts/mac-mini/safe-db-push.sh`:
- Uses `drizzle-kit generate` to write a probe SQL file (does NOT
  apply yet).
- Greps the generated SQL for destructive patterns (DROP TABLE/
  COLUMN/TYPE/SCHEMA/INDEX, ALTER COLUMN ... TYPE, RENAME).
- Refuses to auto-apply if any are found — operator must review and
  run `pnpm db:push --force` manually after pg_dump.
- Otherwise applies via `drizzle-kit push --force` and cleans up the
  probe artifacts.

CD step "Apply schema migrations" runs between build and container
restart, sourcing each changed service's DATABASE_URL from compose
config (with @postgres → @localhost rewrite for the host runner).
Failure aborts deploy before the new container starts — the old
container keeps running with the old schema, which matches.

(D) Build-time RAM headroom
========================================
mana-web's Vite build needs 8 GiB of Node heap; Colima's VM is sized
at 12 GiB; ~3.5 GiB of other containers run during deploy. The 2026-
04-28 mana-web deploy OOM'd at the Vite step ("cannot allocate
memory") and only succeeded on retry once concurrent traffic settled.

New `scripts/mac-mini/build-memory-headroom.sh`:
- `start`: stops every container matching `^mana-mon-` (the
  observability stack — VictoriaMetrics, Loki, Glitchtip, cAdvisor,
  umami, blackbox, exporters). Frees ~700 MiB.
- `stop`: restores them from the snapshot list captured at start.
- `wrap <cmd>`: pause + run + always-resume via trap.

CD wraps the build loop with start/stop, but only when mana-web is in
the change set — other services build well below 4 GiB and don't
need the headroom. The monitoring stack resumes before the migration
step so cAdvisor + exporters are back online for the deploy-metrics
collection.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 16:10:31 +02:00
Till JS
bcc21ca785 feat(geocoding): privacy hardening — sensitive-query block + coord quantization + extended cache TTL for public answers
Three independent defenses limit what public geocoding APIs (Photon,
Nominatim) can learn from our outbound traffic:

1. **Sensitive-query block** (`lib/sensitive-query.ts`)
   Queries matching the medical/mental-health/crisis-service keyword
   list (Hausarzt, Psychiater, Klinikum, HIV, Frauenhaus, …) are
   never forwarded to public APIs. The chain detects sensitivity at
   the route layer and runs the search in localOnly mode — providers
   with `privacy: 'public'` are filtered out before iteration begins.
   When no local provider is available (Pelias stopped), a sensitive
   query returns ok:true with results:[] and notice:
   'sensitive_local_unavailable' so the UI can show a sensible
   message instead of "no results".

   The keyword list is documented inline. False negatives are the
   risk; false positives just produce a 0-result UX hit (better
   trade-off).

2. **Coordinate quantization** (`lib/privacy.ts`)
   Forward-search focus.lat/lon: rounded to 2 decimals (~1.1km).
     Enough for the bias to work, hides exact GPS.
   Reverse-geocoding lat/lon: rounded to 3 decimals (~110m).
     City-block resolution — sufficient for "what's near me?",
     avoids reverse-geocoding the user's exact front door.
   Pelias always gets full precision; quantization only on the way
   out to public APIs. New `privacy: 'local' | 'public'` field on
   the GeocodingProvider interface drives this.

3. **Extended cache TTL for public answers**
   New `cache.publicTtlMs` config option, default 7 days (vs. 24h
   for local-provider answers). LRU cache extended with optional
   `ttlOverrideMs` per entry. Same query from N users → 1 outbound
   request to Photon/Nominatim. Strongest privacy lever we have
   over public providers (we can't change their logging, only the
   rate at which we feed them queries).
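Defense 2 in code form, a sketch of the quantization rule (function
names beyond the `privacy` field are hypothetical):

    export function quantize(value: number, decimals: number): number {
      const f = 10 ** decimals;
      return Math.round(value * f) / f;
    }

    export function coordsForProvider(
      lat: number,
      lon: number,
      privacy: 'local' | 'public',
      kind: 'focus' | 'reverse',
    ): { lat: number; lon: number } {
      if (privacy === 'local') return { lat, lon }; // Pelias: full precision
      const decimals = kind === 'focus' ? 2 : 3;    // ~1.1 km bias vs ~110 m reverse
      return { lat: quantize(lat, decimals), lon: quantize(lon, decimals) };
    }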

Threat coverage:
   ✓ User IP / identity hidden (already true — wrapper is the proxy)
   ✓ Exact GPS hidden (quantization)
   ✓ Sensitive query content protected (block)
   ~ Non-sensitive query content visible (acceptable trade-off)
   ~ Aggregate profiling reduced ~10–100× (cache)
   ✗ TLS-level traffic analysis, compelled disclosure (out of scope)

Tests: 141 (was 115). New coverage:
- privacy.test.ts: quantization rules (locks the privacy claim)
- sensitive-query.test.ts: positive matches across categories +
  documented false positives we accept
- chain.test.ts: localOnly mode end-to-end including the load-
  bearing assertion that public providers' search() must NEVER be
  called when the chain is in localOnly mode (no race window)
- cache.test.ts: per-entry ttlOverride longer + shorter than default

Live smoke verified end-to-end:
- "Hausarzt Konstanz" with Pelias down → no public API call,
  notice: 'sensitive_local_unavailable'
- "Konstanz" → falls through to Photon, notice: 'fallback_used'
- Reverse with high-precision GPS → Photon receives quantized
  coords, returns city-block-level result
2026-04-28 16:04:56 +02:00
Till JS
164d5dab8b fix(mana-llm): copy aliases.yaml into Docker image
main.py's lifespan handler loads `Path(__file__).parent.parent /
'aliases.yaml'` (= /app/aliases.yaml) on startup. The Dockerfile only
copied `src/`, so prod containers always crashlooped on first start
with `AliasConfigError: alias config not found at /app/aliases.yaml`
— which is why mana-llm has been silently absent from prod. Surfaced
today after a manual `gh workflow run cd-macmini.yml -f service=
mana-llm` actually attempted to start the container instead of
relying on a long-stale image.

Tested locally: container now starts cleanly, /health returns 200,
and `/v1/aliases` lists the configured chains.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:47:48 +02:00
Till JS
34b1690ea4 fix(mana-ai): copy missing workspace deps into Docker installer stage
The Dockerfile copied only @mana/shared-{hono,ai,logger}, but
services/mana-ai/package.json also depends on @mana/shared-research and
@mana/tool-registry (and @mana/tool-registry pulls in
@mana/shared-crypto transitively). Without those, pnpm couldn't
resolve the workspace symlinks and the container crashlooped on every
restart with:

  error: ENOENT reading "/app/services/mana-ai/node_modules/@mana/tool-registry"

Surfaced today after a manual `gh workflow run cd-macmini.yml -f
service=mana-ai` — mana-ai had never been deployed because no commit
since the CD pipeline started had touched its path. The first real
deploy hit the missing-COPY immediately.

Header comment in the Dockerfile now spells out direct + transitive
workspace deps so future additions don't drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:42:20 +02:00
Till JS
9a0cf5b676 fix(geocoding): bump PROVIDER_TIMEOUT_MS default 5s → 8s
First-probe DNS+TLS handshake against Nominatim can take >5s on a
cold start (verified locally: 642ms warm, sometimes 5-8s cold). The
old 5s default false-marked Nominatim unhealthy and the 30s health-
cache then locked us into a fallback-of-fallback gap. 8s gives
enough headroom for cold-start while still cutting off actually-
stuck connections.

Photon and Pelias don't hit this — Photon's CDN is consistently
sub-second and Pelias is on localhost / LAN. Only the public
Nominatim path warranted the bump, but the timeout is per-provider
shared so we adjust it globally.

Existing PROVIDER_TIMEOUT_MS env override still wins.
2026-04-28 15:39:09 +02:00
Till JS
15ab24bda8 feat(feedback): heart-half as the global feedback icon + inline form in the workbench
Addresses three problems:

1. **Icon unification**: every feedback affordance now carries the
   Phosphor `heart-half` icon (previously a Lightbulb/mixed set).
   Changed in the PillNav user menu, the ModuleShell header
   (FeedbackHook), and the Phosphor icon map. One icon everywhere, so
   recognition goes up.

2. **Inline instead of modal in workbench cards**: AppPage.svelte now
   renders the feedback form in the same slot as the help page;
   clicking the heart-half icon toggles the inline panel instead of
   laying a modal backdrop over the whole workbench. Help and feedback
   are mutually exclusive (one closes as soon as the other opens).

3. **Form body extracted**: FeedbackForm.svelte now contains the form
   without any chrome. FeedbackQuickModal uses it in modal mode
   (standalone routes, PillNav), AppPage in inline mode. One source,
   both surfaces stay in sync.

ModuleShell additionally accepts `onFeedback`/`feedbackOpen` props:
when set, the FeedbackHook component calls onClick instead of opening
its own modal, and the host (AppPage) takes over the rendering.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:36:52 +02:00
Till JS
f39e72340c chore(ci): drop 16 dead build-* jobs + per-product detect-changes branches
The CI workflow accumulated 16 build jobs for apps/services that no
longer exist after the 2026-04 unification (chat, todo, calendar, clock,
contacts, presi, storage, food, skilltree, telegram-stats-bot — all of
their /apps/web and /apps/backend dirs are gone) plus a structurally
broken `build-mana-web` (its Dockerfile starts FROM `sveltekit-base:local`,
an image only the Mac Mini self-hosted runner has). Every push has been
producing red CI runs from these dead jobs while the real production
deploys (cd-macmini.yml) succeeded.

Removed:
- detect-changes per-product outputs + force-build-all branches
- 220 lines of dead per-product detect-logic shell
- 19 lines of per-product summary block
- build-mana-web (broken; CD on Mac Mini covers prod)
- build-{chat,todo,calendar,clock,contacts,presi,storage,food,skilltree}-{web,backend}
- build-telegram-stats-bot

Kept (still build cleanly on ubuntu-latest):
- build-mana-{auth,search,sync,notify,api-gateway,crawler,media,credits}
- validate (PRs)
- auth-integration (PRs)

CI workflow shrank 1290 → 583 lines. The header comment now spells out
which services are in/out of CI and why.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:32:43 +02:00
Till JS
f1e4a39644 feat(geocoding): provider chain with Photon + Nominatim fallbacks
mana-geocoding now tries Pelias first, falls back to public Photon
(komoot.io) and finally to public Nominatim (OSM) when Pelias is
unhealthy or unreachable. The Places module's address lookup keeps
working even when the Pelias container is stopped — which it currently
is on the Mac mini, freeing 3 GB of RAM until Pelias gets migrated to
the GPU server.

Architecture:

  ProviderChain ─ tries providers in priority order, stops on first
                  success. A clean empty-results answer is definitive
                  (don't burn through public-API budget on a query that
                  legitimately has no match). Only network errors / 5xx
                  / 429 trigger fallthrough.

  HealthCache  ─ per-provider, 30s TTL. A failed health probe or a
                  failed search marks the provider unhealthy and skips
                  it for the rest of the cache window. Lazy refresh —
                  no background pinger.

  RateLimiter  ─ single-token FIFO queue, 1100ms gap by default.
                  Used to enforce Nominatim's 1 req/sec policy. Handles
                  abort during inter-task wait by releasing the busy
                  flag so later tasks aren't blocked.
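The chain's fallthrough rule condenses to a loop like this (interfaces
abbreviated; only the semantics are from this message):

    interface Provider {
      name: string;
      search(q: string): Promise<{ results: unknown[] }>;
    }

    declare function isHealthy(p: Provider): boolean;  // 30s health cache
    declare function markUnhealthy(p: Provider): void;

    async function chainSearch(providers: Provider[], q: string) {
      const tried: string[] = [];
      for (const p of providers) {
        if (!isHealthy(p)) continue; // skipped for the rest of the window
        tried.push(p.name);
        try {
          const { results } = await p.search(q);
          // A clean empty-results answer is definitive: don't burn the
          // public-API budget on a query that legitimately has no match.
          return { ok: true, provider: p.name, tried, results };
        } catch {
          markUnhealthy(p); // network error / 5xx / 429 → fall through
        }
      }
      return { ok: false, tried, results: [] };
    }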

Provider details:

  pelias    — primary, self-hosted DACH index, full OSM taxonomy in
              `peliasCategories`, no rate limit
  photon    — public komoot endpoint, GeoJSON shape, raw `osm_key:
              osm_value` mapped via lib/osm-category-map.ts. Faster
              than Nominatim, no advertised rate limit but be polite.
  nominatim — public OSM endpoint, strict 1 req/sec via the limiter,
              custom User-Agent required (otherwise 403). Last
              resort — fallback for when Photon is also down.

Response shape changes (additive only — existing callers keep
working):

  - results[].provider: 'pelias' | 'photon' | 'nominatim'
  - results[].peliasCategories: only present when Pelias served the
    request (was already absent on Pelias-API patch failures)
  - top-level provider: <name> + tried: <name[]> on success/error
  - new endpoint: GET /health/providers — per-provider snapshot

Configuration via env (defaults shipped):

  GEOCODING_PROVIDERS=pelias,photon,nominatim   # order matters
  PROVIDER_TIMEOUT_MS=5000
  PROVIDER_HEALTH_CACHE_MS=30000
  PHOTON_API_URL=https://photon.komoot.io
  NOMINATIM_API_URL=https://nominatim.openstreetmap.org
  NOMINATIM_USER_AGENT=mana-geocoding/1.0 (+https://mana.how; ...)
  NOMINATIM_INTERVAL_MS=1100

Testing: 115 tests green (was 42). New coverage:
  - osm-category-map.test.ts (47 cases over food/transit/shopping/
    leisure/work/other priority resolution)
  - rate-limiter.test.ts (FIFO, abort-during-wait, abort-during-sleep)
  - chain.test.ts (failover, empty-results-stops, health-cache,
    snapshot)
  - photon-normalizer.test.ts and nominatim-normalizer.test.ts (lock
    the wire-format mapping for both fallback providers)

Live smoke against public Photon verified — both /search and /reverse
return correctly normalized results with provider="photon" when Pelias
is unreachable.
2026-04-28 15:21:11 +02:00
Till JS
ff823bff60 fix(feedback): POST /api/v1/feedback reads appId from the X-App-Id header
The submit handler passed the body 1:1 to feedbackService.createFeedback.
Since CreateFeedbackInput doesn't include appId (the client sends it as
the X-App-Id header), every INSERT failed with "null value in column
app_id violates not-null constraint".

Also: added the lightbulb icon to the phosphor-icon map; without it,
the "Idee teilen" entry in the barMode variant of the user menu shows
no icon (label only).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:16:11 +02:00
Till JS
94d3277e2e feat(feedback): "Idee teilen" now lives in the PillNav user menu
Replaces the floating "Idee?" pill with an entry in the right-hand user
menu (Profile / Credits / Idee teilen / Logout). One affordance in one
place instead of two side by side.

- PillNavigation: new onFeedback prop + Lightbulb icon. When set, the
  entry replaces the legacy /feedback link in accountLinks and also
  appears at the top of the userMenuBarItems (barMode).
- UserMenuPanel: AccountLink now supports onClick? as an alternative to
  href?; action chips close the panel right after the click.
- (app)/+layout: the GlobalFeedbackPill mount is removed and
  FeedbackQuickModal is rendered state-bound (moduleContext derived
  from the path/?app= exactly as in the old pill).
- GlobalFeedbackPill.svelte deleted; nothing references it anymore.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:12:27 +02:00
Till JS
eaa1d7432b fix: silence two cosmetic boot-time devtools warnings
1. /api/auth/organization/get-active-member 400
   The Better-Auth org plugin returns 400 ("active organization not
   found") whenever the session has no activeOrganizationId yet — i.e.
   on every fresh inkognito login. The fetch was already tolerated
   (fetchActiveMember returns null on 400), but the network panel
   logged it as a noisy red row.

   Fix: gate the call on the localStorage hint. The hint is set by
   writeActiveSpaceHint() after every successful set-active, so its
   presence is a reliable proxy for "session has activeOrganizationId
   set". Without a hint we go straight to list + auto-activate
   Personal — same effective outcome, no 400.

2. Chrome "Autofocus processing was blocked" on /onboarding/name and
   /onboarding/wish
   The static `autofocus` attribute races the previous route's focus
   owner across the SvelteKit transition. Chrome refuses to honour
   autofocus when a document already has a focused element and warns.

   Fix: replace the attribute with `bind:this={el}` + a $effect that
   imperatively `el.focus()`s after `tick()` — by then the outgoing
   page has unmounted and there's no competing focus claim. The
   svelte-ignore directives are no longer needed and have been removed.
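The replacement pattern, sketched as a Svelte 5 component (element and
surrounding markup are placeholders):

    <script lang="ts">
      import { tick } from 'svelte';

      let el = $state<HTMLInputElement>();

      $effect(() => {
        // By the time tick() resolves, the outgoing page has unmounted
        // and released focus, so the imperative focus() isn't blocked.
        tick().then(() => el?.focus());
      });
    </script>

    <input bind:this={el} />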

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 15:10:15 +02:00
Till JS
537719032e infra(macmini): bump squeezed container memory limits
Mac Mini was running at 99% memory pressure with 8.6 GB swap active —
load was OK but every cold-container request was paying disk-I/O for
swapped pages. Container observations:

  redis      190/192 MB (99 %)  — close to OOM, hot keys evicting
  victoria   227/256 MB (89 %)  — constant GC pressure
  glitchtip  232/256 MB (91 %)
  umami      223/256 MB (87 %)

Each bumped to 384 MB, total +512 MB reservation in the Colima VM.
Headroom for that comes from stopping the Pelias stack (~3 GB freed)
in the same change-window.

Redis additionally gets `--maxmemory 320mb --maxmemory-policy allkeys-lru`
so the daemon evicts its own LRU keys at ~80 % of mem_limit instead of
letting the kernel OOM-kill the whole container. Safe for our usage —
Redis only holds rate-limit counters + sync hot-paths, no critical state.

Pelias stays stopped pending a migration to mana-gpu; mana-geocoding
will need a Nominatim fallback before the migration so the Places
module's address lookup keeps working.
2026-04-28 15:02:38 +02:00
Till JS
0c30a16eb5 fix: 4 boot-time noise + correctness bugs surfaced by post-deploy smoke
All four were pre-existing; the audit smoke-test made them visible. Fixed
together because they share a "boot console-warn cleanup" theme.

1. streaks ensureSeeded race (DexieError2 ×2)
   - Two boot-time liveQuery callers passed the `count > 0` check before
     either had written, then the second's `.add()` hit a ConstraintError.
   - Fix: cache the seed promise per module, run the existence check +
     bulkAdd inside one Dexie RW transaction, and only insert MISSING
     defs (preserves existing currentStreak/longestStreak counts); see
     the sketch after this list.

2. encryptRecord('agents', …) "wrong table name?" warning
   - The DEV-only check fired whenever a record carried none of the
     registered encrypted fields, regardless of whether anything could
     actually leak. `ensureDefaultAgent` writes a fresh agent row before
     `systemPrompt` / `memory` exist — pure noise.
   - Fix: drop the "no fields at all" branch. Keep the case-mismatch
     branch (the branch that actually catches silent plaintext leaks).

3. Passkey signInWithPasskey "Cannot read properties of undefined
   (reading 'allowCredentials')"
   - Client destructured `{ options, challengeId }` from the server's
     options response, but Better-Auth's `@better-auth/passkey` plugin
     returns the raw PublicKeyCredentialRequestOptionsJSON (no
     envelope) and tracks the challenge in a signed cookie. Both
     `options` and `challengeId` came back undefined; SimpleWebAuthn
     blew up the moment it tried to read the request shape. Verify body
     `{ challengeId, credential }` was likewise wrong — Better-Auth
     wants `{ response }`.
   - Fix: align both register and authenticate flows with Better-Auth's
     native shape on options + verify, and add `credentials: 'include'`
     on every fetch so the challenge cookie actually round-trips.
     Server's verify proxy now reads `parsed?.response?.id` for
     credentialID rate-limiting.

4. /api/v1/me/onboarding/ → 404
   - Hono's nested router (`app.route(prefix, sub)` + inner
     `app.get('/')`) matches the prefix-without-slash form only. The
     onboarding-status store sent the request with a trailing slash, so
     every login produced a 404 + a console warn.
   - Fix: client sends the path without trailing slash; mana-auth picks
     up `hono/trailing-slash` middleware as defense-in-depth so a future
     accidental trailing slash on any /me/* route 301-redirects instead
     of 404-ing.
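Fix 1 as a sketch (store shapes assumed; the per-module promise cache
plus the single rw transaction are the two halves of the fix):

    import Dexie, { type Table } from 'dexie';

    declare const db: Dexie;
    declare const defsTable: Table<{ id: string }, string>;

    const seedPromises = new Map<string, Promise<void>>();

    function ensureSeeded(moduleId: string, defaults: { id: string }[]): Promise<void> {
      let p = seedPromises.get(moduleId);
      if (!p) {
        // Existence check + insert inside ONE rw transaction: two racing
        // boot-time callers serialize here instead of both passing a
        // `count > 0` check and colliding on `.add()`.
        p = db.transaction('rw', defsTable, async () => {
          const existing = new Set(await defsTable.toCollection().primaryKeys());
          const missing = defaults.filter((d) => !existing.has(d.id));
          if (missing.length) await defsTable.bulkAdd(missing); // only MISSING defs
        });
        seedPromises.set(moduleId, p);
      }
      return p;
    }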

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:56:24 +02:00
Till JS
44f9155ed3 chore(dev): pnpm dev:analytics script + test-checklist mentions local-dev startup
Previously undocumented in the setup: for local web dev (5173),
mana-analytics must be running on 3064, otherwise FeedbackHook, the
toast poll, and /community throw ERR_CONNECTION_REFUSED. A convenience
script plus a note in the test checklist prevent the stumble.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 14:54:32 +02:00
Till JS
4237d84c18 i18n(drink+habits+picture): translate 3 list views via $_()
- drink/ListView: route through drink.list_view.* (today/log/empty
  + create form + ctx-menu); drops unused PencilSimple equivalent
- habits/ListView: route through habits.list_view.* (voice
  capture + tally grid + create form + ctx-menu); drops unused
  PencilSimple icon import
- picture/ListView: route through picture.list_view.* (drop overlay,
  action strip, view-mode titles, search placeholder, empty states,
  lightbox actions)

Baseline 833 → 818 (-15).
2026-04-27 22:36:57 +02:00
Till JS
0986d07a7d docs: feedback-hub manual test checklist
15 sections covering the Phase 3 end-to-end browser test flow:
onboarding wish, FeedbackHook + modal, public feed (logged in +
incognito), reactions + karma, status flow + loop closure,
AdminResponse, real-name toggle, Eulen-Profil (SSR + 404), threading,
Phase 3.F cleanup verification, founder whitelist, rate limit, voting
score, mobile responsiveness, quick DB sanity SQLs.

Plus a "when something breaks" debug path and known gaps (email
notifications, voice submit, trending, karma decay) listed as roadmap
markers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 22:33:28 +02:00
Till JS
0f1dbe9d4c i18n(locales): add drink+habits, extend picture for list-view sub 2026-04-27 22:32:59 +02:00
Till JS
7dfa1c74be i18n(body+mood+questions): translate picker/quick-log/question-detail via $_()
- body/components/ExercisePicker: route remaining German strings
  (dialog/close/filter_all/create-form) through body.exercisePicker.*;
  drops { default: '...' } fallbacks now that all keys resolve
- mood/components/QuickLog: route through mood.quick_log.*
- questions/views/DetailView: route through questions.detail.*
  + dynamic questions.status.<id> / questions.priority.<id> /
  questions.detail.depth_<id>; drops local statusLabels/priorityLabels/
  depthLabels const blocks (re-uses existing status+priority keys
  extended with `normal`/`urgent`)

Baseline 851 → 833 (-18).
2026-04-27 19:13:18 +02:00
Till JS
136d3fbf87 i18n(locales): extend body+mood+questions for picker/quicklog/question-detail 2026-04-27 19:11:02 +02:00