Move `parseUrls` out of stores/imports.svelte.ts (which transitively
imports Dexie via collections.ts) into a standalone parse-urls.ts so
the test file can exercise it without booting Dexie. The store re-
exports parseUrls so existing call sites (BulkImportForm, tools.ts)
keep working unchanged.
11 unit tests covering:
- empty + whitespace-only inputs
- newline / whitespace / comma / tab separator handling
- http + https accepted, ftp / mailto / javascript / file rejected
- bare domains rejected (the URL constructor can parse some of them as
  opaque scheme:path strings; our parser requires an explicit http(s)
  scheme)
- duplicate detection preserves first-occurrence order
- canonicalisation (trailing slash on root, query+fragment kept)
- mixed valid / invalid / duplicate token ordering
- title-prefixed-paste behaviour (strict — surfaces non-URL words
as invalid for the user to see)
- 50-URL stress check
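A minimal sketch of the function under test, assuming this shape (the
shipped implementation may differ in bucket names and canonicalisation
details):

    // parse-urls.ts: standalone, so tests never pull in Dexie
    export interface ParseResult {
      valid: string[]       // canonicalised, first-occurrence order
      duplicates: string[]
      invalid: string[]
    }

    export function parseUrls(input: string): ParseResult {
      const valid: string[] = []
      const duplicates: string[] = []
      const invalid: string[] = []
      const seen = new Set<string>()
      for (const token of input.split(/[\s,]+/).filter(Boolean)) {
        let url: URL
        try {
          url = new URL(token)
        } catch {
          invalid.push(token)   // bare domains: no explicit scheme
          continue
        }
        if (url.protocol !== 'http:' && url.protocol !== 'https:') {
          invalid.push(token)   // ftp / mailto / javascript / file
          continue
        }
        // url.href canonicalises (trailing slash on root, query+fragment kept)
        if (seen.has(url.href)) duplicates.push(url.href)
        else { seen.add(url.href); valid.push(url.href) }
      }
      return { valid, duplicates, invalid }
    }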
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds import_articles_from_urls tool to the articles module so the AI
Workbench can kick off a bulk-import job in one call. Auto-policy: the
job itself is the unit of approval, no per-article propose card.
- shared-ai schemas: declare the tool name + propose/auto policy
- articles/tools.ts: implement parseUrls + articleImportsStore.createJob
- consume-pickup.ts: handle the new event type
- events/catalog.ts: register article-import lifecycle events
- imports.svelte.ts: minor polish
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
apps/mana/apps/web/src/lib/modules/articles/components/:
- BulkImportForm.svelte: <textarea> + live-validating $derived parser
  (sketched below), counter chips for valid/duplicate/invalid,
  expandable invalid-list, submit creates a job + navigates to
  /articles/import/[jobId].
- JobsList.svelte: index of past + active jobs (newest first), status
pill + progress + per-counter chips. Click row → detail.
- JobDetailView.svelte: live header (status, progress bar, counters),
action bar (pause/resume/cancel/retry-failed/delete), per-item rows
with state pill + URL + open-link or error tooltip.
apps/mana/apps/web/src/routes/(app)/articles/import/:
- +page.svelte: hosts BulkImportForm + JobsList.
- [jobId]/+page.svelte: hosts JobDetailView.
AddUrlForm.svelte: small "Mehrere URLs auf einmal? → Bulk-Import" link
under the single-URL input so the existing flow surfaces the new path.
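The live-validating core of BulkImportForm, sketched (import path and
variable names are assumptions):

    <script lang="ts">
      import { parseUrls } from '$lib/modules/articles/stores/imports.svelte'
      let raw = $state('')
      // re-parses on every keystroke; the counter chips read the buckets
      const parsed = $derived(parseUrls(raw))
    </script>

    <textarea bind:value={raw}></textarea>
    <span>{parsed.valid.length} valid · {parsed.duplicates.length} duplicates</span>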
The whole UI is a pure liveQuery view — JobDetailView re-renders as
the server-worker writes counter updates and item-state transitions
through sync_changes. Worker tick + pickup-consumer (already shipped
in 5535f2da4 + a9bcd4183) close the loop end-to-end.
Phase 6 (Domain-Events + AI-Tool) and Phase 7 (Tests) follow.
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
apps/mana/apps/web/src/lib/modules/articles/:
- stores/imports.svelte.ts: new file. articleImportsStore with
createJob (bulkAdd N items + 1 job), pauseJob, resumeJob,
cancelJob, retryFailed, deleteJob. parseUrls exported as a pure
function — splits on whitespace+comma, validates http(s) scheme,
deduplicates while preserving input order; used by both the store
and the UI's $derived live-validation in Phase 5.
- queries.ts: toImportJob/toImportItem converters + useImportJobs
(index list), useImportJob (detail header), useImportItems (per-
job item list). All scope-aware via scopedForModule / scopedGet.
Job creation: createJob(urls) → jobId. Items written first so a worker
tick that races the job-write doesn't see a job with totalUrls=N but
fewer items reachable. Server-worker picks up state='pending' items
on its 2s tick.
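In sketch form (table names per this commit, field shapes assumed):

    // items first, job header second: a worker tick racing the job
    // write can never observe totalUrls=N with fewer items reachable
    async function createJob(urls: string[]): Promise<string> {
      const jobId = crypto.randomUUID()
      await db.articleImportItems.bulkAdd(
        urls.map((url) => ({ id: crypto.randomUUID(), jobId, url, state: 'pending' }))
      )
      await db.articleImportJobs.add({ id: jobId, status: 'running', totalUrls: urls.length })
      return jobId
    }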
retryFailed re-arms the job back to status='running' if it was 'done',
because the all-items-terminal condition had already triggered
auto-completion in the worker's counter-rollup pass.
deleteJob is soft (deletedAt stamp) on both job + items; already-
landed Article rows are NOT touched.
Phase 5 (UI) follows.
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves the cross-cutting drift that the app-registry sanity-test was
silently catching but BRANDING_ONLY exceptions papered over.
App-registry wiring:
- Register augur, broadcasts, invoices, timeline as workbench cards.
- Resolve agents↔ai-agents naming drift: workbench id is now `agents`
(matches MANA_APPS + the /agents route URL); folder stays `ai-agents`
for grouping with other ai-* modules.
Broadcast→broadcasts unification:
- module.config appId, MANA_APPS id, APP_ICONS key, all route appIds,
and the redundant APP_URL_OVERRIDES entry — all aligned with the
earlier folder rename so nothing diverges anymore.
Top-level routes for workbench-only modules:
- /goals, /myday, /kontext, /rituals, /automations, /activity — thin
RoutePage wrappers around the existing module ListViews.
- /timeline becomes a real module (ListView extracted from the route),
route shrinks to a 12-line wrapper.
Food unarchive:
- packages/shared-branding/src/mana-apps.ts: remove `archived: true`
from food entry. The module is fully wired (registered, synced,
routed, with AI tools); the flag was outdated.
i18n cleanup:
- Rename ai-agents → agents key in all 5 apps locales.
- Drop dead "observatory" key from all 5 nav locales (route folder was
removed in 7bca16dfa).
New CI guard — scripts/validate-tier-patches.mjs:
- Scans for `LOCAL TIER PATCH — revert before release` markers.
- Default: informational list (does not fail).
- Strict mode (MANA_TIER_PATCH_STRICT=1) for release/RC pipeline.
- Wired into validate:all.
Spec update:
- registry.spec.ts WORKBENCH_ONLY/BRANDING_ONLY: documented Settings
family + AI Studio surfaces + intentionally-internal modules so the
drift guard fires only on real drift.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cold-start fetches from the mana-geocoding container to photon-self
on mana-gpu (over WSL2 mirrored networking) consistently take >10s on
the first probe and ~2s once warm. The previous 8s default caused the
chain to false-mark photon-self unhealthy on every cold path, leaking
to public photon for the next 30s health-cache window — and pinning
the public-photon answer in the 7d cache (now shortened to 1h).
Also wires the docker-compose macmini env to honor PROVIDER_TIMEOUT_MS
and CACHE_PUBLIC_TTL_MS overrides so production picks up the new
values without a code rebuild.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First demo persona live on prod: chor-taegerwilen@mana.how.
Contents:
- Research brief with sources, IDs, module mapping, pitch hooks
- data.ts: 54 members (S/A/T/B complete), board, choir director,
  appointments April–June 2026, 5 concerts in 2026, concert archive
  2015–2025, kontextDoc Markdown
- seed.ts: idempotent Bun script, writes directly into
  mana_sync.sync_changes via SSH tunnel (5433). Sets the RLS context,
  cleans up prior demo-seed rows, writes 118 records across
  kontext / contacts / calendar+timeblocks / events / library /
  notes / website / ai-missions.
Pitch hook: the club was already a ClubDesk customer, so the Mana
replacement is the direct migration story.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Documents the per-demo-persona workflow: research → live account on
Mac Mini prod → club space → idempotent seed script → smoke test.
Includes the module mapping (appId/tableName), common pitfalls (prod
schema drift field_timestamps vs field_meta, forced RLS on
sync_changes), and lessons from persona 1 (Chor Tägerwilen).
The discarded fork-system plan does not stay in the repo; see the
memory pointer project_demo_personas_workflow.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Watches `articleExtractPickup` via liveQuery. For each row the server-
worker drops into the inbox:
1. Look up the matching `articleImportItems` row. Stale → just clean
the inbox.
2. Dedupe race: if the URL has been single-saved meanwhile, point
the import item at the existing article (state='duplicate'),
don't create a second row.
3. Happy path: call existing articlesStore.saveFromExtracted (which
runs encryptRecord + articleTable.add and emits ArticleSaved)
→ flip item to 'saved' (or 'consent-wall' on warning).
4. Delete the pickup row so the inbox stays empty in steady state.
Multi-tab coordination via `navigator.locks.request('mana:articles:pickup')`
with `ifAvailable: true` — only the lock-holder consumes; other tabs
just observe the liveQuery and exit. Falls back to per-row in-memory
dedupe when the Locks API isn't available; the field-LWW server merge
forgives the rare double-process.
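The acquisition looks roughly like this (Web Locks callback semantics;
the PickupRow type and consume function are assumptions):

    type PickupRow = { id: string; itemId: string; url: string }
    declare function consume(rows: PickupRow[]): Promise<void>

    // Only the lock-holder consumes; other tabs observe and exit.
    async function consumePickups(rows: PickupRow[]): Promise<void> {
      if (!('locks' in navigator)) {
        await consume(rows)      // fallback: per-row in-memory dedupe
        return
      }
      await navigator.locks.request(
        'mana:articles:pickup',
        { ifAvailable: true },
        async (lock) => {
          if (!lock) return      // another tab holds the lock
          await consume(rows)
        },
      )
    }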
Wired from data-layer-listeners.ts so it boots once with the rest of
the data layer and disposes on layout unmount.
End-to-end pipeline now live:
Client write items(state='pending')
→ sync_changes
→ server-worker tick (Phase 2)
→ Pickup row + state='extracted'
→ sync pull → liveQuery
→ saveFromExtracted (encrypt) → flip 'saved' / 'duplicate' / 'consent-wall'
→ delete pickup row
What's still needed for first user-visible test: Phase 4 (store
methods to create a job) + Phase 5 (UI). Without those there's no
way yet to inject items.
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pelias was retired from the Mac mini on 2026-04-28; photon-self
(self-hosted Photon on mana-gpu) has been the live primary since then.
This removes the now-dead Pelias adapter, config, tests, and the
services/mana-geocoding/pelias/ stack — the entire compose file, the
geojsonify_place_details.js patch, the setup.sh import script.
Provider chain is now `photon-self → photon → nominatim`. The chain
keeps its `privacy: 'local' | 'public'` split, sensitive-query
blocking, coord quantization, and aggressive caching unchanged.
Three direct calls to nominatim.openstreetmap.org that bypassed
mana-geocoding now route through the wrapper:
- citycorners/add-city + citycorners/cities/[slug]/add use the shared
searchAddress() client (browser → same-origin proxy → mana-geocoding
→ photon-self).
- memoro mobile drops its OSM reverse-geocoding fallback entirely;
Expo's on-device reverse-geocoding stays as the sole path. Routing
through the wrapper would require a memoro-server proxy endpoint —
a follow-up if Expo's quality proves insufficient.
Other behavioral changes:
- CACHE_PUBLIC_TTL_MS dropped from 7d → 1h. The long TTL was a
privacy-amplification trick from the Pelias era; with photon-self
serving the bulk of traffic, a transient cross-LAN blip was pinning
cached fallback answers for days. 1h gives quick recovery.
- /health/pelias renamed to /health/photon-self; prometheus blackbox
config + status-page generator updated.
- mana-geocoding container no longer needs `extra_hosts:
host.docker.internal:host-gateway` (was only there for the
Pelias-on-host-network era).
113 tests passing. CLAUDE.md rewritten to reflect the post-Pelias
architecture.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three new sync-tracked Dexie tables under the articles appId:
articleImportJobs — job header (counters, status, lease metadata).
articleImportItems — one row per URL in a job, state-machine driven.
articleExtractPickup — short-lived server→client handoff inbox.
URL stays plaintext on items by necessity — the server-worker reads it
without master-key access, same rationale as articles.originalUrl. The
extracted article eventually lands encrypted in the existing `articles`
table; bulk-import rows hold only pointers.
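In Dexie terms, roughly (version number and index choices here are
assumptions, not the shipped schema):

    db.version(42).stores({
      articleImportJobs:    'id, status, updatedAt',
      articleImportItems:   'id, jobId, state, url',
      articleExtractPickup: 'id, itemId',
    })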
Plan: docs/plans/articles-bulk-import.md (full architecture, 7 phases,
test matrix, edge-cases). Phase 2 already shipped in 5535f2da4 (worker);
this commit lays the schema underneath it.
Originally committed as b2f4e8314, lost during a parallel reset,
restored here via cherry-pick.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Decision report: status flipped to MIGRATED; added migration log with
five WSL2 gotchas (bzip2 missing, no official Photon image,
firewall=true blocks cross-LAN, vmIdleTimeout=-1 ineffective,
PowerShell pre-expansion of bash $(...)) and resource snapshot.
- mana-geocoding CLAUDE.md: PHOTON_SELF_API_URL note now reflects live
primary status on mana-gpu since 2026-04-28.
- photon-self/: operator scripts for the weekly DB refresh — update.sh
(atomic-swap with rollback), systemd unit + timer (Sun 03:30 +30min
jitter, Persistent=true), README with re-installation instructions
for DR. Currently installed and enabled on mana-gpu.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wrapper supports a `photon-self` provider when PHOTON_SELF_API_URL
is set, but the compose file wasn't forwarding the env-var into the
container. Add it as an env-substitution from .env.macmini so flipping
the GPU-server-hosted Photon on/off is one line in the env file.
Empty string = slot disabled (back-compat with the old config).
Required for the 2026-04-28 Photon-on-mana-gpu migration to take effect.
The wrapper code that consumes this env-var landed in 153ad8049
(dual-Photon support).
For short posts (or when mana-llm failed), the auto-title fallback
`feedbackText.slice(0, 80)` stored the body verbatim as the title, so
the card showed the same text twice.
Two layers of protection:
1. **Server (mana-analytics)**: the catch branch drops the prefix
   fallback (title stays null). A new isRedundantTitle() heuristic
   (sketched below) additionally discards auto-titles that are just a
   truncated prefix of the body (whitespace collapse + ellipsis strip).
2. **Frontend (ItemCard)**: defensive showTitle computed; older DB
   items with a redundant title automatically render only the body,
   no database cleanup needed.
The title slot stays visible for real auto-summaries and manual titles.
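The heuristic from (1), sketched (normalization details assumed):

    // True when the auto-title is just a truncated prefix of the body.
    export function isRedundantTitle(title: string, body: string): boolean {
      const norm = (s: string) =>
        s.replace(/\s+/g, ' ')            // collapse whitespace
         .replace(/[.\u2026]+\s*$/u, '')  // strip trailing ellipsis/dots
         .trim()
         .toLowerCase()
      const t = norm(title)
      return t.length > 0 && norm(body).startsWith(t)
    }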
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Colima starts its Linux VM with no swap configured. Without swap the
kernel responds to memory pressure by invoking the OOM-killer instead
of paging out cold pages — meaning a transient peak (mana-web Vite
build with 8 GiB heap landing on top of the running container set)
takes down a container instead of just stalling for a few seconds.
The 2026-04-28 Mac Mini RAM audit found:
- VM allocated: 12 GiB (1 GiB kernel overhead → 11 GiB user)
- Container RSS: ~4 GiB pinned
- Available headroom: ~7.6 GiB
- mana-web Vite peak: ~8 GiB
That's 400 MiB over the limit during builds, which is why we previously
needed the build-memory-headroom.sh wrapper to pause monitoring (frees
~700 MiB temporarily). Swap is the safer second backstop — Linux only
swaps under actual pressure (used=0 right after creation, confirmed
free -h), and the kernel can fall back to paging cold container memory
to give a build the burst it needs without killing anything.
The new step in migrate-to-colima.sh:
- creates /swap (2 GiB, root-only)
- mkswap + swapon
- persists in /etc/fstab so the VM remounts it on every restart
- idempotent — re-runs are no-ops
Already provisioned on the live VM via:
ssh mana-server 'colima ssh -- "sudo fallocate -l 2G /swap && \
sudo chmod 600 /swap && sudo mkswap /swap && sudo swapon /swap"'
Verified: free -h shows Swap: 2.0Gi total / 0B used. Currently dormant.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous version of the cleanup trap only deleted SQL files left
by drizzle-kit's probe-generate, but not the matching `_snapshot.json`
(56 KB per service) or the journal entry. Each deploy then leaked one
snapshot file into the runner's working tree.
Surfaced after my own local smoke-test: ran the script against
mana-auth, found a 56 KB `drizzle/meta/0000_snapshot.json` left
behind that I had to clean up manually.
The trap now:
- Computes the full set of files added under `drizzle/` during this
  run (not just SQL) and removes every one of them.
- Strips the probe's journal entry via jq.
- If the `drizzle/` dir didn't exist before the run, removes it
  entirely. Otherwise sweeps empty meta/ subdirs the run created.
Smoke-tested locally: working tree is clean after each run regardless
of whether drizzle/ existed beforehand.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Setting a reaction in Feed/Workbench used to land you straight in the
detail view: the button click bubbled through to the card's onclick.
Fixed at the source: ReactionBar.handleClick now calls
e.stopPropagation() before firing onToggle. That makes it work
everywhere reactions sit inside a clickable shell (feed cards,
MyReactedView, detail page, future surfaces).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two layout fixes for the Lasts ListView:
1. Tab bar: status filters (Alle/Vermutet/Bestätigt/Aufgehoben) get inline
Phosphor icons + parenthesized counters. Inbox/Meilensteine/Einstellungen
now render as full icon+label tabs in a `border-left`-separated cluster
instead of icon-only links. The whole bar is `overflow-x: auto` with
hidden scrollbars (matches calendar/DateStrip pattern), so narrow
workbench cards scroll horizontally instead of wrapping.
2. Quick-add: collapses two rows (input + Vermutet/Bestätigt pill toggle)
into one. Mode is a `<select>` styled like the category select, sitting
to the right of the title input. Removes the visual duplication where
the toggle pills mimicked the status tabs above.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The chain now distinguishes two Photon instances ahead of the GPU
migration:
photon-self privacy: 'local' (self-hosted on mana-gpu)
photon privacy: 'public' (komoot.io, last-resort fallback)
Both wrap the same `PhotonProvider` class with different config — only
the URL, name, and privacy stance differ. The new ProviderName variant
'photon-self' lets the chain track per-provider health for them
independently (a single 'photon' slot would collide in the health
Map).
Opt-in registration: `photon-self` is only built when
PHOTON_SELF_API_URL is set in the env. When unset (current state),
the chain has the same shape as before — full backward compat. After
the GPU migration, flipping the env-var on is the only deploy step
needed:
PHOTON_SELF_API_URL=http://192.168.178.11:2322
Default chain order updated to:
photon-self,pelias,photon,nominatim
^^^^^^^^^^^ silently skipped if not registered (env unset)
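Registration sketch (config shape assumed; the real createChain also
applies the GEOCODING_PROVIDERS ordering):

    const providers: GeocodingProvider[] = []
    if (process.env.PHOTON_SELF_API_URL) {
      // opt-in: the slot only exists when the env-var is set
      providers.push(new PhotonProvider({
        name: 'photon-self',
        baseUrl: process.env.PHOTON_SELF_API_URL,
        privacy: 'local',
      }))
    }
    providers.push(new PhotonProvider({
      name: 'photon',
      baseUrl: 'https://photon.komoot.io',
      privacy: 'public',
    }))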
The privacy guarantee is structural: photon-self carries privacy:
'local', so the existing sensitive-query block from the previous
hardening commit now has a real local backend post-migration —
medical/crisis-service queries get real results instead of the
"sensitive_local_unavailable" notice.
Tests: 148 (was 141). New coverage:
- src/__tests__/app.test.ts: createChain registration logic — verifies
photon-self appears iff PHOTON_SELF_API_URL is set, ordering
honored, GEOCODING_PROVIDERS env-var filter respected
- providers/__tests__/photon-normalizer.test.ts: provider field
carries 'photon' or 'photon-self' based on the call argument
Recon of mana-gpu (2026-04-28): Windows 11 Pro Build 26200, 64 GB
RAM (56 GB free), 739 GB disk free, no WSL2/Docker yet, no native
GPU services running. Setup plan documented in
docs/runbooks/photon-on-mana-gpu.md (3–4 h, ~1 h of which is
download/unpack waiting).
Three follow-up fixes after the first migration-step deploy revealed
gaps:
1. `pnpm dlx drizzle-kit` doesn't work: the drizzle.config.ts file
   itself does `import { defineConfig } from 'drizzle-kit'`, and
   Node's resolver only finds that import via local node_modules,
   not pnpm's dlx cache. Reverted to plain `pnpm exec drizzle-kit`
   and now require the workspace to be installed.
2. CD now runs `pnpm install --filter ./services/<svc>...
   --frozen-lockfile --ignore-scripts` once at the start of the
   migration step for every Drizzle service in the deploy. Path-based
   filter (not name-based) because our service package names follow no
   uniform convention (`@mana/auth` vs `@mana/credits-service` vs
   `@mana/events`). pnpm's lockfile cache makes second-and-later
   runs near-instant.
3. Dropped the `--silent` flag from `pnpm exec drizzle-kit --version`;
   it isn't a recognised pnpm-exec flag and causes a 254 exit code,
   making the script's "is drizzle-kit available?" probe always fail.
Smoke-tested locally — script now runs cleanly against mana-auth's
schema, reports "no changes detected", cleans up the probe SQL file.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Letzter "community"-Rest aus dem Feedback-Hub räumt sich auf — DB-Spalten,
Settings-Search-Index, Section-Name und i18n-Keys einheitlich auf
"feedback":
- DB: auth.users.community_show_real_name → feedback_show_real_name,
community_karma → feedback_karma. Migration unter
services/mana-auth/sql/009_rename_community_to_feedback.sql (manuell
via psql, in Drizzle-Schema beider Services nachgezogen).
- mana-auth/me.ts: PATCH /api/v1/me/profile akzeptiert jetzt
feedbackShowRealName und gibt es im Response zurück.
- mana-analytics: feedback.ts liest authUsers.feedbackShowRealName /
feedbackKarma, redact() + Karma-Increment + Tests entsprechend.
- Frontend: CommunitySection.svelte → FeedbackIdentitySection.svelte
(Datei umbenannt, Property-Namen + Toast-Texte aktualisiert,
HeartHalf-Icon, "Feedback-Identität" als Title).
- searchIndex.ts: CategoryId 'community' → 'feedback', anchor
'community-identity' → 'feedback-identity'.
- i18n (5 locales): settings.categories.community → .feedback,
settings.search.community_* → feedback_*. Labels DE/EN/FR/IT/ES
jeweils auf "Feedback" + "im Feedback-Feed" angepasst.
38/38 Integration-Tests grün, validate:i18n-parity sauber, svelte-check 0.
BREAKING (intern, nicht live): Frontend, das gegen die alten Spalten- /
Property-Namen aus dem PATCH-Response geht, fällt jetzt um. Kein
Production-Risiko da Hub noch nicht öffentlich.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Compares Pelias / Nominatim / Photon for self-hosting on the GPU
server, with current (2026-04-28) numbers from upstream docs +
GraphHopper's Photon-data downloads:
Photon Europe pre-built dump: 30.6 GB, weekly refresh
Photon Germany pre-built dump: 5.8 GB, weekly refresh
Nominatim Germany import: ~100 GB disk, 8–12 h, 12 GB RAM
Pelias DACH (current): 3 GB RAM, 4 services, JS patch hack
Recommendation: Photon Europe-wide on mana-gpu. Single Java process,
embedded OpenSearch, no PBF import (download a tarball, restart),
weekly auto-updates from GraphHopper, integrates with the wrapper's
existing PhotonProvider via just an env-var change.
Once self-hosted, Photon registers as `privacy: 'local'` — the
sensitive-query block (Hausarzt, Klinikum, …) gets a real local
backend and no longer has to return empty results when Pelias is
down. Public Photon stays in the chain as a `privacy: 'public'`
last-resort fallback.
Migration plan included (~3–4 h total, ~1 h waiting), with
phase-by-phase risk assessment.
Pelias does not return — the 3 GB RAM + multi-container + patched
JS combination has no operational case once we have a self-hosted
Photon that already matches our wrapper's wire format.
The Mac Mini runner doesn't run `pnpm install` (every service builds
inside Docker), so per-service node_modules/.bin/drizzle-kit isn't
present. The first deploy with the new migration step printed
`ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL Command "drizzle-kit" not found`
and silently treated every service as "no schema changes — clean".
Pick the invocation mode at runtime: `pnpm exec drizzle-kit` if a local
binary exists, otherwise `pnpm dlx drizzle-kit`. dlx caches the package
in the global pnpm store after the first fetch, so subsequent calls
are fast. drizzle-kit reads its config from cwd, so it still picks up
each service's drizzle.config.ts correctly.
Smoke-tested locally against services/mana-auth — script reports
"no schema changes — clean" instead of failing silently.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The mana-geocoding wrapper now returns `notice: 'fallback_used' |
'sensitive_local_unavailable'` alongside results so the UI can show
the user *why* a query had unusual behavior. This commit wires that
all the way through the Places module's address-autocomplete inputs.
Geocoding client (lib/geocoding/index.ts):
- Add `GeocodingNotice` and `SearchOutcome` types
- Add `searchAddressDetailed` and `reverseGeocodeDetailed` — same
semantics as the existing functions but return the wrapper's
provider/notice metadata. Existing `searchAddress`/`reverseGeocode`
stay backward-compatible (they call the detailed variants under
the hood and discard the metadata; see the sketch after this list).
- Extend GeocodingResult with optional `provider` field.
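The backward-compat layering, sketched (the fetch path and response
shape are assumptions):

    export type GeocodingNotice = 'fallback_used' | 'sensitive_local_unavailable'

    export interface SearchOutcome {
      results: GeocodingResult[]
      provider?: string
      notice?: GeocodingNotice
    }

    export async function searchAddressDetailed(q: string): Promise<SearchOutcome> {
      const res = await fetch(`/api/geocoding/search?text=${encodeURIComponent(q)}`)
      const body = await res.json()
      return { results: body.results ?? [], provider: body.provider, notice: body.notice }
    }

    // existing callers keep their old signature and types
    export async function searchAddress(q: string): Promise<GeocodingResult[]> {
      const { results } = await searchAddressDetailed(q)
      return results   // provider/notice metadata discarded
    }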
Places ListView (the only current consumer that exposes typed
addresses to users):
- Both autocomplete inputs (tracking-edit + main address-search)
now use searchAddressDetailed and surface notices inline.
- 'sensitive_local_unavailable' renders an amber explainer block in
the dropdown — title + body — so the user knows why their medical
query returned 0 hits without leaking the search to a public API.
- 'fallback_used' renders a small "≈ ungefähr" footer badge so users
understand the result came from public OSM (less precise but
still valid).
- The dropdown opens when EITHER results exist OR a notice is
present — sensitive blocked queries with empty results still
surface their explainer.
i18n: new `places.geocoding_notice.*` sub-namespace in all 5 locales
(de/en/es/fr/it) — 4 strings each. All validators green.
Other consumers (places DetailView, events, photos, contacts) keep
the existing searchAddress/reverseGeocode calls — they don't need
the privacy notices today and would just add noise. They can adopt
the detailed variant if/when the use case warrants it.
The compose mem_limits hadn't been revisited in months. Today's
live `docker stats` snapshot revealed:
- 5 services using <25% of their limit (waste)
- 3 services using >70% of their limit (OOM risk during spikes)
Adjusted both directions, no container removal, no behaviour change.
Each tweak carries a 1-line rationale in the file with the observed
RSS that motivated it.
Bumped (tight → comfortable):
mana-mon-cadvisor 128m → 160m (was 76% — bursts during stat collection)
mana-mon-alert-notifier 32m → 48m (was 79% — alert-bursts queue up)
mana-core-media 128m → 160m (was 63% — image-thumb spikes)
Trimmed (over-provisioned):
mana-research 256m → 128m (live ~57m, 22%)
mana-mail 256m → 128m (live ~11m bootstrap; legitimate growth headroom)
mana-app-uload-server 256m → 128m (live ~51m, 20%)
mana-service-llm 256m → 128m (live ~46m, 18%; thin proxy to upstream Ollama)
mana-app-llm-playground 128m → 64m (live ~22m, 17%; static-export demo)
Net delta: -496 MiB in compose limits — direct headroom for the
mana-web Vite build that previously OOM'd on the same VM. Combined
with the build-memory-headroom.sh wrapper (which still pauses the
monitoring stack during heavy builds), the Vite OOM risk is gone
on paper.
Containers will be recreated on next CD pass through `docker compose
up -d` (touched env or recipe). For the trimmed services, the new
limit is well above current RSS so nothing should OOM. For the bumped
services, the old limit was the tight one, so this only relaxes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Modul, Routen und Public-Domain heißen jetzt einheitlich "feedback":
- App-Registry: id 'community' → 'feedback', name 'Community' → 'Feedback',
Icon Megaphone → HeartHalf (passt zum bereits-globalen heart-half-Icon
am Module-Header und im PillNav-Usermenü)
- Modul-Config: communityModuleConfig → feedbackModuleConfig
- Routen-Refs: alle href/goto-Aufrufe in Modul-Views, MyWishesView,
Onboarding-Wish, Profile-MyWishes auf /feedback umgestellt
- /feedback/+layout: Brand "Mana Community" → "Mana Feedback", Megaphone
→ HeartHalf, "In Mana öffnen"-CTA zeigt jetzt auf /?app=feedback
- Public-Mirror Domain: community.mana.how → feedback.mana.how
(cloudflared-config.yml + docker-compose.macmini.yml CORS_ORIGINS +
PUBLIC_MANA_ANALYTICS_URL_CLIENT). DNS muss separat angelegt werden.
- Settings-Section: Hilfe-Text nennt jetzt feedback.mana.how
Internal: community_show_real_name + community_karma DB-Spalten bleiben
(Migration nicht im Scope dieses Renames). Settings-Search-Index-Kategorie
'community' bleibt ebenfalls — sie spiegelt das DB-Schema, nicht den
User-Begriff.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two CD-pipeline ergonomics fixes that surfaced during the 2026-04-28
schema-drift sweep.
(C) Auto-apply additive Drizzle migrations
========================================
8 services use Drizzle (mana-auth/-credits/-events/-research/-mail/
-subscriptions/-user/-analytics) but the CD pipeline never ran their
`db:push` script, so 4 schema additions stayed undeployed for days
(auth.users.kind, credits.{sync_subscriptions,reservations},
event_discovery.*) until live PostgresErrors surfaced them.
New `scripts/mac-mini/safe-db-push.sh`:
- Uses `drizzle-kit generate` to write a probe SQL file (does NOT
apply yet).
- Greps the generated SQL for destructive patterns (DROP TABLE/
COLUMN/TYPE/SCHEMA/INDEX, ALTER COLUMN ... TYPE, RENAME).
- Refuses to auto-apply if any are found — operator must review and
run `pnpm db:push --force` manually after pg_dump.
- Otherwise applies via `drizzle-kit push --force` and cleans up the
probe artifacts.
CD step "Apply schema migrations" runs between build and container
restart, sourcing each changed service's DATABASE_URL from compose
config (with @postgres → @localhost rewrite for the host runner).
Failure aborts deploy before the new container starts — the old
container keeps running with the old schema, which matches.
(D) Build-time RAM headroom
========================================
mana-web's Vite build needs 8 GiB of Node heap; Colima's VM is sized
at 12 GiB; ~3.5 GiB of other containers run during deploy. The
2026-04-28 mana-web deploy OOM'd at the Vite step ("cannot allocate
memory") and only succeeded on retry once concurrent traffic settled.
New `scripts/mac-mini/build-memory-headroom.sh`:
- `start`: stops every container matching `^mana-mon-` (the
observability stack — VictoriaMetrics, Loki, Glitchtip, cAdvisor,
umami, blackbox, exporters). Frees ~700 MiB.
- `stop`: restores them from the snapshot list captured at start.
- `wrap <cmd>`: pause + run + always-resume via trap.
CD wraps the build loop with start/stop, but only when mana-web is in
the change set — other services build well below 4 GiB and don't
need the headroom. The monitoring stack resumes before the migration
step so cAdvisor + exporters are back online for the deploy-metrics
collection.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Privacy hardening: sensitive-query block, coordinate quantization +
extended cache TTL for public answers.
Three independent defenses limit what public geocoding APIs (Photon,
Nominatim) can learn from our outbound traffic:
1. **Sensitive-query block** (`lib/sensitive-query.ts`)
Queries matching the medical/mental-health/crisis-service keyword
list (Hausarzt, Psychiater, Klinikum, HIV, Frauenhaus, …) are
never forwarded to public APIs. The chain detects sensitivity at
the route layer and runs the search in localOnly mode — providers
with `privacy: 'public'` are filtered out before iteration begins.
When no local provider is available (Pelias stopped), a sensitive
query returns ok:true with results:[] and notice:
'sensitive_local_unavailable' so the UI can show a sensible
message instead of "no results".
The keyword list is documented inline. False negatives are the
risk; false positives just produce a 0-result UX hit (better
trade-off).
2. **Coordinate quantization** (`lib/privacy.ts`; sketched after this list)
Forward-search focus.lat/lon: rounded to 2 decimals (~1.1km).
Enough for the bias to work, hides exact GPS.
Reverse-geocoding lat/lon: rounded to 3 decimals (~110m).
City-block resolution — sufficient for "what's near me?",
avoids reverse-geocoding the user's exact front door.
Pelias always gets full precision; quantization only on the way
out to public APIs. New `privacy: 'local' | 'public'` field on
the GeocodingProvider interface drives this.
3. **Extended cache TTL for public answers**
New `cache.publicTtlMs` config option, default 7 days (vs. 24h
for local-provider answers). LRU cache extended with optional
`ttlOverrideMs` per entry. Same query from N users → 1 outbound
request to Photon/Nominatim. Strongest privacy lever we have
over public providers (we can't change their logging, only the
rate at which we feed them queries).
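A sketch of the quantization rule from (2) (rounding helper assumed):

    // round only on the way out to public providers
    const round = (v: number, places: number): number => {
      const f = 10 ** places
      return Math.round(v * f) / f
    }

    export const quantizeFocus = (lat: number, lon: number) =>
      ({ lat: round(lat, 2), lon: round(lon, 2) })   // ~1.1 km cells

    export const quantizeReverse = (lat: number, lon: number) =>
      ({ lat: round(lat, 3), lon: round(lon, 3) })   // ~110 m cells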
Threat coverage:
✓ User IP / identity hidden (already true — wrapper is the proxy)
✓ Exact GPS hidden (quantization)
✓ Sensitive query content protected (block)
~ Non-sensitive query content visible (acceptable trade-off)
~ Aggregate profiling reduced ~10–100× (cache)
✗ TLS-level traffic analysis, compelled disclosure (out of scope)
Tests: 141 (was 115). New coverage:
- privacy.test.ts: quantization rules (locks the privacy claim)
- sensitive-query.test.ts: positive matches across categories +
documented false positives we accept
- chain.test.ts: localOnly mode end-to-end including the load-
bearing assertion that public providers' search() must NEVER be
called when the chain is in localOnly mode (no race window)
- cache.test.ts: per-entry ttlOverride longer + shorter than default
Live smoke verified end-to-end:
- "Hausarzt Konstanz" with Pelias down → no public API call,
notice: 'sensitive_local_unavailable'
- "Konstanz" → falls through to Photon, notice: 'fallback_used'
- Reverse with high-precision GPS → Photon receives quantized
coords, returns city-block-level result
main.py's lifespan handler loads `Path(__file__).parent.parent /
'aliases.yaml'` (= /app/aliases.yaml) on startup. The Dockerfile only
copied `src/`, so prod containers always crashlooped on first start
with `AliasConfigError: alias config not found at /app/aliases.yaml`
— which is why mana-llm has been silently absent from prod. Surfaced
today after a manual `gh workflow run cd-macmini.yml -f service=mana-llm`
actually attempted to start the container instead of relying on a
long-stale image.
Tested locally: container now starts cleanly, /health returns 200,
and `/v1/aliases` lists the configured chains.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Dockerfile copied only @mana/shared-{hono,ai,logger}, but
services/mana-ai/package.json also depends on @mana/shared-research and
@mana/tool-registry (and @mana/tool-registry pulls in
@mana/shared-crypto transitively). Without those, pnpm couldn't
resolve the workspace symlinks and the container crashlooped on every
restart with:
error: ENOENT reading "/app/services/mana-ai/node_modules/@mana/tool-registry"
Surfaced today after a manual `gh workflow run cd-macmini.yml -f
service=mana-ai` — mana-ai had never been deployed because no commit
since the CD pipeline started had touched its path. The first real
deploy hit the missing-COPY immediately.
Header comment in the Dockerfile now spells out direct + transitive
workspace deps so future additions don't drift.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First-probe DNS+TLS handshake against Nominatim can take >5s on a
cold start (verified locally: 642ms warm, sometimes 5-8s cold). The
old 5s default false-marked Nominatim unhealthy and the 30s health-
cache then locked us into a fallback-of-fallback gap. 8s gives
enough headroom for cold-start while still cutting off actually-
stuck connections.
Photon and Pelias don't hit this — Photon's CDN is consistently
sub-second and Pelias is on localhost / LAN. Only the public
Nominatim path warranted the bump, but the timeout is shared across
all providers, so it's adjusted globally.
Existing PROVIDER_TIMEOUT_MS env override still wins.
Three problems addressed:
1. **Icon unification**: every feedback affordance now carries the
   phosphor `heart-half` icon (previously a Lightbulb/mixed set).
   Changed in the PillNav user menu, the ModuleShell header
   (FeedbackHook), and the phosphor icon map. One place, one icon;
   recognition goes up.
2. **Inline instead of modal in workbench cards**: AppPage.svelte now
   renders the feedback form in the same slot as the help page;
   clicking the heart-half icon toggles the inline panel instead of
   laying a modal backdrop over the whole workbench. Help and feedback
   are mutually exclusive (one closes as soon as the other opens).
3. **Form body extracted**: FeedbackForm.svelte now contains the form
   without any chrome. FeedbackQuickModal uses it in modal mode
   (standalone routes, PillNav), AppPage in inline mode. One source,
   both surfaces stay in sync.
ModuleShell additionally accepts `onFeedback`/`feedbackOpen` props:
when set, the FeedbackHook component calls onClick instead of opening
its own modal; the host (AppPage) takes over the rendering.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The CI workflow accumulated 16 build jobs for apps/services that no
longer exist after the 2026-04 unification (chat, todo, calendar, clock,
contacts, presi, storage, food, skilltree, telegram-stats-bot — all of
their /apps/web and /apps/backend dirs are gone) plus a structurally
broken `build-mana-web` (its Dockerfile starts FROM `sveltekit-base:local`,
an image only the Mac Mini self-hosted runner has). Every push has been
producing red CI runs from these dead jobs while the real production
deploys (cd-macmini.yml) succeeded.
Removed:
- detect-changes per-product outputs + force-build-all branches
- 220 lines of dead per-product detect-logic shell
- 19 lines of per-product summary block
- build-mana-web (broken; CD on Mac Mini covers prod)
- build-{chat,todo,calendar,clock,contacts,presi,storage,food,skilltree}-{web,backend}
- build-telegram-stats-bot
Kept (still build cleanly on ubuntu-latest):
- build-mana-{auth,search,sync,notify,api-gateway,crawler,media,credits}
- validate (PRs)
- auth-integration (PRs)
CI workflow shrank 1290 → 583 lines. The header comment now spells out
which services are in/out of CI and why.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mana-geocoding now tries Pelias first, falls back to public Photon
(komoot.io) and finally to public Nominatim (OSM) when Pelias is
unhealthy or unreachable. The Places module's address lookup keeps
working even when the Pelias container is stopped — which it currently
is on the Mac mini, freeing 3 GB of RAM until Pelias gets migrated to
the GPU server.
Architecture:
ProviderChain ─ tries providers in priority order, stops on first
success. A clean empty-results answer is definitive
(don't burn through public-API budget on a query that
legitimately has no match). Only network errors / 5xx
/ 429 trigger fallthrough.
HealthCache ─ per-provider, 30s TTL. A failed health probe or a
failed search marks the provider unhealthy and skips
it for the rest of the cache window. Lazy refresh —
no background pinger.
RateLimiter ─ single-token FIFO queue, 1100ms gap by default.
Used to enforce Nominatim's 1 req/sec policy. Handles
abort during inter-task wait by releasing the busy
flag so later tasks aren't blocked.
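The limiter in sketch form (single queue, fixed gap; the abort
plumbing described above is elided):

    class RateLimiter {
      private lastStart = 0
      private queue: Promise<unknown> = Promise.resolve()
      constructor(private readonly intervalMs = 1100) {}

      schedule<T>(task: () => Promise<T>): Promise<T> {
        const run = this.queue.then(async () => {
          const wait = this.lastStart + this.intervalMs - Date.now()
          if (wait > 0) await new Promise((r) => setTimeout(r, wait))
          this.lastStart = Date.now()
          return task()
        })
        // a rejected task must not wedge the queue
        this.queue = run.catch(() => {})
        return run
      }
    }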
Provider details:
pelias — primary, self-hosted DACH index, full OSM taxonomy in
`peliasCategories`, no rate limit
photon — public komoot endpoint, GeoJSON shape, raw `osm_key:
osm_value` mapped via lib/osm-category-map.ts. Faster
than Nominatim, no advertised rate limit but be polite.
nominatim — public OSM endpoint, strict 1 req/sec via the limiter,
custom User-Agent required (otherwise 403). Last
resort — fallback for when Photon is also down.
Response shape changes (additive only — existing callers keep
working):
- results[].provider: 'pelias' | 'photon' | 'nominatim'
- results[].peliasCategories: only present when Pelias served the
request (was already absent on Pelias-API patch failures)
- top-level provider: <name> + tried: <name[]> on success/error
- new endpoint: GET /health/providers — per-provider snapshot
Configuration via env (defaults shipped):
GEOCODING_PROVIDERS=pelias,photon,nominatim # order matters
PROVIDER_TIMEOUT_MS=5000
PROVIDER_HEALTH_CACHE_MS=30000
PHOTON_API_URL=https://photon.komoot.io
NOMINATIM_API_URL=https://nominatim.openstreetmap.org
NOMINATIM_USER_AGENT=mana-geocoding/1.0 (+https://mana.how; ...)
NOMINATIM_INTERVAL_MS=1100
Testing: 115 tests green (was 42). New coverage:
- osm-category-map.test.ts (47 cases over food/transit/shopping/
leisure/work/other priority resolution)
- rate-limiter.test.ts (FIFO, abort-during-wait, abort-during-sleep)
- chain.test.ts (failover, empty-results-stops, health-cache,
snapshot)
- photon-normalizer.test.ts and nominatim-normalizer.test.ts (lock
the wire-format mapping for both fallback providers)
Live smoke against public Photon verified — both /search and /reverse
return correctly normalized results with provider="photon" when Pelias
is unreachable.
The submit handler passed the body 1:1 to
feedbackService.createFeedback. Since CreateFeedbackInput does not
include appId (the client sends it as the X-App-Id header), every
INSERT failed with "null value in column app_id violates not-null
constraint".
Also: added the lightbulb icon to the phosphor icon map; without it
the "Idee teilen" entry in the barMode variant of the user menu shows
no icon (label only).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replaces the floating "Idee?" pill with an entry in the right-hand
user menu (Profile / Credits / Share idea / Logout). One affordance in
one place instead of two side by side.
- PillNavigation: new onFeedback prop + Lightbulb icon. When set, the
  entry replaces the legacy /feedback link in accountLinks and
  additionally shows up at the top of the userMenuBarItems (barMode).
- UserMenuPanel: AccountLink now accepts onClick? as an alternative to
  href?; action chips close the panel right after the click.
- (app)/+layout: GlobalFeedbackPill mount removed; FeedbackQuickModal
  is rendered state-bound (moduleContext derived from path/?app= as in
  the old pill).
- GlobalFeedbackPill.svelte deleted; nothing references it anymore.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. /api/auth/organization/get-active-member 400
The Better-Auth org plugin returns 400 ("active organization not
found") whenever the session has no activeOrganizationId yet — i.e.
on every fresh inkognito login. The fetch was already tolerated
(fetchActiveMember returns null on 400), but the network panel
logged it as a noisy red row.
Fix: gate the call on the localStorage hint. The hint is set by
writeActiveSpaceHint() after every successful set-active, so its
presence is a reliable proxy for "session has activeOrganizationId
set". Without a hint we go straight to list + auto-activate
Personal — same effective outcome, no 400.
2. Chrome "Autofocus processing was blocked" on /onboarding/name and
/onboarding/wish
The static `autofocus` attribute races the previous route's focus
owner across the SvelteKit transition. Chrome refuses to honour
autofocus when a document already has a focused element and warns.
Fix: replace the attribute with `bind:this={el}` + a $effect that
imperatively `el.focus()`s after `tick()` — by then the outgoing
page has unmounted and there's no competing focus claim. The
svelte-ignore directives are no longer needed and have been removed.
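The replacement pattern, sketched (Svelte 5 runes; element name
assumed):

    <script lang="ts">
      import { tick } from 'svelte'
      let el: HTMLInputElement | undefined
      $effect(() => {
        // by the time tick() resolves, the outgoing page has
        // unmounted, so no competing focus claim remains
        tick().then(() => el?.focus())
      })
    </script>

    <input bind:this={el} />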
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mac Mini was running at 99% memory pressure with 8.6 GB swap active —
load was OK but every cold-container request was paying disk-I/O for
swapped pages. Container observations:
redis 190/192 MB (99 %) — close to OOM, hot keys evicting
victoria 227/256 MB (89 %) — constant GC pressure
glitchtip 232/256 MB (91 %)
umami 223/256 MB (87 %)
Each bumped to 384 MB, total +512 MB reservation in the Colima VM.
Headroom for that comes from stopping the Pelias stack (~3 GB freed)
in the same change-window.
Redis additionally gets `--maxmemory 320mb --maxmemory-policy allkeys-lru`
so the daemon evicts its own LRU keys at ~80 % of mem_limit instead of
letting the kernel OOM-kill the whole container. Safe for our usage —
Redis only holds rate-limit counters + sync hot-paths, no critical state.
Pelias stays stopped pending a migration to mana-gpu; mana-geocoding
will need a Nominatim fallback before the migration so the Places
module's address lookup keeps working.
All four were pre-existing; the audit smoke-test made them visible. Fixed
together because they share a "boot console-warn cleanup" theme.
1. streaks ensureSeeded race (DexieError2 ×2)
- Two boot-time liveQuery callers passed the `count > 0` check before
either had written, then the second's `.add()` hit a ConstraintError.
- Fix: cache the seed promise per module, run the existence check +
bulkAdd inside one Dexie RW transaction, and only insert MISSING
defs (preserves existing currentStreak/longestStreak counts).
2. encryptRecord('agents', …) "wrong table name?" warning
- The DEV-only check fired whenever a record carried none of the
registered encrypted fields, regardless of whether anything could
actually leak. `ensureDefaultAgent` writes a fresh agent row before
`systemPrompt` / `memory` exist — pure noise.
- Fix: drop the "no fields at all" branch. Keep the case-mismatch
branch (the branch that actually catches silent plaintext leaks).
3. Passkey signInWithPasskey "Cannot read properties of undefined
(reading 'allowCredentials')"
- Client destructured `{ options, challengeId }` from the server's
options response, but Better-Auth's `@better-auth/passkey` plugin
returns the raw PublicKeyCredentialRequestOptionsJSON (no
envelope) and tracks the challenge in a signed cookie. Both
`options` and `challengeId` came back undefined; SimpleWebAuthn
blew up the moment it tried to read the request shape. Verify body
`{ challengeId, credential }` was likewise wrong — Better-Auth
wants `{ response }`.
- Fix: align both register and authenticate flows with Better-Auth's
native shape on options + verify, and add `credentials: 'include'`
on every fetch so the challenge cookie actually round-trips.
Server's verify proxy now reads `parsed?.response?.id` for
credentialID rate-limiting.
4. /api/v1/me/onboarding/ → 404
- Hono's nested router (`app.route(prefix, sub)` + inner
`app.get('/')`) matches the prefix-without-slash form only. The
onboarding-status store sent the request with a trailing slash, so
every login produced a 404 + a console warn.
- Fix: client sends the path without trailing slash; mana-auth picks
up `hono/trailing-slash` middleware as defense-in-depth so a future
accidental trailing slash on any /me/* route 301-redirects instead
of 404-ing.
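The defense-in-depth half of (4), sketched:

    import { Hono } from 'hono'
    import { trimTrailingSlash } from 'hono/trailing-slash'

    const app = new Hono()
    // /api/v1/me/onboarding/ now redirects to /api/v1/me/onboarding
    // instead of falling through to a 404
    app.use(trimTrailingSlash())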
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Was missing from the setup docs: for local web dev (5173),
mana-analytics must run on 3064, otherwise FeedbackHook + toast poll +
/community throw ERR_CONNECTION_REFUSED. A convenience script + a note
in the test checklist prevent the stumble.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- comic/components/CharacterPicker: route through comic.picker.* with
HTML interpolation for the no-face/empty-garment alerts
- comic/views/DetailCharacterView: route through comic.character_detail.*
+ dynamic comic.styles.<id>; drops unused STYLE_LABELS import
- quiz/PlayView: route through quiz.play_view.* (back/empty/result/play
all consolidated)
Baseline 869 → 851 (-18).