Pelias was retired from the Mac mini on 2026-04-28; photon-self
(self-hosted Photon on mana-gpu) has been the live primary since then.
This removes the now-dead Pelias adapter, config, tests, and the
services/mana-geocoding/pelias/ stack — the entire compose file, the
geojsonify_place_details.js patch, the setup.sh import script.
Provider chain is now `photon-self → photon → nominatim`. The chain
keeps its `privacy: 'local' | 'public'` split, sensitive-query
blocking, coord quantization, and aggressive caching unchanged.
Of the three direct calls to nominatim.openstreetmap.org that bypassed
mana-geocoding, two now route through the wrapper and the third is
removed:
- citycorners/add-city + citycorners/cities/[slug]/add use the shared
searchAddress() client (browser → same-origin proxy → mana-geocoding
→ photon-self).
- memoro mobile drops its OSM reverse-geocoding fallback entirely;
Expo's on-device reverse-geocoding stays as the sole path. Routing
through the wrapper would require a memoro-server proxy endpoint —
a follow-up if Expo's quality proves insufficient.
Other behavioral changes:
- CACHE_PUBLIC_TTL_MS dropped from 7d → 1h. The long TTL was a
privacy-amplification trick from the Pelias era; with photon-self
serving the bulk of traffic, a transient cross-LAN blip was pinning
cached fallback answers for days. 1h gives quick recovery.
- /health/pelias renamed to /health/photon-self; prometheus blackbox
config + status-page generator updated.
- mana-geocoding container no longer needs `extra_hosts:
host.docker.internal:host-gateway` (was only there for the
Pelias-on-host-network era).
113 tests passing. CLAUDE.md rewritten to reflect the post-Pelias
architecture.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Colima starts its Linux VM with no swap configured. Without swap the
kernel responds to memory pressure by invoking the OOM-killer instead
of paging out cold pages — meaning a transient peak (mana-web Vite
build with 8 GiB heap landing on top of the running container set)
takes down a container instead of just stalling for a few seconds.
The 2026-04-28 Mac Mini RAM audit found:
- VM allocated: 12 GiB (1 GiB kernel overhead → 11 GiB user)
- Container RSS: ~4 GiB pinned
- Available headroom: ~7.6 GiB
- mana-web Vite peak: ~8 GiB
That's 400 MiB over the limit during builds, which is why we previously
needed the build-memory-headroom.sh wrapper to pause monitoring (frees
~700 MiB temporarily). Swap is the safer second backstop — Linux only
swaps under actual pressure (used=0 right after creation, confirmed
via `free -h`), and the kernel can fall back to paging cold container
memory to give a build the burst it needs without killing anything.
The new step in migrate-to-colima.sh:
- creates /swap (2 GiB, root-only)
- mkswap + swapon
- persists in /etc/fstab so the VM remounts it on every restart
- idempotent — re-runs are no-ops
Already provisioned on the live VM via:
ssh mana-server 'colima ssh -- "sudo fallocate -l 2G /swap && \
sudo chmod 600 /swap && sudo mkswap /swap && sudo swapon /swap"'
Verified: free -h shows Swap: 2.0Gi total / 0B used. Currently dormant.
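The idempotency of the migrate-to-colima.sh step comes down to guarding each action on its current state; the fstab part can be sketched like this (a minimal sketch — the real step also runs fallocate/mkswap/swapon, which need root, and the `/swap none swap sw 0 0` entry is an assumed fstab line, not quoted from the script):

```shell
#!/bin/sh
# ensure_fstab_entry FSTAB_FILE ENTRY
# Appends ENTRY to FSTAB_FILE only if an identical line is not already
# present, so re-running the migration step is a no-op.
ensure_fstab_entry() {
  fstab="$1"; entry="$2"
  grep -qxF "$entry" "$fstab" 2>/dev/null && return 0
  printf '%s\n' "$entry" >> "$fstab"
}

# Hypothetical usage mirroring the real step:
#   ensure_fstab_entry /etc/fstab '/swap none swap sw 0 0'
```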
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous version of the cleanup trap only deleted SQL files left
by drizzle-kit's probe-generate, but not the matching `_snapshot.json`
(56 KB per service) or the journal entry. Each deploy then leaked one
snapshot file into the runner's working tree.
Surfaced after my own local smoke-test: ran the script against
mana-auth, found a 56 KB `drizzle/meta/0000_snapshot.json` left
behind that I had to clean up manually.
The trap now:
- Computes the full set of files added under `drizzle/` during this
run (not just SQL) and removes every one of them.
- Strips the probe's journal entry via jq.
- If the `drizzle/` dir didn't exist before the run, removes it
entirely. Otherwise sweeps empty meta/ subdirs the run created.
Smoke-tested locally: working tree is clean after each run regardless
of whether drizzle/ existed beforehand.
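The file-diff part of the trap can be sketched like this (a simplified sketch under stated assumptions: the jq journal-entry strip is omitted, and the empty-dir sweep here prunes any empty dir rather than only ones the run created):

```shell
#!/bin/sh
# Snapshot the drizzle/ tree before the probe, then remove anything
# the probe added — SQL, snapshots, and journal alike — and prune
# directories left empty.
snapshot_tree() {  # snapshot_tree DIR  (prints a sorted file listing)
  [ -d "$1" ] && find "$1" -type f | sort
}

cleanup_probe_artifacts() {  # cleanup_probe_artifacts DIR BEFORE_LISTING
  dir="$1"; before="$2"
  [ -d "$dir" ] || return 0
  # Lines unique to the "after" listing are files the probe created.
  find "$dir" -type f | sort | comm -13 "$before" - |
    while IFS= read -r f; do rm -f "$f"; done
  # Depth-first so emptied meta/ dirs (and drizzle/ itself, when it
  # didn't exist before the run) get removed too.
  find "$dir" -depth -type d -empty -exec rmdir {} \; 2>/dev/null || true
}
```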
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three follow-up fixes after the first migration-step deploy revealed
gaps:
1. `pnpm dlx drizzle-kit` doesn't work — the drizzle.config.ts file
itself does `import { defineConfig } from 'drizzle-kit'`, and
Node's resolver only finds that import via local node_modules,
not pnpm's dlx cache. Reverted to plain `pnpm exec drizzle-kit`
and require the workspace to be installed.
2. CD now runs `pnpm install --filter ./services/<svc>...
--frozen-lockfile --ignore-scripts` once at the start of the
migration step for every Drizzle service in the deploy. Path-based
filter (not name-based) because our service package names follow no
uniform convention (`@mana/auth` vs `@mana/credits-service` vs
`@mana/events`). pnpm's lockfile cache makes second-and-later
runs near-instant.
3. Dropped the `--silent` flag from `pnpm exec drizzle-kit --version`
— it isn't a recognised pnpm-exec flag and causes a 254 exit code,
making the script's "is drizzle-kit available?" probe always fail.
Smoke-tested locally — script now runs cleanly against mana-auth's
schema, reports "no changes detected", cleans up the probe SQL file.
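The path-based install can be sketched as a command builder (a sketch only — the service-directory names and the surrounding driver loop are assumptions, not quoted from the CD config):

```shell
#!/bin/sh
# Build the per-service install command. A path-based --filter selects
# the service regardless of its package name, which matters here
# because package names follow no uniform convention.
install_cmd() {  # install_cmd SERVICE_DIR_NAME
  printf 'pnpm install --filter ./services/%s... --frozen-lockfile --ignore-scripts' "$1"
}

# Hypothetical driver loop, run once per Drizzle service in the deploy:
#   for svc in $CHANGED_DRIZZLE_SERVICES; do
#     sh -c "$(install_cmd "$svc")"
#   done
```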
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Mac Mini runner doesn't run `pnpm install` (every service builds
inside Docker), so per-service node_modules/.bin/drizzle-kit isn't
present. The first deploy with the new migration step printed
`ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL Command "drizzle-kit" not found`
and silently treated every service as "no schema changes — clean".
Pick the invocation mode at runtime: `pnpm exec drizzle-kit` if a local
binary exists, otherwise `pnpm dlx drizzle-kit`. dlx caches the package
in the global pnpm store after the first fetch, so subsequent calls
are fast. drizzle-kit reads its config from cwd, so it still picks up
each service's drizzle.config.ts correctly.
Smoke-tested locally against services/mana-auth — script reports
"no schema changes — clean" instead of failing silently.
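The runtime pick can be sketched like this (a sketch of the logic as this commit describes it — note the follow-up commit above later reverted the dlx path because drizzle.config.ts imports from 'drizzle-kit'):

```shell
#!/bin/sh
# Decide how to invoke drizzle-kit for a service: prefer the locally
# installed binary, fall back to pnpm dlx (cached in the global store
# after the first fetch).
drizzle_kit_cmd() {  # drizzle_kit_cmd SERVICE_DIR
  if [ -x "$1/node_modules/.bin/drizzle-kit" ]; then
    echo "pnpm exec drizzle-kit"
  else
    echo "pnpm dlx drizzle-kit"
  fi
}
```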
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The mana-geocoding wrapper now returns `notice: 'fallback_used' |
'sensitive_local_unavailable'` alongside results so the UI can show
the user *why* a query had unusual behavior. This commit wires that
all the way through the Places module's address-autocomplete inputs.
Geocoding client (lib/geocoding/index.ts):
- Add `GeocodingNotice` and `SearchOutcome` types
- Add `searchAddressDetailed` and `reverseGeocodeDetailed` — same
semantics as the existing functions but return the wrapper's
provider/notice metadata. Existing `searchAddress`/`reverseGeocode`
stay backward-compatible (they call the detailed variants under
the hood and discard the metadata).
- Extend GeocodingResult with optional `provider` field.
Places ListView (the only current consumer that exposes typed
addresses to users):
- Both autocomplete inputs (tracking-edit + main address-search)
now use searchAddressDetailed and surface notices inline.
- 'sensitive_local_unavailable' renders an amber explainer block in
the dropdown — title + body — so the user knows why their medical
query returned 0 hits without leaking the search to a public API.
- 'fallback_used' renders a small "≈ ungefähr" ("approximately")
footer badge so users
understand the result came from public OSM (less precise but
still valid).
- The dropdown opens when EITHER results exist OR a notice is
present — sensitive blocked queries with empty results still
surface their explainer.
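The notice handling above reduces to a small branch; a shell sketch of the logic only (the actual implementation is Svelte, and the treatment names here are descriptive labels, not real identifiers):

```shell
#!/bin/sh
# Map the wrapper's notice to the UI treatment the ListView renders.
notice_ui() {  # notice_ui NOTICE
  case "$1" in
    sensitive_local_unavailable) echo "amber-explainer" ;;
    fallback_used)               echo "approx-badge" ;;
    "")                          echo "none" ;;
  esac
}

# The dropdown opens when results exist OR a notice is present, so a
# sensitive blocked query with 0 hits still shows its explainer.
dropdown_open() {  # dropdown_open RESULT_COUNT NOTICE
  [ "$1" -gt 0 ] || [ -n "$2" ]
}
```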
i18n: new `places.geocoding_notice.*` sub-namespace in all 5 locales
(de/en/es/fr/it) — 4 strings each. All validators green.
Other consumers (places DetailView, events, photos, contacts) keep
the existing searchAddress/reverseGeocode calls — they don't need
the privacy notices today and would just add noise. They can adopt
the detailed variant if/when the use case warrants it.
Two CD-pipeline ergonomics fixes that surfaced during the 2026-04-28
schema-drift sweep.
(C) Auto-apply additive Drizzle migrations
========================================
8 services use Drizzle (mana-auth/-credits/-events/-research/-mail/
-subscriptions/-user/-analytics) but the CD pipeline never ran their
`db:push` script, so 4 schema additions stayed undeployed for days
(auth.users.kind, credits.{sync_subscriptions,reservations},
event_discovery.*) until live PostgresErrors surfaced them.
New `scripts/mac-mini/safe-db-push.sh`:
- Uses `drizzle-kit generate` to write a probe SQL file (does NOT
apply yet).
- Greps the generated SQL for destructive patterns (DROP TABLE/
COLUMN/TYPE/SCHEMA/INDEX, ALTER COLUMN ... TYPE, RENAME).
- Refuses to auto-apply if any are found — operator must review and
run `pnpm db:push --force` manually after pg_dump.
- Otherwise applies via `drizzle-kit push --force` and cleans up the
probe artifacts.
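The destructive-pattern check can be sketched like this (the pattern list is reconstructed from the description above; the real script's regexes may differ):

```shell
#!/bin/sh
# Read generated probe SQL on stdin; exit 0 (destructive) if it
# contains any pattern we refuse to auto-apply.
is_destructive_sql() {
  grep -Eiq 'DROP[[:space:]]+(TABLE|COLUMN|TYPE|SCHEMA|INDEX)|ALTER[[:space:]]+COLUMN.*TYPE|RENAME'
}

# Hypothetical gate in the push flow:
#   if is_destructive_sql < "$probe_sql"; then
#     echo "destructive change — review and run pnpm db:push --force" >&2
#     exit 1
#   fi
```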
CD step "Apply schema migrations" runs between build and container
restart, sourcing each changed service's DATABASE_URL from compose
config (with @postgres → @localhost rewrite for the host runner).
Failure aborts the deploy before the new container starts — the old
container keeps running with the old schema it still matches.
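The @postgres → @localhost rewrite can be sketched as a one-line substitution (a sketch; the real step sources the URL from compose config first, and the sample URLs are assumptions):

```shell
#!/bin/sh
# Rewrite the compose-internal hostname to localhost so the host
# runner can reach postgres directly. The captured [:/] keeps both
# "with port" and "without port" URL shapes intact.
rewrite_db_url() {  # rewrite_db_url URL
  printf '%s\n' "$1" | sed 's/@postgres\([:/]\)/@localhost\1/'
}
```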
(D) Build-time RAM headroom
========================================
mana-web's Vite build needs 8 GiB of Node heap; Colima's VM is sized
at 12 GiB; ~3.5 GiB of other containers run during deploy. The 2026-
04-28 mana-web deploy OOM'd at the Vite step ("cannot allocate
memory") and only succeeded on retry once concurrent traffic settled.
New `scripts/mac-mini/build-memory-headroom.sh`:
- `start`: stops every container matching `^mana-mon-` (the
observability stack — VictoriaMetrics, Loki, Glitchtip, cAdvisor,
umami, blackbox, exporters). Frees ~700 MiB.
- `stop`: restores them from the snapshot list captured at start.
- `wrap <cmd>`: pause + run + always-resume via trap.
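The always-resume guarantee of `wrap` can be sketched with a subshell EXIT trap (a sketch; `pause_monitoring`/`resume_monitoring` are stand-ins for the real start/stop container logic, not functions from the script):

```shell
#!/bin/sh
# Stand-ins for the real start/stop logic (which stops and restores
# containers matching ^mana-mon-).
pause_monitoring()  { echo "pausing monitoring stack"; }
resume_monitoring() { echo "resuming monitoring stack"; }

# wrap CMD...: pause, run CMD, always resume — the subshell's EXIT
# trap fires even when CMD fails, so monitoring can't stay paused.
wrap() {
  (
    pause_monitoring
    trap resume_monitoring EXIT
    "$@"
  )
}
```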
CD wraps the build loop with start/stop, but only when mana-web is in
the change set — other services build well below 4 GiB and don't
need the headroom. The monitoring stack resumes before the migration
step so cAdvisor + exporters are back online for the deploy-metrics
collection.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- comic/components/CharacterPicker: route through comic.picker.* with
HTML interpolation for the no-face/empty-garment alerts
- comic/views/DetailCharacterView: route through comic.character_detail.*
+ dynamic comic.styles.<id>; drops unused STYLE_LABELS import
- quiz/PlayView: route through quiz.play_view.* (back/empty/result/play
all consolidated)
Baseline 869 → 851 (-18).
- Back button (← Symbole), Gespeichert hint, Zusammenführen…/Löschen actions
- Merge panel: label with {name} interpolation, "– Symbol wählen –" placeholder, OK/Abbrechen
- Empty: "Symbol nicht gefunden."
- Editable header: name placeholder, "Traum"/"Träume" via count_singular/plural
- Color picker: aria with {color} interpolation
- 4 section labels (Meine Bedeutung / Stimmungs-Verteilung / Häufig zusammen mit / Träume mit diesem Symbol) + meaning placeholder
- Mood label routed via $_('dreams.moods.' + mood) with valid-mood guard; "Unbekannt" fallback via symbol_detail.mood_unknown
- Co-occurring chip title with {name} interpolation
- Confirms: delete + merge with {name}/{source}/{target} interpolation
- Dream-ref title fallback via dreams.list_view.untitled
- MOOD_LABELS import dropped (constant kept in types.ts for non-Svelte callers)
Baselines: hardcoded 1074 → 1066 (8 cleared); missing-keys baseline +0 (dreams.moods.* dynamic key already baselined).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The reward chip added in eecf64c1c (community/feedback +5 chip) carries
the brand-term "Mana Credits" as a hardcoded German label. Brand terms
that are deliberately untranslated still count against the baseline, so
the +1 violation in onboarding/wish/+page.svelte was blocking
validate:all.
Bumping the baseline (1111 → 1112) to unblock deploys. Real fix is to
either route the chip through $_() with a brand-noun key or carve a
formal exception list — neither is in scope for this change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>