Found in the pre-deploy audit: my new session.svelte.ts + portal-redirect.ts
read PUBLIC_MANA_AUTH_URL/PUBLIC_AUTH_WEB_URL via $env/dynamic/public. In
production, however, that resolves to the Docker-internal URL
`http://mana-auth:3001`, which the browser cannot reach — the result would
be an endless redirect loop on the first user session.
managarten has already solved this pattern: hooks.server.ts injects
`window.__PUBLIC_*_URL__` from the `_CLIENT`-suffixed env vars (public-
domain values). `lib/data/scope/auth-fetch.authBaseUrl()` is the
canonical helper for this.
- session.svelte.ts: now calls `authBaseUrl()` from auth-fetch.
- portal-redirect.ts: its own window/process lookup for PUBLIC_AUTH_WEB_URL,
  same pattern.
- hooks.server.ts: reads PUBLIC_AUTH_WEB_URL_CLIENT + window injection.
- docker-compose.macmini.yml (mana-app-web): added the PUBLIC_AUTH_WEB_URL +
  PUBLIC_AUTH_WEB_URL_CLIENT env vars.
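The lookup order the helper implements can be sketched like this (a hedged sketch — the global name matches the injection described above; the function name and fallback handling are assumptions, not the repo's actual code):

```typescript
// Sketch of the client-side URL resolution pattern described above.
// Assumption: hooks.server.ts has injected the _CLIENT-suffixed value
// as a global; the env-derived value is only a server-side fallback.
export function resolveAuthWebUrl(fallback: string): string {
  // In the browser, prefer the value injected by hooks.server.ts —
  // it carries the public domain, not the Docker-internal hostname.
  const injected = (globalThis as Record<string, unknown>).__PUBLIC_AUTH_WEB_URL__;
  if (typeof injected === "string" && injected) return injected;
  // On the server (or when injection is missing), fall back to the
  // env-derived value, e.g. from $env/dynamic/public.
  return fallback;
}
```

The key property is that the Docker-internal URL can never leak into a browser redirect as long as the injection runs on every SSR'd page.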
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add a cloudflared ingress for `manawald.mana.how` (port 3090 locally) and
add the origin to the mana-auth container's `CORS_ORIGINS` so SSO
cookie auth works.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds STALWART_RECOVERY_ADMIN to the stalwart service environment so the
admin/ManaMailAdmin2026! credentials survive container restarts. Bootstrap
completed programmatically via JMAP; port 587 STARTTLS listener active.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bring Mac Mini drift into source control — this had been live on the
server since 2026-05-08 but uncommitted (rescued via stash during the
managarten rename).
Cloudflared tunnel:
- verein.mana.how → :3088 (Verein landing page, live since 2026-05-09)
- share.mana.how → :3072 (federation share service, Phase F)
- mcp.mana.how → :3069 (MCP gateway, exposing the tool registry)
- cardecky-api.mana.how → :3191 (port correction, was wrongly :3072)
- cardecky.mana.how → :5181 (port correction, was :5180)
- nutriphi.mana.how → :3087, nutriphi-api.mana.how → :3086
docker-compose.macmini.yml:
- mana-auth CORS_ORIGINS: nutriphi.mana.how + nutriphi-api.mana.how
- New service mana-share (built from ../mana/services/mana-share,
  federation backbone Phase F, port 3072, its own DB tables in
  mana_platform)
- New service mana-mcp (built from ../mana/services/mana-mcp,
  MCP gateway, port 3069)
Both services build from the mana-platform repo (../mana/services/...),
not from managarten — managarten only orchestrates via Compose.
Phase 3 rename of the former multi-app monorepo to a standalone
product repo. The association is called mana e.V., the platform domain
stays mana.how, apps/mana/ stays unchanged — only the repo container
gets the new name "managarten" (garden of the mana apps).
Changed:
- package.json#name + #description
- README.md (title + first paragraph)
- TROUBLESHOOTING.md
- all Mac Mini scripts (paths ~/projects/mana-monorepo → ~/projects/managarten)
- the COMPOSE_PROJECT_NAME default in scripts/mac-mini/status.sh
- .github/workflows/cd-macmini.yml + mirror-to-forgejo.yml
- apps/docs (astro.config.mjs + content)
- .claude/settings.local.json (Bash permission paths)
- all docs/*.md path references
- launchd plists, .env.macmini.example, infrastructure/
The Forgejo repo + GitHub repo have already been renamed via API. The
local directory rename + Mac Mini cutover follow separately.
Stalwart's official Docker image is distroless and has no wget, curl,
nc, ss, or netstat. The compose healthcheck (CMD wget ...) was failing
with "executable file not found" since 2026-05-05; container shows
status=unhealthy 24/7 even though Stalwart itself runs fine on
:25 / :587 / :465 / :993 / :8080.
Disable. Crash detection comes from docker's restart=always plus
mana-monitoring's external SMTP probe (blackbox-exporter), not from
inside the container.
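In compose terms, disabling the check is a one-liner. A sketch of what the service block might look like (service name and image tag are assumptions):

```yaml
services:
  stalwart:
    image: stalwartlabs/mail-server:latest   # distroless: no wget/curl/nc/ss
    restart: always                          # crash detection lives here…
    healthcheck:
      disable: true                          # …not inside the container
```

`healthcheck: disable: true` also overrides any HEALTHCHECK baked into the image, so the container stops reporting unhealthy regardless of what the Dockerfile declared.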
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Part of the 8-Doppel-Cutover (2026-05-08, plan
~/.claude/plans/floating-swinging-flurry.md):
- docker-compose.{macmini,dev,test}.yml: build context for
mana-{auth,credits,media,llm,notify} switched to ../mana/services/...
so the Mac Mini stack pulls platform services from the platform repo
(sibling clone), not from services/ in this monorepo.
- .npmrc + apps/api/{Dockerfile,package.json}: @mana/media-client now
resolved from Verdaccio (npm.mana.how, ^0.1.0) instead of as a
workspace COPY from services/mana-media/packages/client. Build-arg
NPM_TOKEN flows through .npmrc for pnpm install auth. Required
before services/mana-media/ can be deleted.
- .github/workflows/{ci,cd-macmini,daily-tests}.yml: removed the
detect-/build-/test-jobs that targeted services/mana-{auth,credits,
notify,media}/. Those services build out of the platform repo now —
CI for them belongs in mana/-repo (open). cd-macmini's
workflow_dispatch can still rebuild any of them on demand;
auto-detect on path-change is gone for these five.
- scripts/{mac-mini/push-schemas.sh,run-integration-tests.sh}:
rewritten to look in ../mana/ for the platform services.
- package.json dev:{auth,credits,notify,media}: paths point at
../mana/services/... so local dev still works post-cutover.
What this commit does NOT do: delete services/mana-{auth,credits,...}
from this repo. That waits for Phase 7 once the Mac Mini stack has
booted cleanly from the new build paths.
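The Verdaccio auth flow mentioned for @mana/media-client typically looks like this in .npmrc (a sketch — registry host from above; the token is a build-arg placeholder):

```ini
; .npmrc sketch — scope @mana to the private Verdaccio registry and
; authenticate with a token passed in as a build-arg / env var.
@mana:registry=https://npm.mana.how/
//npm.mana.how/:_authToken=${NPM_TOKEN}
```

pnpm expands `${NPM_TOKEN}` from the environment at install time, which is why the Dockerfile only needs to pass the build-arg through as an env var before `pnpm install`.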
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the two zitare hostnames to PRODUCTION_TRUSTED_ORIGINS in
sso-origins.ts and to the mana-auth CORS_ORIGINS in
docker-compose.macmini.yml. Pre-condition for the first Zitare
live-cut on the Mac Mini — the running mana-auth container must
be rebuilt for the new TRUSTED_ORIGINS list to take effect (see
zitare/DEPLOY.md, step 3).
sso-config.spec.ts asserts symmetry between sso-origins.ts and
the CORS_ORIGINS env in compose. Test runs 8/8 green after this
change.
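The symmetry assertion reduces to a two-way set difference between the two SSOTs. A sketch (helper name is hypothetical; the real spec presumably parses the compose file itself):

```typescript
// Sketch: assert that every trusted origin appears in the compose
// CORS_ORIGINS list and vice versa. An empty result means the two
// sources of truth are in sync.
export function diffOrigins(trusted: string[], corsEnv: string): string[] {
  const cors = new Set(corsEnv.split(",").map((o) => o.trim()).filter(Boolean));
  const problems: string[] = [];
  for (const origin of trusted) {
    if (!cors.has(origin)) problems.push(`compose missing: ${origin}`);
  }
  for (const origin of cors) {
    if (!trusted.includes(origin)) problems.push(`sso-origins missing: ${origin}`);
  }
  return problems;
}
```

A test like this is what catches drift such as the whopxl entry mentioned in a later commit.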
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Memoro's SvelteKit SPA at memoro-app.mana.how is a separate deploy
under mana e.V. that needs to use the central mana-auth (login,
session, JWT). Without this entry Better-Auth rejects its preflight
silently (no Access-Control-Allow-Origin header) and the SPA can't
even reach POST /api/v1/auth/login.
Updates both SSOTs per the rule in CLAUDE.md / mana-auth/CLAUDE.md:
1. PRODUCTION_TRUSTED_ORIGINS in services/mana-auth/src/auth/sso-origins.ts
2. CORS_ORIGINS for mana-auth in docker-compose.macmini.yml
sso-config.spec.ts will pick up the consistency between the two.
Moved the web-research orchestrator (16+ search/LLM providers) to the
GPU box. Cross-LAN to mana-auth/mana-credits/mana-llm/mana-search/
postgres/redis (192.168.178.131). research.mana.how now routes to the
mana-gpu-server tunnel (CF config v29). Mini container count 42 → 41.
PUBLIC_MANA_RESEARCH_URL in mana-app-web switched to the https URL —
Mini containers cannot reach 192.168.178.11 directly (Colima NAT), so
this uses a cross-LAN bridge via Cloudflare tunnel, as with mana-ai.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
End-to-end "publish my local deck to the marketplace" surface in
the Cards standalone app. Hooks into cards-server (Phase β) so a
user can take a deck they've been editing locally and put it under
cards.mana.how/d/<slug> with one modal.
Pipeline:
• lib/api/cards-api.ts — typed fetch wrapper around the cards-server
/v1 surface. Reads the JWT from authStore, never from storage
directly. CardsApiError carries `{status, message, details}`
so UI can branch on 401/409/etc.
• lib/stores/author.svelte.ts — lazy-loaded author state. Caches
`cardsApi.authors.me()` on first access; resets cleanly on logout.
• lib/util/slug.ts — best-effort slugify mirror of the server-side
validator (server still has final say).
• lib/components/PublishDeckModal.svelte — three-stage flow:
become-author (slug + displayName + pseudonym), deck-meta (title,
description, language, license picker, semver, changelog), then
publishing → done with moderation-flag surface if AI mod returned
'flag'. Keys off authorStore.isAuthor to skip stage 1 for
returning authors.
• routes/decks/[id]/+page.svelte gets a "🌍 Veröffentlichen" button
next to "Lernen". Disabled until the deck has cards.
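The slugify mirror in lib/util/slug.ts could look roughly like this (a sketch — the exact normalization rules and length cap are assumptions; as noted above, the server-side validator keeps final say):

```typescript
// Best-effort client-side slugify mirroring a typical server validator:
// lowercase, ASCII-ish, hyphen-separated, no leading/trailing hyphens.
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .normalize("NFKD")                 // split accented chars
    .replace(/[\u0300-\u036f]/g, "")   // drop combining marks
    .replace(/[^a-z0-9]+/g, "-")       // collapse everything else to hyphens
    .replace(/^-+|-+$/g, "")           // trim edge hyphens
    .slice(0, 64);                     // assumed length cap
}
```

Mirroring the rules client-side gives instant feedback in the modal while the server rejection path stays the source of truth for edge cases.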
Wiring:
• hooks.server.ts injects __PUBLIC_CARDS_API_URL__ on every SSR'd
page so the client knows where cards-server lives.
• compose adds PUBLIC_CARDS_API_URL_CLIENT=https://cards-api.mana.how
to the cards-web container.
Validated: svelte-check 0/0, vite build green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2f-3 (final of the 2f-trio). The background tick-loop runner is
the most coupled of the three: it queries mana-api, mana-llm, and
mana-research, and writes through to the mana_sync DB. Wired up via
cross-LAN host-IPs to those Mini-side services + the existing RSA
key-pair for Mission-Grant decryption (MANA_AI_PRIVATE_KEY_PEM moved
into /srv/mana/.env on the GPU-Box; the matching MANA_AI_PUBLIC_KEY_PEM
stays on mana-auth's env-set as before).
Bonus rationale: AI Mission Runner now sits in the same compose
network as the GPU-Box's gpu-llm/gpu-ollama tasks, so future
"agent talks to local LLM" paths skip the Cloudflare round-trip.
Tunnel: mana-ai.mana.how repointed at the mana-gpu-server tunnel
(config v28). The Mini-side ingress was removed in the same step.
OTEL_EXPORTER_OTLP_ENDPOINT cleared since Tempo was retired in 2c.
Mini-side: container stopped + removed from docker-compose.macmini.yml.
Running count went from 39 → 42 because of unrelated services that
re-appeared on the latest CD pull (cards-server, memoro-web), but the
actual mana-ai service is gone — net move accomplished.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2f-2. RSS/Atom ingester (15-min tick → mana_platform.news.curated_articles)
moved to the GPU box. The service has zero hot-path coupling; all writes
go cross-LAN to the Mini's postgres, analogous to the Glitchtip pattern.
Two implementation gotchas worth recording:
1. cross-arch image transfer doesn't work. Saved news-ingester:local
from the Mini (Apple M4 → linux/arm64), tried `docker load` on the
GPU-Box (linux/amd64) and got 'exec format error' on every restart.
Native build on the GPU-Box was the only path forward.
2. The original services/news-ingester/Dockerfile assumes
pnpm-workspace state from prior builds (no COPY for packages/shared-rss
in the build context). Fresh builds error with
ERR_PNPM_WORKSPACE_PKG_NOT_FOUND.
Workaround: a GPU-Box-specific Dockerfile at infrastructure/news-ingester/
that vendors shared-rss into the build via a workspace:* → file:ref
sed swap. Build context is the repo root (sparse-clone provides
packages/shared-rss + services/news-ingester). The Mini-side Dockerfile
stays untouched so existing CD builds aren't disturbed.
Mini-side: container stopped + removed from docker-compose.macmini.yml,
running container count 44 → 39.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires cards-server into the Mac-mini stack so we can deploy alongside
the rest of the Mana services.
- Dockerfile mirrors the mana-credits 2-stage pattern (node+pnpm
installer → bun runtime), exposes :3072, includes a /health
healthcheck.
- docker-compose.macmini.yml: new cards-server block right after
mana-credits — depends on postgres + mana-auth, 128m mem, all the
env knobs from the Phase-α config (author payout BPS, community-
verified thresholds, sibling-service URLs).
- cloudflared-config.yml: cards-api.mana.how → :3072. Distinct from
cards.mana.how (the user-facing PWA) so the API surface is clearly
separated.
- sso-origins.ts: cards-api.mana.how added to PRODUCTION_TRUSTED_ORIGINS.
- mana-auth CORS_ORIGINS in compose: cards-api.mana.how added.
Restored whopxl.mana.how that had drifted out — sso-config.spec.ts
had been flagging it but the missing entry surfaced when I added
cards-api. spec is back to 8/8 green.
Deploy plan (next steps, not in this commit):
1. ./scripts/mac-mini/build-app.sh cards-server
2. docker exec mana-app-cards-server bun run db:push (creates the
`cards` schema + 16 tables in mana_platform)
3. ./scripts/mac-mini/sync-tunnel-config.sh
4. Smoke: curl https://cards-api.mana.how/health → 200
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mana-sync's billing middleware short-circuited every push/pull with
402 for users without a sync subscription. Cards promises free Sync
in its Phase-1 GUIDELINES, so it shouldn't gate its own users on a
mana-credits subscription it never sells.
Implementation:
• billing.NewChecker now takes an exemptApps slice. The middleware
extracts {appId} from the URL path and short-circuits before the
user lookup if the app is in the set.
• Configurable via the BILLING_EXEMPT_APPS env var (comma-separated).
• Set BILLING_EXEMPT_APPS=cards on the mana-sync container so the
cards.mana.how Sync loop stops 402-ing.
• Tests cover the exemption + the empty/whitespace edge cases. All
other apps keep the original behaviour (fail-open if mana-credits
is unreachable, 402 if it explicitly says inactive).
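The exemption parsing and check can be sketched as follows (the real middleware is on mana-sync and takes a Go slice; the names here are illustrative):

```typescript
// Sketch of the BILLING_EXEMPT_APPS handling described above.
// Parsing tolerates empty entries and stray whitespace; an app in the
// set short-circuits the billing check before any user lookup happens.
export function parseExemptApps(env: string | undefined): Set<string> {
  return new Set(
    (env ?? "")
      .split(",")
      .map((a) => a.trim())
      .filter((a) => a.length > 0),
  );
}

export function isBillingExempt(appId: string, exempt: Set<string>): boolean {
  return exempt.has(appId);
}
```

With `BILLING_EXEMPT_APPS=cards`, a push/pull for `{appId: "cards"}` never reaches the subscription lookup, so the 402 path is unreachable for that app.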
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two cleanups against the status-page DOWN list:
photon-self (photon.mana.how route):
mana-geocoding's /health/photon-self pings the photon backend, which
lives as a Docker container on the GPU-Box (port 2322). PHOTON_SELF_API_URL
was http://192.168.178.11:2322 — Mini-host can hit that fine but
Mini-Docker-containers can't (Colima-NAT-quirk we keep running into).
Routed photon through the mana-gpu-server tunnel (config v26) and
flipped the env var to https://photon.mana.how. Probe goes UP, geocoding
for sensitive queries (privacy:'local' provider tier) actually works
now too — was effectively orphaned before.
whopxl removed everywhere it still lingered:
Container hasn't existed on the Mini in months (no compose service,
no source dir under apps/, no listener on :5100 — only the dead
cloudflared route + a stale CORS_ORIGINS entry on mana-auth). Cleaned
cloudflared-config.yml, prometheus.yml blackbox-web target, and the
mana-auth CORS list. Old DNS CNAME for whopxl.mana.how stays for now;
no harm.
Plus while we were here: who-api.mana.how/api/decks was the right probe
for who-server's deck catalogue (root /api/decks lives on who-api, not
who.mana.how which is the SSR shell).
Live: status.mana.how shows 58/59 UP; the last 'whopxl' entry will
fall off after VM's TSDB rolls past the probe_success staleness window.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cards can now carry image, audio, and video attachments uploaded to
mana-media (the existing CAS service that already powers picture,
photos, wardrobe, etc.).
Pipeline:
• lib/media/upload.ts wraps POST /api/v1/media/upload (multipart,
app=cards). Returns { id, url, kind } with the right variant URL
per kind (medium for images, full file for audio/video). 25 MB
cap matches the website-upload pattern.
• mediaToFieldSnippet(): drops Markdown ![]() for images; raw
<audio>/<video controls> for the others — the user can later
tweak attributes by hand.
• Deck-detail card editor gains a "📎 Anhang" button next to every
text field (front/back/cloze). Pick → upload → snippet appended
to the field's content. Loading + error states surfaced inline.
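The branching in mediaToFieldSnippet() could look roughly like this (a sketch under the kinds named above; the signature is an assumption):

```typescript
type MediaKind = "image" | "audio" | "video";

// Sketch of the snippet generation described above: Markdown for
// images, raw HTML elements for audio/video so the user can hand-tune
// attributes later.
export function mediaToFieldSnippet(kind: MediaKind, url: string, alt = ""): string {
  switch (kind) {
    case "image":
      return `![${alt}](${url})`;
    case "audio":
      return `<audio controls src="${url}"></audio>`;
    case "video":
      return `<video controls src="${url}"></video>`;
  }
}
```

Emitting raw elements for audio/video only works because the render pipeline whitelists them, which is what the sanitizer change below is for.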
Render:
• @mana/cards-core/render.ts whitelists `audio`, `source`, `video`
plus the `controls`/`preload`/`src`/`type` attrs in DOMPurify so
inline media survives sanitization. Markdown's <img> already
passed through the default policy.
Wiring:
• hooks.server.ts injects __PUBLIC_MANA_MEDIA_URL__.
• compose adds PUBLIC_MANA_MEDIA_URL_CLIENT=https://media.mana.how
to cards-web.
Phase 2 ideas: drag-drop directly into the textarea, paste-from-
clipboard for screenshots, mana-media auth scoping per user, Anki
import bringing media files along.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2e cleanup. status-page-gen + a dedicated nginx now run on the
GPU-Box (sparse repo clone provides the generator script + mana-apps.ts,
hourly git-pull via systemd timer). Container queries VictoriaMetrics
locally over docker-network ('http://victoriametrics:9090'), no public
vm.mana.how endpoint required — that hostname is also gone from the
GPU tunnel config (v25 → v26 effectively, removed in same PUT that
added status.mana.how).
DNS for status.mana.how now points at the mana-gpu-server tunnel.
Mini-tunnel ingress for it is removed; the previous 'mana-status-gen'
container on the Mini was stopped + rm'd.
Side benefit: closes the inode-stale-bind-mount bug that took
status.mana.how down for a few hours — single-file bind mounts on the
Mini break whenever the CD git checkout rewrites the source file. The
GPU box mounts the same files, but the systemd timer git-pulls in
place, preserving the inode.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After Phase 2c VM moved off the Mini, but the status-page generator
still queried localhost:9090 — and Colima containers can't reach the
GPU-Box's LAN IP through the Mini's bridge. Result: status.mana.how
showed 0/0 services UP across the board.
Routed VM through a new vm.mana.how Public Hostname on the
mana-gpu-server tunnel (config v24) so the Mini-side container reaches
it the same way browsers do. /api/v1/query path is identical, no
script changes required. Network-mode no longer needs to be host now
that the URL is public.
Verified live: status.json reports 49/53 services UP.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2c+2d cleanup. The 14 services that moved to the GPU-Box stack
(grafana, victoriametrics, loki, tempo, promtail, alertmanager,
vmalert, pushgateway, blackbox-exporter, alert-notifier, umami,
glitchtip + worker, forgejo) are now stopped on the Mini and stable
on the GPU box, so the rollback insurance can come out:
- docker-compose.macmini.yml: drop 14 service blocks (-369 lines) +
the now-orphan named volumes (victoriametrics_data, loki_data,
alertmanager_data, grafana_data, tempo_data).
- cloudflared-config.yml: drop the four hostnames whose DNS already
points at the mana-gpu-server tunnel — Mini-tunnel ingress for them
has been dead routing since 2026-05-06, removing the rules just makes
the file match reality. The hostnames now live in the GPU tunnel's
dashboard config (token-managed).
Containers + volumes stay on the Mini for now; running
`docker compose -f docker-compose.macmini.yml --env-file .env.macmini up -d --remove-orphans`
on the box drops them in one go when ready.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First server-side error-tracking integration. Pattern mirrors the
client-side one in apps/mana/apps/web/src/hooks.client.ts:
- pull @mana/shared-error-tracking into mana-auth's deps (workspace pkg
with @sentry/node + a no-op fallback when GLITCHTIP_DSN is unset)
- call initErrorTracking() at the top of services/mana-auth/src/index.ts
before the rest of the module body executes — this lets Sentry hook
uncaughtException / unhandledRejection before any Hono handlers register
- wrap app.onError so non-HTTPException throws also flow into
captureException with path/method/query context. HTTPExceptions are
intentional 4xx/422 and stay out of the issue list (otherwise every
401 from a stale session would page somebody at 3am)
- compose: pass GLITCHTIP_DSN_MANA_PLATFORM through as GLITCHTIP_DSN per
service so each container's events get tagged with serverName='mana-auth'
DSN itself isn't in the repo; lives in .env.macmini on the Mac Mini and
is referenced from the Glitchtip credentials doc.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Builds out the Cards spinoff end-to-end so the standalone app at
cards.mana.how shares its data layer with the in-mana cards module
through a single pure-utility package.
Why a spinoff and not just a deeper module: per the GUIDELINES, Cards
gets its own brand + URL while reusing mana-auth, mana-sync, and the
mana-credits/billing stack. The in-mana module under mana.how/cards
stays untouched as the integrated experience.
Phase 0 — mana-modul foundation
• New tables cardReviews + cardStudyBlocks (Dexie v61) + plaintext
classification in the crypto registry.
• LocalCard learns a {type, fields} shape; legacy front/back columns
kept as a back-compat mirror so older builds keep rendering.
• FSRS v6 scheduler + Cloze parser + Markdown render pipeline.
• UI in apps/mana/.../routes/(app)/cards/ gets a learn session
(learn/[deckId]), 4-type card editor, due-counter, markdown lists.
Phase 1 — standalone (apps/cards/apps/web)
• SvelteKit 2 + Svelte 5 + Tailwind 4, port 5180.
• Own Dexie 'cards' DB with a slim 5-table schema.
• Own sync engine: pending-changes hooks, 1 s push / 5 s pull against
POST /sync/cards, server-apply with suppression to avoid ping-pong.
• Auth-Gate via @mana/shared-auth-ui (LoginPage / RegisterPage).
• Encryption hooks at every write/read/apply path, currently no-op
stubs — flipping to real vault-backed AES-GCM is a single-file
change in src/lib/data/crypto.ts.
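The suppression mentioned for server-apply is the classic re-echo guard: while server changes are being written into the local DB, the pending-changes hooks must not record them as new local edits. A minimal sketch (all names illustrative; the real engine hangs this off Dexie hooks):

```typescript
// Minimal sketch of apply-suppression for a local-first sync engine.
// While applyServerChanges() writes rows, the write hook consults the
// flag and skips queueing — otherwise every pull would echo back as a
// push ("ping-pong").
let applyingServerChanges = false;
const pendingChanges: string[] = [];

// Called from the local DB's write hook for every row write.
export function onLocalWrite(rowId: string): void {
  if (applyingServerChanges) return; // server-originated: don't re-queue
  pendingChanges.push(rowId);        // user-originated: queue for push
}

export function applyServerChanges(rowIds: string[], write: (id: string) => void): void {
  applyingServerChanges = true;
  try {
    for (const id of rowIds) {
      write(id);        // in the real engine, the write hook fires here…
      onLocalWrite(id); // …simulated explicitly in this sketch
    }
  } finally {
    applyingServerChanges = false;
  }
}

export function pending(): readonly string[] {
  return pendingChanges;
}
```

The try/finally matters: an exception mid-apply must not leave the flag stuck, or every subsequent user edit would be silently dropped from the push queue.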
Shared package — @mana/cards-core
• Pulls types, cloze, card-reviews, FSRS wrapper, and Markdown
renderer out of the mana module so both frontends import from one
source. mana-modul keeps thin re-export shims so consumers don't
need to change imports.
• 19 vitest tests carried over from the mana module.
Server-side wiring
• cards.mana.how added to mana-auth PRODUCTION_TRUSTED_ORIGINS and
its CORS_ORIGINS env (sso-config.spec.ts stays green).
• New cards-web container in docker-compose.macmini.yml (mirrors
manavoxel-web pattern, 128m, depends on mana-auth healthy).
• cloudflared-config.yml repoints cards.mana.how from :5000 (the
unified mana-web container) to :5180. mana.how/cards is unchanged.
Cleanup
• Removed an unrelated 2026-03/04 NestJS+Supabase+Expo experiment
that was lingering under apps/cards/ (apps/landing, supabase/,
.github/workflows, MANA_CORE_*.md, etc.). It predated this plan
and would have confused future readers.
Validation
• svelte-check on mana-web: 0 errors over 7697 files
• svelte-check on cards-web: 0 errors over 3481 files
• vitest on cards-core: 19/19 pass
• pnpm check:crypto: 214 tables classified
• bun test sso-config.spec.ts: 8/8 pass
• vite build on cards-web: green
Not done in this commit (deliberate)
• Real encryption (vault roundtrip) — Phase 2.
• WebSocket-driven pull (5 s polling for now).
• Mobile/landing standalone surfaces — Phase 2/3.
• The actual production cutover on the Mac mini (build, deploy,
cloudflared sync) — config is staged, deploy is a user action.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Arcade lives as its own pnpm workspace at ~/Documents/Code/arcade
now, with no @mana/* coupling. This drops every reference and the
games/ directory from the monorepo.
Removes:
- games/ directory (89 files: web + server + 22 HTML games + screenshots)
- @arcade/web, @arcade/server pnpm workspace entries (games/* globs)
- arcade scripts in root package.json (4 scripts)
- arcade.mana.how from mana-auth trusted origins + CORS_ORIGINS
- arcade entries in mana-apps registry, app-icons, URL overrides
- arcade.mana.how from cloudflared tunnel + prometheus blackbox probes
- arcade-web service block in docker-compose.macmini.yml
- generate-env.mjs entries for arcade server + web
- BRANDING_ONLY 'arcade' entry in registry consistency spec
- dead arcade translation keys in GuestWelcomeModal (DE+EN)
- arcade mention in CLAUDE.md, authentication guideline, MODULE_REGISTRY
Verified:
- services/mana-auth/src/auth/sso-config.spec.ts: 8/8 pass
- pnpm install regenerates lockfile cleanly (-536 lines)
- no remaining 'arcade' refs outside historical snapshot docs
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cold-start fetches from the mana-geocoding container to photon-self
on mana-gpu (over WSL2 mirrored networking) consistently take >10s on
the first probe and ~2s once warm. The previous 8s default caused the
chain to false-mark photon-self unhealthy on every cold path, leaking
to public photon for the next 30s health-cache window — and pinning
the public-photon answer in the 7d cache (now shortened to 1h).
Also wires the docker-compose macmini env to honor PROVIDER_TIMEOUT_MS
and CACHE_PUBLIC_TTL_MS overrides so production picks up the new
values without a code rebuild.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pelias was retired from the Mac mini on 2026-04-28; photon-self
(self-hosted Photon on mana-gpu) has been the live primary since then.
This removes the now-dead Pelias adapter, config, tests, and the
services/mana-geocoding/pelias/ stack — the entire compose file, the
geojsonify_place_details.js patch, the setup.sh import script.
Provider chain is now `photon-self → photon → nominatim`. The chain
keeps its `privacy: 'local' | 'public'` split, sensitive-query
blocking, coord quantization, and aggressive caching unchanged.
Three direct calls to nominatim.openstreetmap.org that bypassed
mana-geocoding now route through the wrapper:
- citycorners/add-city + citycorners/cities/[slug]/add use the shared
searchAddress() client (browser → same-origin proxy → mana-geocoding
→ photon-self).
- memoro mobile drops its OSM reverse-geocoding fallback entirely;
Expo's on-device reverse-geocoding stays as the sole path. Routing
through the wrapper would require a memoro-server proxy endpoint —
a follow-up if Expo's quality proves insufficient.
Other behavioral changes:
- CACHE_PUBLIC_TTL_MS dropped from 7d → 1h. The long TTL was a
privacy-amplification trick from the Pelias era; with photon-self
serving the bulk of traffic, a transient cross-LAN blip was pinning
cached fallback answers for days. 1h gives quick recovery.
- /health/pelias renamed to /health/photon-self; prometheus blackbox
config + status-page generator updated.
- mana-geocoding container no longer needs `extra_hosts:
host.docker.internal:host-gateway` (was only there for the
Pelias-on-host-network era).
113 tests passing. CLAUDE.md rewritten to reflect the post-Pelias
architecture.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wrapper supports a `photon-self` provider when PHOTON_SELF_API_URL
is set, but the compose file wasn't forwarding the env-var into the
container. Add it as an env-substitution from .env.macmini so flipping
the GPU-server-hosted Photon on/off is one line in the env file.
Empty string = slot disabled (back-compat with the old config).
Required for the 2026-04-28 Photon-on-mana-gpu migration to take effect.
The wrapper code that consumes this env-var landed in 153ad8049
(dual-Photon support).
The compose mem_limits hadn't been revisited in months. Today's
live `docker stats` snapshot revealed:
- 5 services using <25% of their limit (waste)
- 3 services using >70% of their limit (OOM risk during spikes)
Adjusted both directions, no container removal, no behaviour change.
Each tweak carries a 1-line rationale in the file with the observed
RSS that motivated it.
Bumped (tight → comfortable):
mana-mon-cadvisor 128m → 160m (was 76% — bursts during stat collection)
mana-mon-alert-notifier 32m → 48m (was 79% — alert-bursts queue up)
mana-core-media 128m → 160m (was 63% — image-thumb spikes)
Trimmed (over-provisioned):
mana-research 256m → 128m (live ~57m, 22%)
mana-mail 256m → 128m (live ~11m bootstrap; legitimate growth headroom)
mana-app-uload-server 256m → 128m (live ~51m, 20%)
mana-service-llm 256m → 128m (live ~46m, 18%; thin proxy to upstream Ollama)
mana-app-llm-playground 128m → 64m (live ~22m, 17%; static-export demo)
Net delta: -496 MiB in compose limits — direct headroom for the
mana-web Vite build that previously OOM'd on the same VM. Combined
with the build-memory-headroom.sh wrapper (which still pauses the
monitoring stack during heavy builds), the Vite OOM risk is gone
on paper.
Containers will be recreated on next CD pass through `docker compose
up -d` (touched env or recipe). For the trimmed services, the new
limit is well above current RSS so nothing should OOM. For the bumped
services, the old limit was the tight one, so this only relaxes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The module, routes, and public domain are now uniformly named "feedback":
- App registry: id 'community' → 'feedback', name 'Community' → 'Feedback',
  icon Megaphone → HeartHalf (matches the already-global heart-half icon
  on the module header and in the PillNav user menu)
- Module config: communityModuleConfig → feedbackModuleConfig
- Route refs: all href/goto calls in the module views, MyWishesView,
  onboarding wish, and profile MyWishes switched to /feedback
- /feedback/+layout: brand "Mana Community" → "Mana Feedback", Megaphone
  → HeartHalf, the "In Mana öffnen" CTA now points at /?app=feedback
- Public mirror domain: community.mana.how → feedback.mana.how
  (cloudflared-config.yml + docker-compose.macmini.yml CORS_ORIGINS +
  PUBLIC_MANA_ANALYTICS_URL_CLIENT). DNS must be created separately.
- Settings section: the help text now mentions feedback.mana.how
Internal: the community_show_real_name + community_karma DB columns stay
(the migration is out of scope for this rename). The settings search-index
category 'community' also stays — it mirrors the DB schema, not the
user-facing term.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mac Mini was running at 99% memory pressure with 8.6 GB swap active —
load was OK but every cold-container request was paying disk-I/O for
swapped pages. Container observations:
redis 190/192 MB (99 %) — close to OOM, hot keys evicting
victoria 227/256 MB (89 %) — constant GC pressure
glitchtip 232/256 MB (91 %)
umami 223/256 MB (87 %)
Each bumped to 384 MB, total +512 MB reservation in the Colima VM.
Headroom for that comes from stopping the Pelias stack (~3 GB freed)
in the same change-window.
Redis additionally gets `--maxmemory 320mb --maxmemory-policy allkeys-lru`
so the daemon evicts its own LRU keys at ~80 % of mem_limit instead of
letting the kernel OOM-kill the whole container. Safe for our usage —
Redis only holds rate-limit counters + sync hot-paths, no critical state.
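In compose terms, the Redis change is roughly this (a sketch — service name and image tag are assumptions):

```yaml
services:
  redis:
    image: redis:7-alpine
    mem_limit: 384m
    # Evict LRU keys at ~80% of the container limit instead of letting
    # the kernel OOM-kill the whole container. Safe here: Redis only
    # holds rate-limit counters + sync hot-path data.
    command: ["redis-server", "--maxmemory", "320mb", "--maxmemory-policy", "allkeys-lru"]
```

With `allkeys-lru`, Redis may evict any key once `maxmemory` is reached, which is exactly the behaviour wanted for ephemeral counters.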
Pelias stays stopped pending a migration to mana-gpu; mana-geocoding
will need a Nominatim fallback before the migration so the Places
module's address lookup keeps working.
`sync_changes` lives in DB `mana_sync` (the dedicated sync engine
database), not in `mana_platform` where mana-auth's other queries land.
The compose env had this miswired since SYNC_DATABASE_URL was first
introduced — F4's bootstrapUserSingletons (`c07db300b`) ran into
"relation sync_changes does not exist" but its fire-and-forget caller
swallowed the failure silently.
Punkt 3's explicit `/api/v1/me/bootstrap-singletons` endpoint
(`099cac4a0`) surfaced the misconfig as a user-visible 500 on first
real boot, which is how it got caught.
This also unbreaks user-data.ts (GDPR data summary entity counts +
account deletion's sync-row cleanup) which was returning 0 / no-op
for the same reason.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 3.A of the feedback-rewards-and-identity plan. A direct reciprocity
loop: users immediately get something back for contributing, original
wishers are rewarded on ship, and reactors get a share.
mana-credits:
- New endpoint POST /api/v1/internal/credits/grant + a grantCredits()
  service method with idempotency via metadata.referenceId.
- transaction_type enum extended with 'grant' (its own type instead of
  mislabeling it as 'refund').
- Migration 0001_grant_transaction_type.sql + a partial index on
  metadata->>'referenceId' for O(log n) idempotency lookups.
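The referenceId idempotency boils down to check-before-insert on a unique key. A sketch with an in-memory map standing in for the partial-indexed transactions table (all names illustrative):

```typescript
// Sketch of grantCredits() idempotency via metadata.referenceId.
// In the real service the lookup hits a partial index on
// metadata->>'referenceId'; here a Map stands in for the table.
const grantsByReferenceId = new Map<string, number>();

export function grantCredits(
  userId: string,
  amount: number,
  referenceId: string,
): { granted: boolean; amount: number } {
  const existing = grantsByReferenceId.get(referenceId);
  if (existing !== undefined) {
    // Same referenceId seen before: a replay, not a new grant.
    return { granted: false, amount: existing };
  }
  grantsByReferenceId.set(referenceId, amount);
  return { granted: true, amount };
}
```

Keys like `<id>_shipped` and `<id>_reaction_<userId>` make each payout event globally unique, which is what makes the double-pay case structurally impossible.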
mana-analytics:
- FeedbackService stempelt sofort +5 Credits beim createFeedback (top-
level only, Replies bekommen nichts), wenn Mindest-20-Zeichen erfüllt
und Rate-Limit (10/User/24h via feedback_grant_log) nicht überschritten.
- adminUpdate triggert beim FRISCHEN Übergang nach 'completed':
+500 Credits an Original-Wisher + +25 an alle, die mit 👍 oder 🚀
reagiert haben. Doppel-Pay strukturell unmöglich via referenceId
(`<id>_shipped`, `<id>_reaction_<userId>`).
- Founder-Whitelist via FEEDBACK_FOUNDER_USER_IDS env (verhindert
Self-Reward).
- Drop voteCount-Spalte (durch reactions/score seit 0002 ersetzt).
- Migration 0003_grant_log_drop_vote_count.sql idempotent, lokal +
prod eingespielt.
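The double-payout guarantee above rests entirely on deterministic reference IDs checked against the partial index. A minimal sketch of that idempotency scheme, with a Map standing in for the Postgres table and all names assumed:

```typescript
// Grant idempotency keyed on metadata.referenceId: a replayed grant with
// the same referenceId is a no-op, so retries can never double-pay.
type Grant = { userId: string; amount: number; referenceId: string };

const granted = new Map<string, Grant>(); // stand-in for the partial index

function grantCredits(g: Grant): { applied: boolean } {
  if (granted.has(g.referenceId)) return { applied: false }; // replay → no-op
  granted.set(g.referenceId, g);
  return { applied: true };
}

// Ship payout + per-reactor payouts use deterministic reference IDs:
const wishId = "wish-42";
grantCredits({ userId: "wisher", amount: 500, referenceId: `${wishId}_shipped` });
grantCredits({ userId: "fan", amount: 25, referenceId: `${wishId}_reaction_fan` });
grantCredits({ userId: "wisher", amount: 500, referenceId: `${wishId}_shipped` }); // ignored
```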
Plan: docs/plans/feedback-rewards-and-identity.md (Phase 3.A-3.F).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
analytics.mana.how already existed in DNS as a non-CNAME record, so we
picked the user-facing 'community.mana.how' subdomain instead. Added the
tunnel ingress + matched the CORS origin + client-side env var.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mana-web SSR + browser need the analytics URL so the inline
FeedbackHook + /community page can talk to the new public-feedback
endpoints. SSR uses the internal docker hostname; browser uses the
public subdomain.
Note: analytics.mana.how DNS + Caddy reverse-proxy block must be
provisioned separately on the Mac Mini before browser-side calls
work — TODO in deploy-followup.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switches the build context to repo-root so the pnpm-workspace install
can pull in @mana/shared-hono. Mirrors the mana-auth/mana-ai pattern
(node+pnpm installer stage → bun runtime stage).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Required by the public-community-hub stamping. Compose enforces the
var via :? syntax — startup fails fast if .env.macmini is missing it,
which beats silently using the dev default in production.
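The fail-fast behavior relies on Compose's `:?` parameter expansion. A minimal sketch, with the service and variable names assumed for illustration:

```yaml
services:
  mana-analytics:
    environment:
      # `docker compose up` aborts with this message when the variable is
      # unset or empty in .env.macmini, instead of falling back to a default.
      FEEDBACK_FOUNDER_USER_IDS: ${FEEDBACK_FOUNDER_USER_IDS:?set in .env.macmini}
```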
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three edge-level fixes applied live to the Mac Mini today, now
committed so the canonical state matches:
1. apps/mana/apps/web/Dockerfile: add COPY for @mana/shared-crypto
(added recently as a workspace dep but the Dockerfile missed it,
so pnpm install failed with ERR_PNPM_WORKSPACE_PKG_NOT_FOUND on
every rebuild — same class as the shared-types / shared-ai /
shared-rss fixes earlier today).
2. docker-compose.macmini.yml (mana-web service): set
PUBLIC_MANA_RESEARCH_URL + PUBLIC_MANA_RESEARCH_URL_CLIENT. Without
this pair the SSR-injected window.__PUBLIC_MANA_RESEARCH_URL__ was
empty and research fetches 404'd against the current origin.
3. docker-compose.macmini.yml (umami service): pin image to
postgresql-v2.18.0. The rolling `postgresql-latest` tag jumped to
Umami 3.1.0 (Next.js 16) which crashed the container on every
POST /api/send — browser page loaders hung for up to 10s on the
failing tracker request. v2.18.0 is the last known-stable v2;
DB schema is still v2-compatible so the downgrade is clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Redis runs with --requirepass, but mana-research was pointing at
redis://redis:6379 without credentials. Cache misses are not fatal
(the executor falls back to the upstream provider on every request)
but the NOAUTH error spam drowns real errors in logs/glitchtip.
Match the pattern other services use:
redis://:${REDIS_PASSWORD:-redis123}@redis:6379
Caught during the deep-research deploy smoke-test on 2026-04-22.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Opt-in path for missions that want Gemini Deep Research Max (up to 60 min
per task) instead of the shallow RSS pre-research. Because Max runs well
past a single 60-second tick, the state is carried across ticks:
tick N: submit → INSERT mission_research_jobs row → skip planner
tick N+k: poll → still running → skip planner (metric pending_skips)
tick N+m: poll → completed → inject as ResolvedInput, DELETE row, plan
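The three-tick lifecycle above can be sketched as a small state machine. A Map stands in for mana_ai.mission_research_jobs; function and type names are illustrative, not the actual implementation:

```typescript
// Presence in the map is the "pending" flag; deleting on the terminal
// state keeps queries trivial (mirrors the PK (user_id, mission_id) design).
type JobState = "running" | "completed";
const jobs = new Map<string, JobState>(); // key: `${userId}:${missionId}`

type TickResult = "submitted" | "pending_skip" | "planned";

function handleDeepResearch(key: string, poll: () => JobState): TickResult {
  if (!jobs.has(key)) {
    jobs.set(key, "running"); // tick N: submit, skip planner this tick
    return "submitted";
  }
  if (poll() === "running") return "pending_skip"; // tick N+k: still running
  jobs.delete(key); // tick N+m: terminal → delete row, inject ResolvedInput
  return "planned";
}
```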
- ManaResearchClient talks to mana-research's new internal
/v1/internal/research/async endpoints with X-Service-Key +
X-User-Id. Graceful-null on transport errors so a flaky
mana-research never crashes the tick loop.
- New table mana_ai.mission_research_jobs with PK (user_id, mission_id)
— presence is the "pending" flag; delete-on-terminal keeps queries
trivial.
- handleDeepResearch() encapsulates the state machine; planOneMission
now returns a discriminated union (planned | skipped | failed) so
"research pending" isn't miscounted as a parse failure.
- Opt-in at TWO gates to keep cost in check ($3–7/task, 1500 credits
per run):
1. MANA_AI_DEEP_RESEARCH_ENABLED=true server-side (default off)
2. DEEP_RESEARCH_TRIGGER regex matches the mission objective
(strict: "deep research", "tiefe recherche", "umfassende
recherche", "hintergrundrecherche", "deep dive")
Falls back to shallow RSS when either gate fails or the submit
errors upstream.
- Prom metrics: mana_ai_research_jobs_{submitted,completed,failed}_total
labelled by provider, plus _pending_skips_total.
- docker-compose wires MANA_RESEARCH_URL + the opt-in flag and adds
mana-research to depends_on.
- Full write-up with real API response shape (outputs plural, not
OpenAI-style), step-3 MCP-server plan (security-gated, not built),
ops + kill-switch: docs/reports/gemini-deep-research.md.
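The two opt-in gates can be sketched as follows; the exact DEEP_RESEARCH_TRIGGER pattern is an assumption built from the keywords listed above:

```typescript
// Gate 1: server-side env flag (default off). Gate 2: the mission
// objective must explicitly ask for deep research.
const DEEP_RESEARCH_TRIGGER =
  /deep research|tiefe recherche|umfassende recherche|hintergrundrecherche|deep dive/i;

function deepResearchEnabled(
  env: Record<string, string | undefined>,
  objective: string,
): boolean {
  if (env.MANA_AI_DEEP_RESEARCH_ENABLED !== "true") return false; // gate 1
  return DEEP_RESEARCH_TRIGGER.test(objective); // gate 2
}
```

When either gate fails the mission falls back to the shallow RSS pre-research, so the expensive path stays strictly opt-in.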
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New Bun/Hono service on port 3068 that bundles many web-research providers
behind a unified interface for side-by-side comparison. All eval runs
persist in research.* (mana_platform) so quality can be reviewed later.
Providers (Phase 1+2):
search: searxng, duckduckgo, brave, tavily, exa, serper
extract: readability (via mana-search), jina-reader, firecrawl
Endpoints:
POST /v1/search, /v1/search/compare — single + fan-out
POST /v1/extract, /v1/extract/compare — single + fan-out
GET /v1/runs, /v1/runs/:id — history
POST /v1/runs/:run/results/:id/rate — manual eval
GET /v1/providers, /v1/providers/health — catalog + readiness
Auto-routing: when `provider` is omitted, queries are classified via regex
(fast path, 0ms) with optional mana-llm fallback, then routed to the first
available provider for that query type (news → tavily, academic → exa,
semantic → exa, etc.).
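The regex fast path plus availability-aware routing might look like this; the patterns and provider table are assumptions, not the service's actual rules:

```typescript
// Classify the query via cheap regexes, then pick the first provider for
// that query type that is currently available.
type QueryType = "news" | "academic" | "general";

function classifyQuery(q: string): QueryType {
  if (/\b(news|breaking|today|latest)\b/i.test(q)) return "news";
  if (/\b(paper|study|arxiv|doi)\b/i.test(q)) return "academic";
  return "general";
}

const routing: Record<QueryType, string[]> = {
  news: ["tavily", "brave"],
  academic: ["exa", "serper"],
  general: ["searxng", "duckduckgo"],
};

function route(q: string, available: Set<string>): string | undefined {
  return routing[classifyQuery(q)].find((p) => available.has(p));
}
```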
Credits: server-key calls go through mana-credits reserve → commit/refund
so failed provider calls don't charge the user. BYO-keys supported via
research.provider_configs (UI arrives in Phase 4).
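The reserve → commit/refund flow is what keeps failed provider calls free for the user. A hedged sketch with an in-memory balance and an illustrative API shape:

```typescript
// Credits are held up-front; a provider failure refunds the hold instead
// of charging the user.
let balance = 100;

function reserve(amount: number) {
  if (balance < amount) throw new Error("insufficient credits");
  balance -= amount; // held, not yet spent
  return {
    commit() { /* held amount is kept as the final charge */ },
    refund() { balance += amount; },
  };
}

function chargedSearch(run: () => string): string {
  const hold = reserve(5); // assumed per-call price
  try {
    const result = run();
    hold.commit();
    return result;
  } catch (err) {
    hold.refund(); // provider failed → user pays nothing
    throw err;
  }
}
```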
Cache: Redis with graceful degradation (1h TTL for search, 24h for
extract). Pay-per-use APIs only — no subscription-gated providers.
Docs: docs/plans/mana-research-service.md + docs/reports/web-research-capabilities.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two major tool expansions — the Recherche-Agent and Today-Agent can
now research the web autonomously (no browser needed), and a future
Meeting-Prep agent can read + create contacts.
=== research_news (server-side execution) ===
The biggest addition: mana-ai can now call mana-api's news-research
endpoints (POST /discover + /search) directly, without a browser.
Infrastructure:
- services/mana-ai/src/planner/news-research-client.ts — full HTTP
client with discover→search pipeline. 15s/30s timeouts. Graceful
null on any failure (network, mana-api down, bad response) so the
tick never crashes from research errors.
- config.manaApiUrl added (default http://localhost:3060); wired in
docker-compose.macmini.yml as http://mana-api:3060 + depends_on
mana-api with service_healthy condition.
Pre-planning research step (cron/tick.ts):
- Before the planner prompt is built, the tick checks if the
mission's objective or conceptMarkdown matches research keywords
(same RESEARCH_TRIGGER regex the webapp uses). When it matches:
* NewsResearchClient.research(objective) runs discovery + search
* Results are injected as a synthetic ResolvedInput with id
'__web-research__' and a formatted markdown context block
* The Planner then sees real article URLs/titles/excerpts and can
reference them in create_note / save_news_article steps
* Log line: "pre-research: N feeds, M articles"
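The pre-planning step above can be sketched as one function; RESEARCH_TRIGGER and the input shape are assumptions based on the description:

```typescript
// Match the objective against the trigger regex, run research, and inject
// the results as a synthetic ResolvedInput the planner can reference.
type ResolvedInput = { id: string; markdown: string };

const RESEARCH_TRIGGER = /recherche|research|news/i; // assumed pattern

function preResearch(
  objective: string,
  research: (q: string) => { feeds: number; articles: string[] } | null,
): ResolvedInput | null {
  if (!RESEARCH_TRIGGER.test(objective)) return null; // no keyword → skip
  const result = research(objective); // graceful null on any failure
  if (!result) return null;
  console.log(`pre-research: ${result.feeds} feeds, ${result.articles.length} articles`);
  return {
    id: "__web-research__",
    markdown: result.articles.map((a) => `- ${a}`).join("\n"),
  };
}
```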
Tool registration:
- research_news added to AI_PROPOSABLE_TOOL_NAMES + mana-ai tools.ts
with params (query, language?, limit?). This lets the planner also
explicitly propose a research step as a PlanStep (in addition to
the pre-planning auto-injection).
=== create_contact ===
- Added to AI_PROPOSABLE_TOOL_NAMES + mana-ai tools.ts with params
(firstName required, lastName/email/phone/company/notes optional).
- Contacts are encrypted at rest; server planner can plan the step
but execution stays on the webapp (same as all propose tools).
Full server-side contact resolution via Key-Grant is a future
enhancement.
- get_contacts added to webapp AUTO_TOOLS so agents can inspect
existing contacts without nagging (read-only, auto-policy).
Module coverage now:
✅ todo (5) ✅ calendar (2) ✅ notes (5) ✅ places (4)
✅ drink (3) ✅ food (2) ✅ news (1) ✅ journal (1)
✅ habits (3) ✅ news-research (1) ✅ contacts (1)
11 modules, 28 tools total (17 propose, 11 auto).
Tests: mana-ai 41/41 (drift-guard passes), shared-ai type-check
clean, webapp svelte-check 0 errors, 0 warnings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Webapp now passes:
- PUBLIC_MANA_AI_URL / PUBLIC_MANA_AI_URL_CLIENT → getManaAiUrl()
resolves these; powers the Workbench "Datenzugriff" tab fetch.
- PUBLIC_AI_MISSION_GRANTS (default false) → gates the MissionGrant
dialog + audit tab. Flip to "true" in .env once the keypair is
provisioned.
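The URL resolution behind getManaAiUrl() follows the window-injection pattern used elsewhere in the stack: hooks.server.ts injects the `_CLIENT` (public-domain) value into the page, while SSR keeps the Docker-internal hostname. A sketch under those assumptions (helper name, property name, and the 3067 fallback are illustrative; `globalThis` stands in for `window`, where the injected property also lives in a browser):

```typescript
// Browser: prefer the injected public-domain URL. SSR: the Docker-internal
// hostname is reachable from inside the compose network.
function getManaAiUrl(serverEnv: Record<string, string | undefined>): string {
  const injected = (globalThis as { __PUBLIC_MANA_AI_URL__?: string })
    .__PUBLIC_MANA_AI_URL__;
  if (injected) return injected; // set by hooks.server.ts from the _CLIENT var
  return serverEnv.PUBLIC_MANA_AI_URL ?? "http://localhost:3067"; // SSR / dev
}
```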
Follow-up for operator: add a Cloudflare tunnel route for
mana-ai.mana.how → mana-ai:3067 (mirroring the existing pattern
for credits/events/llm) so the audit fetch resolves from the browser.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire the Mission Key-Grant feature into the production Mac Mini
compose stack so mana-ai can boot and mana-auth can mint grants.
- New mana-ai service block (port 3066) — 256m mem limit, depends on
postgres + mana-llm, tick interval configurable via
MANA_AI_TICK_INTERVAL_MS / MANA_AI_TICK_ENABLED. Pulls
MANA_AI_PRIVATE_KEY_PEM from env; absent = grants silently disabled.
- mana-auth environment gains MANA_AI_PUBLIC_KEY_PEM (default empty
so existing deployments without the keypair degrade to 503
GRANT_NOT_CONFIGURED rather than failing to boot).
- mana-auth Dockerfile rewritten to the two-stage pnpm+bun pattern
used by mana-credits/mana-events — required now that mana-auth has
a @mana/shared-ai workspace dep. The previous single-stage
Dockerfile with service-scoped build context couldn't resolve any
@mana/* imports; that only worked historically because it fell
through at runtime via a pre-built layer.
- mana-ai Dockerfile copies packages/shared-ai into the installer
stage alongside shared-hono.
The build contexts for mana-auth flip from services/mana-auth to the
repo root. Existing CI/CD paths (scripts/mac-mini/build-app.sh) pass
through to docker compose build and pick up the new context
automatically — no script edits needed.
Flip-on procedure: on the Mac Mini, set MANA_AI_PUBLIC_KEY_PEM +
MANA_AI_PRIVATE_KEY_PEM in .env (already done, see
secrets/mana-ai/README.md on the host), then rebuild mana-auth +
build mana-ai.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three small config changes so the Kontext "Aus URL" ("from URL") flow
(next commit) is runnable from a plain `pnpm dev:mana:all`:
- package.json: include mana-crawler in the dev:mana:servers
concurrently group, and pass DATABASE_URL=…/mana_platform so the
Go binary doesn't try to connect to a non-existent `mana` DB (its
hardcoded default).
- .env.development: publish MANA_CRAWLER_URL=http://localhost:3023
(the crawler's default binary port — the macmini container is
a 3014 override, kept only in docker-compose). Also surface
MANA_LLM_DEFAULT_MODEL for the summariser.
- docker-compose.macmini.yml: inject MANA_CRAWLER_URL + the
default-model env into the mana-api container so production
can reach the internal crawler and pick the summariser model
consistently.
No runtime code touched.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>