blackbox-exporter can't resolve host.docker.internal on Colima, so
probes of host.docker.internal:4000 and :9200 always fail. Instead,
add a /health/pelias endpoint on the Hono wrapper that proxies to
the Pelias API, and update prometheus.yml to probe the wrapper's
proxied health endpoint.
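A manual replay of what Prometheus now asks blackbox-exporter to do (the exporter's service name, port, and module name below are assumptions, adjust to whatever prometheus.yml actually uses):

    # run from any container on the monitoring network
    curl -fsS 'http://blackbox-exporter:9115/probe?module=http_2xx&target=http://mana-geocoding:3018/health/pelias'
    # probe_success 1 in the output means the wrapper reached Pelias through the proxy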
Also simplifies the status page friendly_name() now that we don't
need to display the host.docker.internal targets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Production deployment + observability for the self-hosted geocoding stack:
**docker-compose.macmini.yml**
- New mana-geocoding container (port 3018, internal-only — no traefik
labels, no Cloudflare route). Uses host.docker.internal to reach the
Pelias API on the host's pelias compose stack. Dockerfile added under
services/mana-geocoding/ using the same Bun/Hono pattern as mana-events.
**Prometheus**
- New blackbox-internal job probing mana-geocoding:3018/health, the
Pelias API on host.docker.internal:4000/v1/status, and Elasticsearch
at host.docker.internal:9200/_cluster/health. Kept separate from
blackbox-api which is reserved for public HTTPS endpoints.
**status.mana.how (generate-status-page.sh)**
- Include blackbox-internal in the metric query and add an "Interne
Dienste" section with its own summary card, right between Infrastruktur
and GPU Dienste. Summary grid goes from 4 to 5 columns with a
900px breakpoint.
- friendly_name() now handles http:// URLs and rewrites container-name
hosts like mana-geocoding:3018/health → "Mana Geocoding",
host.docker.internal:4000 → "Pelias API",
host.docker.internal:9200 → "Pelias Elasticsearch".
**Grafana uptime dashboard**
- Add an "Internal" series to the "Alle Dienste — Uptime-Verlauf" panel
- New "Interne Dienste Status" table panel showing per-instance up/down
- New "Geocoding Ø Latenz" stat panel for probe_duration_seconds
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- docker-compose: add empty default for AZURE_OPENAI_API_KEY to suppress
Docker Compose "variable is not set" warning
- setup-databases.sh: detect when pnpm filter matches no packages and
report "Skipped" instead of false "Schema pushed" success
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hit "container name already in use" / "removal in progress" errors
three times during today's Phase 5 deploys. The previous restart
pattern was just `compose up -d --no-deps`, which fails when:
1. A previous interrupted recreate left a stale container under
the canonical name. The new `up` tries to claim the name and
gets a conflict.
2. Compose's recovery from #1 sometimes creates a hash-prefixed
orphan container (`<hash>_<container_name>`), which then
blocks the next clean run too.
3. Even `--force-recreate` can't always handle the case because
the old container is in the middle of being removed when the
new one is being created (race).
Three-step replacement that's reliable across all three failure modes:
Step 1 — `docker compose rm -fs SERVICES`
Stops + force-removes the canonical compose-managed container.
Idempotent: does nothing if already gone. Filters out the
"No stopped containers" log noise so the output stays clean.
Step 2 — orphan sweep via `docker rm -f`
For each service, look up its container_name from the
compose config (falls back to the service name if not set),
then `docker ps -aq --filter name=^${cname}$` for the canonical
one and `name=_${cname}$` for hash-prefixed orphans. Anything
found gets nuked. This catches the case where compose's own
state has lost track of an orphan it created earlier.
Step 3 — `docker compose up -d --no-deps --remove-orphans`
Creates the fresh container. The `--remove-orphans` flag also
silences the "Found orphan containers ([mana-game-whopixels])"
warning we kept seeing — that's a leftover from a removed
service that nobody had cleaned up.
The container_name extraction uses awk on `compose config` output
(verified locally: `mana-web` → `mana-app-web`) so the script doesn't
need a hard-coded service→container mapping.
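Condensed, the whole replacement looks roughly like this (compose file and service names are illustrative; the real logic lives in the deploy script):

    services="mana-web"
    compose="docker compose -f docker-compose.macmini.yml"

    # Step 1: stop + force-remove the canonical containers, quietly
    $compose rm -fs $services 2>&1 | grep -v 'No stopped containers' || true

    # Step 2: sweep canonical + hash-prefixed orphans by container name
    for svc in $services; do
      cname=$($compose config | awk -v s="$svc" '
        $1 == s":"                        { in_svc = 1; next }
        in_svc && $1 == "container_name:" { print $2; exit }
        in_svc && /^  [A-Za-z0-9_-]+:$/   { in_svc = 0 }')
      cname=${cname:-$svc}
      for id in $(docker ps -aq --filter "name=^${cname}$" --filter "name=_${cname}$"); do
        docker rm -f "$id"
      done
    done

    # Step 3: fresh containers, plus cleanup of anything compose itself considers orphaned
    $compose up -d --no-deps --remove-orphans $services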
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds scripts/validate-cloudflared-config.mjs — a node-only validator
that lint-staged runs whenever cloudflared-config.yml is staged. The
goal is to catch the same failure modes that
`cloudflared tunnel ingress validate` would catch on the server, but
without requiring cloudflared to be installed on every dev box.
Checks:
- YAML parses
- tunnel: is a uuid
- credentials-file: ends with .json and contains the tunnel id
(warning when it doesn't — likely an out-of-sync remnant from a
previous rebuild, exactly the failure mode that bit us in the
first locally-managed switch)
- ingress: is a non-empty array
- every rule except the last has both hostname AND service
- the LAST rule is the catch-all `service: http_status:NNN`
- no duplicate hostnames (the most common copy-paste mistake)
- service URLs look like http(s)://, ssh://, http_status:NNN, unix:/, or hello_world
- hostnames are lowercase dot-separated DNS labels (no spaces, no
weird characters)
Wired into lint-staged.config.js with a single glob entry; the
existing eslint + prettier flow is unchanged.
Tested against the live cloudflared-config.yml (passes, 51 hostnames)
and a synthetic broken file (catches all 6 categories of error +
the credentials-file/tunnel id drift warning).
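Outside lint-staged the validator can be run by hand (lint-staged passes the staged filename as an argument; the same invocation shape is assumed here), and the duplicate-hostname check is roughly this in shell form:

    node scripts/validate-cloudflared-config.mjs cloudflared-config.yml

    # rough shell equivalent of the duplicate-hostname check
    awk '/^[[:space:]]*-?[[:space:]]*hostname:/ { print $NF }' cloudflared-config.yml | sort | uniq -d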
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two improvements to scripts/mac-mini/rebuild-tunnel.sh based on what
the first prod run actually surfaced.
═══ 1. Apex domain auto-fix via Cloudflare API ═══
`cloudflared tunnel route dns` cannot route the apex of a zone
(error code 1003: "An A, AAAA, or CNAME record with that host already
exists"). The CLI has no command to delete those records. The first
rebuild left mana.how returning 530 because the script silently
failed to route it and we had to fix the apex manually in the
dashboard.
The new `apex_route_via_api()` helper:
- Detects apex hostnames by dot count (one dot → two-label name)
- Uses $CLOUDFLARE_API_TOKEN if available
- Resolves the zone id by name
- Deletes any existing A / AAAA / CNAME records on the apex
- Creates a fresh proxied CNAME pointing at <tunnel>.cfargotunnel.com
- Cloudflare's CNAME flattening at the apex makes this work
transparently
If $CLOUDFLARE_API_TOKEN is not set, the script logs a warning at the
top of step 6 and falls back to the old behavior (route fails, user
fixes the apex manually). The token needs Zone:DNS:Edit on the
target zone.
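A minimal sketch of the API sequence, assuming jq is available (zone name and tunnel id are placeholders; the real helper lives in rebuild-tunnel.sh):

    apex="mana.how"; tunnel="<tunnel-uuid>"
    api="https://api.cloudflare.com/client/v4"
    hdr=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -H "Content-Type: application/json")

    zone=$(curl -fsS "${hdr[@]}" "$api/zones?name=$apex" | jq -r '.result[0].id')

    # delete whatever A / AAAA / CNAME currently sits on the apex
    curl -fsS "${hdr[@]}" "$api/zones/$zone/dns_records?name=$apex" \
      | jq -r '.result[] | select(.type == "A" or .type == "AAAA" or .type == "CNAME") | .id' \
      | while read -r rec; do
          curl -fsS -X DELETE "${hdr[@]}" "$api/zones/$zone/dns_records/$rec" >/dev/null
        done

    # recreate the apex as a proxied CNAME to the tunnel (flattened by Cloudflare at the apex)
    curl -fsS -X POST "${hdr[@]}" "$api/zones/$zone/dns_records" \
      --data "{\"type\":\"CNAME\",\"name\":\"$apex\",\"content\":\"$tunnel.cfargotunnel.com\",\"proxied\":true}"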
═══ 2. Smarter HTTP verification ═══
The first run reported "5 hosts down (404/000)" but those were all
backend services without a root handler — credits/media/llm/mana-api
all return 404 at `/` and 200 at `/health`. The verify pass was
flagging healthy services as down and made the rebuild look more
broken than it was.
New `probe_host()` tries `/health` first, falls back to `/` only if
/health returned 4xx, and prefers a 2xx/3xx root response over a 4xx
/health. `probe_is_down()` only counts 5xx and 000 (libcurl error)
as failures — anything in 1xx-4xx means the request reached the
origin and the tunnel routing is correct, which is the actual thing
the verify pass cares about. `probe_label()` adds a one-word health
summary so the verify log reads "200 ok" / "401 auth required" /
"404 routed (no handler)" / "530 tunnel error" instead of just bare
status codes.
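A minimal sketch of the probe trio (names match this message; the real functions carry more bookkeeping):

    probe_host() {
      local host=$1 health root
      health=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$host/health")
      [[ $health =~ ^4 ]] || { echo "$health"; return; }      # only a 4xx /health triggers the root fallback
      root=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$host/")
      [[ $root =~ ^[23] ]] && echo "$root" || echo "$health"  # prefer a 2xx/3xx root over a 4xx /health
    }

    probe_is_down() { [[ $1 == 000 || $1 =~ ^5 ]]; }          # 1xx-4xx reached the origin, so routing is fine

    probe_label() {
      case $1 in
        2??)     echo "ok" ;;
        401|403) echo "auth required" ;;
        404)     echo "routed (no handler)" ;;
        530)     echo "tunnel error" ;;
        000)     echo "unreachable" ;;
      esac
    }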
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two related AI-infrastructure hardenings landing together because both
touch the same nutriphi/planta route definitions:
═══ 1. Wire-format schema versioning ═══
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks a single envelope dialect:
{ schemaVersion: '1', data: <validated object> }
Backend wraps via a small `envelope()` helper in each module's routes.ts;
frontend api.ts unwraps via `unwrapEnvelope<T>()` which throws an
AiSchemaVersionMismatchError if the server returns a version this
client wasn't compiled against.
Why this matters before launch:
- Catches stale-cache scenarios immediately ("client v1 talking to
server v2") with an actionable error in the network panel, not a
cascade of "field is undefined" bugs further down the stack
- Forces explicit version bumps when we make non-additive schema
changes — the bump rules are documented inline next to the constant
- Cheap to remove if it ever feels overkill: drop the envelope() call
on the backend and the unwrapEnvelope on the frontend, ~10 lines
═══ 2. Anthropic prompt-caching directive (forward-compat) ═══
Adds `providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } }`
on the system message in nutriphi + planta routes via a SYSTEM_CACHE_HINT
constant. This is a NO-OP today because:
- mana-llm currently routes to Gemini, not Claude
- Our system prompts are ~50 tokens, well under Anthropic's 1024-token
cache minimum
Kept anyway because it's ~5 lines per route and lights up automatically
when either condition flips (e.g. when we add per-user dietary preferences
as system context, pushing prompts past the threshold). The day we point
mana-llm at Claude Sonnet, every existing call site already has caching
enabled — no scavenger hunt through the routes.
System messages had to migrate from the `system:` shorthand to a full
messages[] entry to attach providerOptions, which is a tiny readability
loss but the only way to get per-message metadata into the AI SDK.
═══ Tests ═══
13 new cases in apps/mana/apps/web/.../nutriphi/ai-schemas.test.ts cover:
- AI_SCHEMA_VERSION presence + AiSchemaVersionMismatchError shape
- MealAnalysisSchema acceptance/rejection (confidence bounds, missing
nutrients, optional food fields, default empty arrays)
- PlantIdentificationSchema (every-field-optional design, defaults,
confidence range)
(Test file lives in the web app rather than packages/shared-types
because the latter has no test runner configured — adding vitest there
just for these would be overkill.)
Total nutriphi + planta suite: 62/62 passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reconciles the in-repo cloudflared-config.yml with the actually-loaded
ingress map on the Mac Mini production tunnel — the previous repo file
was missing 30+ hostnames (per-app subdomains, mana-api, sync, llm,
media, credits, subscriptions, etc.) because it was last updated
before the unified Mana web app rollout. Adds the new mana-api.mana.how
ingress for apps/api on port 3060 so the unified backend has a public
client URL for the SvelteKit web app's PUBLIC_MANA_API_URL_CLIENT.
Drops the dead matrix.mana.how / element.mana.how routes — the matrix
subsystem was removed in 2514831a3 and those services no longer exist.
Adds scripts/mac-mini/sync-tunnel-config.sh — the one-command flow for
shipping a tunnel-config change: pull on the server, validate the
yaml, kickstart cloudflared via launchctl. setup-cloudflared-service.sh
already wires the launchd plist with --config <repo-path> pointing at
this file, so a fresh Mac Mini install + setup script + sync script
gives you a fully reproducible tunnel.
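The sync flow, spelled out (repo path and launchd label/domain are placeholders; the script owns the real values):

    # on the Mac Mini
    git -C ~/mana pull --ff-only
    cloudflared tunnel --config ~/mana/cloudflared-config.yml ingress validate
    sudo launchctl kickstart -k system/com.cloudflare.cloudflared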
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the services/news-ingester Bun service that pulls 25 public RSS/JSON
feeds into news.curated_articles every 15 min, with Mozilla Readability
fallback for thin RSS bodies and 30-day retention. apps/api /feed is
rewritten to read from the new pool table directly instead of the
sync_changes hack, with topics/lang/since/limit/offset query params.
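The query surface of the rewritten /feed, by example (host, mount path, and parameter values are illustrative):

    curl -s 'https://mana-api.mana.how/feed?topics=tech,science&lang=de&since=2026-04-01T00:00:00Z&limit=20&offset=0'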
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench-registry app id 'inventar' did not match its
@mana/shared-branding MANA_APPS counterpart 'inventory', so the tier-
gating join in apps/web/src/lib/app-registry/registry.ts silently
failed for the inventory module — it fell into the "no MANA_APPS
entry, default visible" fallback and was effectively un-gated. The
codebase had also voted overwhelmingly for 'inventar' (53 files) vs
'inventory' (3 files in shared-branding), so the long-standing
mismatch was just bookkeeping debt waiting to bite.
Pre-release, no live data, so the cleanest fix is to align everything
on the English 'inventory':
- Workbench-registry id, module.config.ts appId, module folder, route
folder and i18n locale folder all renamed via git mv
- Standalone apps/inventar/ workspace package renamed
- All imports, store identifiers (InventarEvents → InventoryEvents,
INVENTAR_GUEST_SEED, inventarModuleConfig), i18n keys and href/goto
paths follow the rename
- The German display label "Inventar" is preserved everywhere it is a
user-visible string (page titles, i18n values, toast labels)
- Dexie table prefixes (invCollections, invItems, …) are unchanged
- Drive-by fix: ListView.svelte was querying non-existent
inventarCollections/inventarItems tables — corrected to the actual
invCollections/invItems names from module.config
- The "inventar ↔ inventory id mismatch" workaround comment in
registry.ts is removed since the mismatch no longer exists
module-registry.ts also picks up the user's parallel newsModuleConfig
addition because both edits land in the same import block — keeping
them split would have left the build in an inconsistent state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Local mana-auth has no built-in admin seed, and with `requireEmailVerification`
turned on and no real SMTP configured, every developer ends up writing the same
"register + UPDATE auth.users" SQL incantation by hand. Bundles it
into one idempotent script + a pnpm alias.
    pnpm setup:dev-user                       # creates 3 default accounts
    ./scripts/dev/setup-dev-user.sh foo bar   # creates / repairs one
What it does per user:
1. POST /api/v1/auth/register on mana-auth (so Better Auth's
signUpEmail handles password hashing the way the runtime
expects — no hand-rolled scrypt)
2. UPDATE auth.users SET email_verified = true, access_tier = 'founder'
so the new user can immediately log in AND exercise every
tier-gated module without a tier upgrade dance
Idempotent: existing users get tier + verification re-applied without
touching the password. Re-running after a partial setup is safe.
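Condensed, the per-user flow is roughly the following (register body fields, database name, and psql connection details are assumptions; the script owns the real ones):

    email="dev@example.com"; password="Aa-123456789"

    # 1. let Better Auth hash + store the password
    curl -fsS -X POST "http://localhost:3001/api/v1/auth/register" \
      -H 'Content-Type: application/json' \
      -d "{\"email\": \"$email\", \"password\": \"$password\", \"name\": \"Dev User\"}"

    # 2. verify + promote directly in Postgres
    psql -h "${DB_HOST:-localhost}" -U postgres -d mana_platform -v ON_ERROR_STOP=1 <<SQL
    UPDATE auth.users SET email_verified = true, access_tier = '${TIER:-founder}'
    WHERE email = '$email';
    SQL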
Defaults to three accounts (tills95 / tilljkb / rajiehq @gmail.com,
all with password "Aa-123456789") so the next dev doesn't have to
remember anything. Override via `TIER=alpha` / `DB_HOST=...` env
vars when needed.
Two preflight gates fail loud: psql in PATH + mana-auth reachable
on :3001. ON_ERROR_STOP=1 in psql so a bad SQL run doesn't get
silently swallowed.
Replaces the dangling `seed:dev-user` package.json alias that pointed
at a `pnpm --filter @mana/auth db:seed:dev` script that was never
created — clean rename to `setup:dev-user` to match the existing
`setup:env` / `setup:db` family.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
NOTE: the previous commit 048184bef carried this commit message but
accidentally bundled an unrelated PickerOverlay refactor instead of
this script change (lint-staged stash interaction). This is the
actual fix.
Per-app web Dockerfiles do `FROM sveltekit-base:local` and do NOT
re-COPY packages/shared-* — those packages are baked into the base
image. So a change to packages/shared-utils, packages/shared-ui, etc.
only reaches the live web app if the base image is also rebuilt.
This bit us THREE times on 2026-04-08 alone:
1. CSP fix in shared-utils ('wasm-unsafe-eval') sat unused in
production for over an hour because every `build-app.sh mana-web`
reused the cached base layer with old shared-utils.
2. The BaseListView export in shared-ui after the ListView
consolidation refactor — mana-web's build failed because Rollup
couldn't resolve the new symbol from the stale base.
3. Same shape, different package, repeatedly during the Gemma 4
migration push.
The pattern is identical every time and the manual workaround
(`build-app.sh --base` first) is something you only think to run if
you already know how the layering works. Make the script catch it.
New `is_base_image_stale` helper compares the base image's `Created`
timestamp against the latest git commit touching paths the base image
actually depends on (packages/, docker/Dockerfile.sveltekit-base,
pnpm-lock.yaml). When building any *-web service, if the image is
stale or missing, the base is rebuilt automatically before the
per-app build kicks off, with the triggering commit's oneline
printed for transparency.
Date parsing handles macOS Docker's local-TZ-offset RFC3339 format
(`...+02:00`, not Z). We strip from char 19 onward and parse the
literal local clock time with BSD date (no -u). GNU date is the
fallback for Linux dev boxes. If parsing fails for any reason we
conservatively force a rebuild rather than risk shipping stale code.
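The check, condensed (image tag and dependency paths are from this message; the exact parsing in the script may differ):

    is_base_image_stale() {
      local created img_epoch git_epoch
      created=$(docker image inspect -f '{{.Created}}' sveltekit-base:local 2>/dev/null) || return 0  # missing image: stale
      created=${created:0:19}                                   # keep "YYYY-MM-DDTHH:MM:SS", drop the offset
      img_epoch=$(date -j -f '%Y-%m-%dT%H:%M:%S' "$created" +%s 2>/dev/null) \
        || img_epoch=$(date -d "$created" +%s 2>/dev/null) \
        || return 0                                             # unparseable: rebuild rather than risk staleness
      git_epoch=$(git log -1 --format=%ct -- packages/ docker/Dockerfile.sveltekit-base pnpm-lock.yaml)
      [ "${git_epoch:-0}" -gt "$img_epoch" ]
    }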
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
While adding negative-path integration tests for the auth flow I
discovered that *neither* of the lockout primitives in
services/mana-auth/src/services/security.ts has actually been
working in production. Two independent silent failures that combined
into a "the lockout never triggers, ever" outcome:
1. recordAttempt() inserted into auth.login_attempts with explicit
`id = gen_random_uuid()`, but auth.login_attempts.id is a
`serial integer` column with `nextval('auth.login_attempts_id_seq')`
as default. The UUID-into-integer cast threw a type error every
single time, the bare `catch {}` swallowed it as "non-critical",
and not a single login attempt was ever persisted. Lockout's "5
failures in 15 min" check was running against an empty table.
2. checkLockout() built `attempted_at > ${new Date(...)}` via the
drizzle sql template, but postgres-js cannot bind a JS Date object
directly — it tries to byteLength() the parameter and crashes with
`Received an instance of Date`. Same anti-pattern: bare `catch`,
returns `{locked: false}` (fail-open), no log, completely invisible.
Both are "silent broken since the encryption-vault series of changes"
class — caught only because the integration test for the lockout flow
expected the 6th login attempt to return 429 and got 200 instead.
Fixes:
- recordAttempt(): drop the bogus `id` column from the INSERT (let the
sequence default assign it), default ipAddress to null instead of
letting `${undefined}` collapse the parameter slot, and surface
errors in the catch instead of swallowing them silently.
- checkLockout(): pass `windowStart.toISOString()` instead of the Date
object so postgres-js can serialize it. Same catch upgrade — log the
cause when failing open.
Failure-path test additions (tests/integration/auth-failures.test.ts):
- wrong password: assert 401, no JWT, +1 LOGIN_FAILURE in security_events,
+1 row in auth.login_attempts
- account lockout: 5 failed attempts then 6th returns 429 with
remainingSeconds, even with the correct password
- unverified email login: 403 with code = EMAIL_NOT_VERIFIED
- validate with garbage token: valid !== true
- resend verification: second mail arrives in mailpit
Plus the run-integration-tests.sh helper now runs both .test.ts files
and tests/integration/package.json's `test` script does the same.
Negative control: with the recordAttempt fix reverted (the bogus
gen_random_uuid id re-added), the wrong-password test failed at the
login_attempts assertion; with the checkLockout fix reverted, the lockout
test failed at the 429 assertion. Both fixes verified to be load-bearing.
6 tests, 45 expects, ~1.3s on a warm cache.
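A quick manual reproduction against the integration stack (port from docker-compose.test.yml; request body fields assumed):

    for i in $(seq 1 6); do
      curl -s -o /dev/null -w "attempt $i -> %{http_code}\n" \
        -X POST http://localhost:3091/api/v1/auth/login \
        -H 'Content-Type: application/json' \
        -d '{"email": "lockout@example.com", "password": "definitely-wrong"}'
    done
    # before the fix: six 401s (login_attempts empty, lockout check failing open)
    # after the fix:  five 401s, then 429 with remainingSeconds on the sixth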
Two unrelated bugs in scripts/mac-mini/ensure-containers-running.sh,
both caught while debugging a mana-auth crash loop on 2026-04-08:
1. The recovery path passed --env-file "$PROJECT_ROOT/.env.macmini" to
docker compose, but that file has never existed on the server — only
.env does, and compose auto-loads it from the working directory. The
explicit --env-file silently caused recovered containers to start with
empty secrets (e.g. blank MANA_AUTH_KEK), which made mana-auth crash
the moment it came back up. The auto-recovery loop was therefore
self-defeating: it kept "fixing" auth into the same broken state
every 5 minutes for hours, with no notification because compose
exited 0. Drop --env-file entirely and cd into PROJECT_ROOT so
compose's standard .env discovery applies.
2. mana-infra-minio-init is a one-shot job container that legitimately
sits in "exited" state after running once. The script flagged it as
"stuck" every cycle, tried to "recover" it, and spammed the log with
ERROR lines. Add an explicit ONESHOT_INIT_CONTAINERS allowlist and
skip those names in both the initial scan and the post-recovery
verification.
Also tee compose output into the log so future failures actually leave
a breadcrumb instead of disappearing into the void.
Also: bump @mlc-ai/web-llm from a transitive dep (via @mana/local-llm)
to a direct dep of @mana/web. SvelteKit's adapter-node post-build
Rollup pass uses the web app's direct deps as its externals heuristic;
without this entry it warns "@mlc-ai/web-llm ... could not be resolved
- treating it as an external dependency" on every build. Functionally
harmless (the dynamic import in LocalLLMEngine only fires in the
browser), but the warning hid a real adapter-node misconfiguration
that would have bitten us if we'd ever tried to SSR /llm-test.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Local dev secrets like MANA_STT_API_KEY had no persistent home — they
lived only in the gitignored, generator-overwritten per-app .env files.
Every `pnpm setup:env` wiped them, so devs had to re-paste keys after
any env regeneration. Same recurring friction for MANA_LLM_API_KEY,
MANA_AUTH_KEK, OAuth keys, etc.
New layer: `.env.secrets` at the repo root.
- Gitignored, optional, never required for the build to pass
- Read by generate-env.mjs AFTER .env.development; non-empty values
override the matching key, so the merged result drives every per-app
.env the generator writes
- Empty values fall through to the .env.development defaults — a
freshly-copied .env.secrets.example is a no-op
- One source of truth for all dev secrets, propagated to every app
with one `pnpm setup:env`
Files:
- `.env.secrets.example` — committed template documenting all known
secret keys (mana-stt, mana-llm, auth KEK, sync JWT, MinIO, third-
party APIs). Devs `cp .env.secrets.example .env.secrets` and fill in.
- `.gitignore` — ignores .env.secrets, allows .env.secrets.example
- `scripts/generate-env.mjs` — loads .env.secrets if present, prints
"Loaded N secrets from .env.secrets" so devs see the override
taking effect
- `scripts/setup-secrets.mjs` + `pnpm setup:secrets` — convenience
script that SSHes to mana-server, greps the prod .env for the keys
defined in .env.secrets.example, and writes them locally. Confirms
before overwriting an existing .env.secrets unless --force is set;
reports which keys couldn't be found on the remote so devs know
what's left to fill manually
- `docs/LOCAL_DEVELOPMENT.md` + `docs/ENVIRONMENT_VARIABLES.md` —
walk-through and architecture diagram update
Verified end-to-end:
- `rm .env.secrets apps/mana/apps/web/.env && pnpm setup:env` →
STT key empty (no regression for devs who haven't opted in)
- `pnpm setup:secrets --force && pnpm setup:env` →
STT key propagated, "Loaded 3 secrets from .env.secrets" in output
- POST /api/v1/voice/transcribe with a real audio file →
full transcript back via gpu-stt.mana.how, end-to-end working
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
VictoriaMetrics + vmalert previously copied prometheus.yml/alerts.yml from
/mnt/prometheus-config/ into /etc/prometheus/ at container start. The copy
silently drifted from the host file whenever the container wasn't restarted —
which is exactly what hid the matrix/element removal from status.mana.how
until 2026-04-08, when VM was still actively scraping the deleted targets
because its in-container config snapshot pre-dated the cleanup.
Now both containers mount ./docker/prometheus directly into /etc/prometheus
(resp. /etc/alerts) read-only and point the binary at it, and deploy.sh
issues POST /-/reload to both after each deploy so config edits go live
without a container recreate.
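What deploy.sh now sends after each deploy, roughly (ports shown are the VictoriaMetrics/vmalert defaults, assuming they're reachable from where deploy.sh runs):

    curl -fsS -X POST http://localhost:8428/-/reload   # victoriametrics: re-read the mounted prometheus.yml
    curl -fsS -X POST http://localhost:8880/-/reload   # vmalert: re-read the mounted alerts.yml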
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a 13-step integration test that exercises register → email
verification → login → JWT validation → /me/data → encryption-vault
init/key → logout against a real stack of postgres + redis + mailpit +
mana-auth + mana-notify in docker compose.
Verified locally that this catches every regression we hit on
2026-04-08 in well under a second:
- missing nanoid dependency → register endpoint 500
- missing MANA_AUTH_KEK env passthrough → mana-auth never starts
- missing encryption-vault SQL migrations → vault endpoints 500
- wrong cookie name in /api/v1/auth/login → no accessToken in response
- mana-notify SMTP misconfigured → mailpit poll times out
Files:
- docker-compose.test.yml — minimal isolated stack on alt ports
(postgres 5443, redis 6390, mailpit 1026/8026, mana-auth 3091,
mana-notify 3092). Runs alongside the dev stack without collision.
Postgres healthcheck runs a real query rather than just pg_isready
to avoid the race where pg_isready reports healthy while the docker
init scripts are still running on a unix socket.
- tests/integration/auth-flow.test.ts — bun test that drives the full
flow via fetch + mailpit's REST API. Cleans up its test user from
postgres in afterAll. Self-contained, no extra deps.
- tests/integration/README.md — what's covered, why it exists, how
to run locally + extend.
- scripts/run-integration-tests.sh — orchestrator. Brings up the
stack, pushes the @mana/auth Drizzle schema, applies the
encryption-vault SQL migrations (002, 003), restarts mana-auth so
it sees the fresh tables, runs the test, tears down on exit.
KEEP_STACK=1 to leave it up for manual mailpit inspection.
- docker-compose.dev.yml — also adds Mailpit as a regular dev service
(ports 1025/8025) so local development can have a working email
capture without spinning up the test stack.
- .github/workflows/ci.yml — new auth-integration job that runs on
every PR. Calls run-integration-tests.sh; on failure dumps
mana-auth + mana-notify logs and the mailpit message queue. Marked
as a required check via the existing PR validation pipeline.
Reproduced 3 clean runs and 1 negative-control run (removed nanoid
from package.json → mana-auth container exits → script aborts with
non-zero) before committing. Full happy path runs in ~22s on a warm
Docker cache.
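Typical local loop (flag and port from the files above):

    KEEP_STACK=1 ./scripts/run-integration-tests.sh   # leave the alt-port stack running afterwards
    open http://localhost:8026                        # mailpit UI, to eyeball the captured verification mail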
The /api/v1/voice/parse-task and /api/v1/voice/parse-habit endpoints
forwarded transcripts to mana-llm without an X-API-Key header. This
worked against the local mana-llm container (no auth) but silently
fell back to the no-LLM path when pointed at gpu-llm.mana.how, which
requires an API key — voice quick-add would look like it was running
in degraded mode forever with no signal that auth was the cause.
Now both endpoints read MANA_LLM_API_KEY from the server-side env and
attach it as X-API-Key when present, mirroring the pattern already
used by /api/v1/voice/transcribe for mana-stt. When the var is empty
the header is omitted, so local Docker setups without auth still work.
Plumbing: generate-env.mjs writes MANA_LLM_URL + MANA_LLM_API_KEY into
apps/mana/apps/web/.env, .env.development gets the new keys with empty
defaults, ENVIRONMENT_VARIABLES.md documents the gateway and where to
get a key.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.
═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
of api.signInEmail
═══════════════════════════════════════════════════════════════════════
Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
Verified end-to-end on auth.mana.how:
    $ curl -X POST https://auth.mana.how/api/v1/auth/login \
        -d '{"email":"...","password":"..."}'
    {
      "user": {...},
      "token": "<session token>",
      "accessToken": "eyJhbGciOiJFZERTQSI...",   ← real JWT now
      "refreshToken": "<session token>"
    }
Side benefits:
- Email-not-verified path is now handled by checking
signInResponse.status === 403 directly, no more catching APIError
with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
(network errors etc); the FORBIDDEN-checking logic in it is dead but
harmless and left in for defense in depth.
═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════
The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.
Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
notify-client, the i18n bundle, the observatory map, the credits
app-label list, the landing footer/apps page, the prometheus + alerts
+ promtail tier mappings, and the matrix-related deploy paths in
cd-macmini.yml + ci.yml
Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them; drop
them in a follow-up migration if you want them gone for real.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Mac Mini hasn't run mana-llm/stt/tts/image-gen for a while — those
services live on the Windows GPU server now. The Mac-targeted
installers, plists, and platform-checking setup scripts have been
sitting in the repo as cargo-cult leftovers, suggesting Mac Mini deployment
is still a real option. It isn't.
Removed (Mac-Mini deployment infrastructure):
services/mana-stt/
- com.mana.mana-stt.plist (LaunchAgent)
- com.mana.vllm-voxtral.plist (LaunchAgent for the abandoned local Voxtral experiment)
- install-service.sh (single-service launchd installer)
- install-services.sh (mana-stt + vllm-voxtral installer)
- setup.sh (Mac arm64 installer)
- scripts/setup-vllm.sh (vLLM-Voxtral setup)
- scripts/start-vllm-voxtral.sh
services/mana-tts/
- com.mana.mana-tts.plist
- install-service.sh
- setup.sh (Mac arm64 installer)
scripts/mac-mini/
- setup-image-gen.sh (Mac flux2.c launchd installer)
- setup-stt.sh
- setup-tts.sh
- launchd/com.mana.image-gen.plist
- launchd/com.mana.mana-stt.plist
- launchd/com.mana.mana-tts.plist
setup-tts-bot.sh stays — it's the Matrix TTS bot installer (Synapse
side), not the mana-tts service.
Updated:
- services/mana-stt/CLAUDE.md, README.md — fully rewritten for the
Windows GPU reality (CUDA WhisperX, Scheduled Task ManaSTT, .env keys
matching the actual production .env on the box)
- services/mana-tts/CLAUDE.md, README.md — same treatment, documenting
Kokoro/Piper/F5-TTS on the Windows GPU under Scheduled Task ManaTTS
- scripts/mac-mini/README.md — dropped the STT setup section, replaced
with a pointer to docs/WINDOWS_GPU_SERVER_SETUP.md and the per-service
CLAUDE.md files
- docs/MAC_MINI_SERVER.md — expanded the "deactivated launchagents"
list to mention the now-removed plists, added the full GPU service
port table with public URLs, added a cleanup snippet for any old plists
still installed on a Mac Mini somewhere
Root file cleanup:
- mac-mini-setup.sh → scripts/mac-mini/bootstrap.sh (first-time bootstrap
belongs next to the other mac-mini setup-* scripts)
- test-chat-auth.sh → scripts/test-chat-auth.sh (ad-hoc smoke test, no
reason to live in the repo root)
- cloudflared-config.yml stays in root on purpose — it's the single source
of truth read by scripts/mac-mini/setup-*.sh and scripts/check-status.sh.
Docs:
- docs/POSTMORTEM_2026-04-07.md → docs/postmortems/2026-04-07-memoro-deploy-prod-wipe.md
(creates the postmortems/ home for future entries; descriptive name)
- docs/future/MAIL_SERVER_MAC_MINI_TEMP.md deleted — what it described
("Bereit zur Umsetzung", Stalwart on Mac Mini) is what's actually
running today, documented in docs/MAIL_SERVER.md. The DEDICATED variant
in docs/future/ remains since it's still a real future plan.
Root CLAUDE.md fix:
- @mana/local-store description was wrong — claimed it was legacy/standalone
only, but it's still used by apps/mana/apps/web itself, plus manavoxel,
arcade, and three shared packages.
Not touched (flagged for follow-up):
- NewAppIdeas/ (344K of "Roblox Reimagined" planning notes in repo root) —
user decision: archive externally or move under docs/future/
- Doc giants (PROJECT_OVERVIEW 41k, MATRIX_BOT_ARCHITECTURE 36k, etc.) —
splitting them is its own refactor
- Service CLAUDE.md staleness audit across 18 services — too broad for
this pass
Removed:
- apps/manacore/ — three Svelte files were byte-identical duplicates of
the apps/mana/ versions, leftover from the 2025 rename. Untracked .env
files in the same dir were also cleared.
- 21 empty apps/*/apps/web-archived/ directories — leftover from the
unification move, never tracked in git.
- services/it-landing/ — empty directory, picked up by the services/*
workspace glob for no reason.
- apps/news/apps/server-archived/ — empty.
Fixed:
- scripts/mac-mini/status.sh: COMPOSE_PROJECT_NAME fallback was still
manacore-monorepo from before the rename.
Documented:
- Root CLAUDE.md now describes apps/api/ (the @mana/api unified backend)
as a top-level peer to apps/mana/. It was completely missing from the
trimmed CLAUDE.md, which made the layout look frontend-only.
Two failures during the 2026-04-07 production outage triage were caused
not by the underlying outage but by `status.sh` and `health-check.sh`
hiding the broken state. Both scripts are hardened so the same outage
shape can't recur invisibly.
status.sh — compose-vs-running diff
The old script printed "X containers running / Y total" without
noticing that some compose-defined containers were never started in
the first place. The Mac Mini was running 37 of 42 declared
containers and the script reported "37 running" with no indication
of the gap — `mana-core-sync` and `mana-api-gateway` were silently
missing for hours.
New behaviour: read every service from `docker compose config`,
diff its `container_name` against `docker ps`, and report each
declared service whose container is not currently up. The same
outage state would have been flagged on the very first run.
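The diff in essence (compose file name illustrative; this sketch uses compose v2's JSON output plus jq, the script itself may parse differently):

    docker compose -f docker-compose.macmini.yml config --format json \
      | jq -r '.services | to_entries[] | .value.container_name // .key' \
      | while read -r cname; do
          docker ps --format '{{.Names}}' | grep -qx "$cname" \
            || echo "declared but not running: $cname"
        done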
health-check.sh — public-hostname walk via Cloudflare DNS
The old script probed ~50 hardcoded `localhost:<port>/health`
endpoints across Chat, Todo, Calendar, etc. — but the per-app
HTTP backends those endpoints expected don't exist anymore (the
ghost-API cleanup removed them entirely). Every probe returned
HTTP 000 / connection refused, generating a wall of false-positive
alerts that drowned out the real signal.
The block was replaced with a dynamic walk of every `hostname:`
entry in `~/.cloudflared/config.yml`. Each hostname is probed via
the public Cloudflare tunnel, so DNS gaps, missing tunnel routes,
502/530 origin failures and timeouts surface as failures the same
way real users would experience them. On its first run after the
cleanup it surfaced eighteen previously-invisible hostname failures
(no DNS, 502, or 530) — every one of them a real production issue.
DNS resolution intentionally goes through `dig +short HOST @1.1.1.1`
instead of the local resolver. The Mac Mini's home-router DNS keeps
a negative cache for hours after the first failed lookup, so newly
added CNAMEs (like the post-outage sync/media records) appeared as
"no response" from inside the script for hours even though external
users saw them resolve immediately. Asking Cloudflare's DNS directly
gives the script the same view the public internet has.
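Per hostname, the walk boils down to this (config path from this message; output formatting simplified):

    awk '/hostname:/ { print $NF }' ~/.cloudflared/config.yml | sort -u | while read -r host; do
      ip=$(dig +short "$host" @1.1.1.1 | head -n1)
      code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 15 "https://$host/")
      printf '%-35s dns=%-16s http=%s\n' "$host" "${ip:-none}" "$code"
    done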
The Matrix, Element, GPU-LAN-redundant and monitoring port-by-port
blocks were removed — the public-hostname walk covers all of them
via their `*.mana.how` hostnames going through the actual tunnel.
The "stuck container" detector now ignores `*-init` containers
(one-shot init pods, Exit 0 = success, intentionally never re-run).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds end-to-end browser voice capture for the Memoro module, mirroring the
existing dreams pattern: MediaRecorder → SvelteKit server proxy → mana-stt
on the Windows GPU box via Cloudflare tunnel.
Recording UI lives in /memoro page header (mic button + live timer + cancel +
sticky-permission retry). Server proxy at /api/v1/memoro/transcribe forwards
the blob with the server-held X-API-Key. memosStore.createFromVoice creates a
placeholder memo with processingStatus='processing' and fires transcribeBlob
in the background, which writes the transcript and flips status on completion
(or 'failed' with error in metadata).
Also corrects the mana-stt hostname across the repo: stt-api.mana.how (which
never existed in DNS) → gpu-stt.mana.how (the actual Cloudflare tunnel route
to the Windows GPU box). Adds an ENVIRONMENT_VARIABLES.md section explaining
how to obtain MANA_STT_API_KEY and where the tunnel terminates. Adds tunnel
health probes to the mac-mini health-check script so we catch tunnel-side
breakage in addition to LAN-side.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a one-tap voice recorder at the top of the Dreams module. Speak
your dream right after waking, the audio is sent through a server-side
proxy to mana-stt, and the transcript appears in the entry as soon as
it lands.
- New /api/v1/dreams/transcribe SvelteKit server route proxies the
upload to mana-stt with the server-held MANA_STT_API_KEY (never
exposed to the browser); validates mime, size, missing config
- Adds MANA_STT_URL + MANA_STT_API_KEY to the mana-web env config in
generate-env.mjs (private, not PUBLIC_ prefixed)
- New DreamRecorder class wraps MediaRecorder with reactive
$state — status, elapsed timer, error; supports cancel
- dreamsStore.createFromVoice creates a placeholder dream with
processingStatus='transcribing' and kicks off the upload
- dreamsStore.transcribeBlob uploads, writes the result back into
the dream, falls back to processingStatus='failed' on errors
- Adds processingStatus + processingError + audioDurationMs to
LocalDream; backwards-compatible defaults in toDream
- Mic button in ListView with idle / requesting / recording
(with elapsed timer + pulsing red) / stopping states
- Cancel button discards the in-flight recording
- Transcribing badge ●●● + failed ! badge on dream rows
- Inline editor shows live transcription status; while it's running
and the user hasn't typed anything, the transcript folds into the
edit buffer as soon as it arrives
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New Hono+Bun service at services/mana-events on port 3065 with two
schemas in mana_platform: events_published (snapshots) and public_rsvps
(unauthenticated responses), plus a per-token hourly rate-limit bucket.
- Host endpoints (JWT) for publish/update/unpublish/list-rsvps
- Public endpoints for snapshot fetch + RSVP upsert with rate limiting
- New /rsvp/[token] page outside the auth gate, SSR-loads the snapshot
- Client store wires publishEvent/unpublishEvent to the server, syncs
snapshot updates after edits, and deletes the snapshot on event delete
- DetailView polls GET /events/:id/rsvps every 30s while open and lets
hosts import a public response into their local guest list
- generate-env, setup-databases.sh, .env.development, hooks.server.ts,
package.json wired for local dev
The previous startup.sh checked colima status via `colima status | grep running`
and, if that failed, ran `colima stop --force` unconditionally before starting.
This is destructive: a transient status mis-detection can kill a healthy running
VM, and the subsequent start often hangs because of leftover locks/processes.
Triggered today during the ManaCore→Mana rename: reloading the docker-startup
LaunchAgent ran the script, which falsely concluded colima was down, killed the
running VM, and left 12 zombie limactl processes plus a stale disk lock symlink.
The whole production stack (incl. Forgejo) was offline until manual cleanup.
Changes:
- Use `docker info` as the readiness check instead of `colima status` —
it directly tests the thing we care about (docker socket reachable)
- Only do cleanup work when we actually need to start; never SIGKILL a
running VM as a "precaution"
- When we do need to start: reap any zombie limactl/colima processes from
prior failed runs, and clear the stale disk-in-use lock if no process
actually holds it
- Verify successful start with `docker info`, not `colima status`
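The resulting flow, condensed (zombie-reaping and lock handling simplified; the real script also clears the stale disk lock):

    if docker info >/dev/null 2>&1; then
      echo "docker already reachable, nothing to do"; exit 0
    fi
    # only reached when docker is actually down: safe to reap leftovers from a prior failed run
    pkill -f '^limactl' 2>/dev/null || true
    colima start
    docker info >/dev/null 2>&1 || { echo "colima started but docker is still unreachable" >&2; exit 1; }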
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All web app subdomains (chat.mana.how, todo.mana.how, etc.) were removed
when the unified app launched, but monitoring configs still referenced them.
Update blackbox targets to use mana.how/route URLs, remove stale API backend
routes from cloudflared, clean up CORS origins, and fix status page generator
to handle route-based URLs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove standalone app Umami website IDs from .env.development and
generate-env.mjs. Remove injectUmamiAnalytics from all 21 standalone
app hooks.server.ts files. All analytics now flow through the single
ManaCore unified app website ID with module-level segmentation.
Landing page IDs are preserved (separate Astro sites).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mirrors the frontend unification (single IndexedDB) on the backend.
All services now use pgSchema() for isolation within one shared database,
enabling cross-schema JOINs, simplified ops, and zero DB setup for new apps.
- Migrate 7 services from pgTable() to pgSchema(): mana-user (usr),
mana-media (media), todo, traces, presi, uload, cards
- Update all DATABASE_URLs in .env.development, docker-compose, configs
- Rewrite init-db scripts for 2 databases + 12 schemas
- Rewrite setup-databases.sh for consolidated architecture
- Update shared-drizzle-config default to mana_platform
- Update CLAUDE.md with new database architecture docs
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New GPU service for fast text-to-video generation using LTX-Video (~2B params)
on the RTX 3090. Generates 480p clips in 10-30 seconds, uses ~10GB VRAM.
Includes Cloudflare Tunnel route, Prometheus monitoring, and health checks.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add ManaCore as first entry in MANA_APPS so the dashboard at mana.how
gets a tier badge. Map mana.how → manacore and inventar → inventory
in subdomain aliases.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add Mukke, Photos, Planta, SkillTree, Playground, Arcade to mana-apps.ts
with icons and APP_URLS. Fix manadeck→cards subdomain alias in status
page generator so the tier badge renders for the renamed app.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move release tier info (founder/alpha/beta/public) from a standalone
grid section into the existing service rows as small inline badges
next to each web app name. Cleaner, less visual noise.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parse tier data automatically from mana-apps.ts (awk, read-only volume
mount) so the status page stays in sync without manual updates. Shows
founder/alpha/beta/public cards with per-app development status.
Tier data is also included in status.json for ManaScore consumption.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- generate-status-page.sh now also writes status.json alongside index.html
Format: { updated, summary: {up, total}, services: { appName: bool } }
- nginx status.mana.how serves status.json with CORS headers (public read)
and explicit location block to avoid rewrite to index.html
- ManaScore index page fetches status.json client-side on load and
injects green ● LIVE / red ● DOWN badge next to each app's status chip
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace @arcade/backend (NestJS) with @arcade/server (Hono/Bun).
Same two endpoints plus a health check, no auth required (public game generator):
- POST /api/games/generate — AI game generation (Gemini, Claude, GPT)
- POST /api/games/submit — Community game submission via GitHub PR
- GET /health — Health check
This removes the last remaining NestJS backend from the monorepo.
NestJS is now completely gone — all servers use Hono + Bun.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- inventar-web: fix mangled icon import in settings page
- skilltree-web: create missing lib/services/storage.ts for export/import
- startup.sh: add umami/synapse DB creation + synapse user setup with C locale
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevents the internal SSD from filling up if the external SSD is not
mounted or if `colima delete` wiped the datadisk symlink.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds 4 new Tier 3 metrics to the ecosystem health audit script:
- Git Activity: % of apps with commits in the last 30 days (97%)
- A11y Indicators: alt-text coverage, role=dialog, focusTrap (36%)
- Auth Guard Coverage: AuthGate/authGuard presence per app (83%)
- Docker Readiness: Dockerfile present per app (80%)
Overall score updated from 74 → 72 (23 metrics, 135 total weight).
Dashboard at /manascore/ecosystem updated with new category rows.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
5 new metrics:
- Toast Consistency (100%) — all apps use shared toastStore
- Store Pattern (95%) — 176 Runes stores vs 9 old writable/readable
- Shared Types (62%) — shared-types imports vs local type files
- Dep Freshness (80%) — avg 37 deps per app
- Bundle Config (100%) — all apps have SvelteKit adapter
Ecosystem Health Score: 74/100 (19 metrics total)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>