Hit "container name already in use" / "removal in progress" errors
three times during today's Phase 5 deploys. The previous restart
pattern was just `compose up -d --no-deps`, which fails when:
1. A previously interrupted recreate left a stale container under
the canonical name. The new `up` tries to claim the name and
gets a conflict.
2. Compose's recovery from #1 sometimes creates a hash-prefixed
orphan container (`<hash>_<container_name>`), which then
blocks the next clean run too.
3. Even `--force-recreate` can't always handle the case because
the old container is in the middle of being removed when the
new one is being created (race).
Three-step replacement that's reliable across all three failure modes:
Step 1 — `docker compose rm -fs SERVICES`
Stops + force-removes the canonical compose-managed container.
Idempotent: does nothing if already gone. Filters out the
"No stopped containers" log noise so the output stays clean.
Step 2 — orphan sweep via `docker rm -f`
For each service, look up its container_name from the
compose config (falls back to the service name if not set),
then `docker ps -aq --filter name=^${cname}$` for the canonical
one and `name=_${cname}$` for hash-prefixed orphans. Anything
found gets nuked. This catches the case where compose's own
state has lost track of an orphan it created earlier.
Step 3 — `docker compose up -d --no-deps --remove-orphans`
Creates the fresh container. The `--remove-orphans` flag also
silences the "Found orphan containers ([mana-game-whopixels])"
warning we kept seeing — that's a leftover from a removed
service that nobody had cleaned up.
The container_name extraction uses awk on `compose config` output
(verified locally: `mana-web` → `mana-app-web`) so the script doesn't
need a hard-coded service→container mapping.
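Condensed, the per-service flow looks roughly like this (names illustrative; the awk
parse assumes the two-space service indentation that `compose config` emits):

  service="mana-web"
  cname=$(docker compose config | awk -v svc="  ${service}:" '
    $0 == svc                      {hit=1; next}
    hit && /^  [^ ]/               {exit}                # reached the next service block
    hit && $1 == "container_name:" {print $2; exit}
  ')
  cname=${cname:-$service}

  # step 1: stop + force-remove the canonical compose-managed container
  docker compose rm -fs "$service" 2>&1 | grep -v "No stopped containers" || true

  # step 2: sweep the canonical name plus any hash-prefixed orphan compose lost track of
  ids=$(docker ps -aq --filter "name=^${cname}$"; docker ps -aq --filter "name=_${cname}$")
  [ -n "$ids" ] && docker rm -f $ids

  # step 3: fresh container
  docker compose up -d --no-deps --remove-orphans "$service"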
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two improvements to scripts/mac-mini/rebuild-tunnel.sh based on what
the first prod run actually surfaced.
═══ 1. Apex domain auto-fix via Cloudflare API ═══
`cloudflared tunnel route dns` cannot route the apex of a zone
(error code 1003: "An A, AAAA, or CNAME record with that host already
exists"). The CLI has no command to delete those records. The first
rebuild left mana.how returning 530 because the script silently
failed to route it and we had to fix the apex manually in the
dashboard.
The new `apex_route_via_api()` helper:
- Detects apex hostnames by dot count (one dot → two-label name)
- Uses $CLOUDFLARE_API_TOKEN if available
- Resolves the zone id by name
- Deletes any existing A / AAAA / CNAME records on the apex
- Creates a fresh proxied CNAME pointing at <tunnel>.cfargotunnel.com
- Cloudflare's CNAME flattening at the apex makes this work
transparently
If $CLOUDFLARE_API_TOKEN is not set, the script logs a warning at the
top of step 6 and falls back to the old behavior (route fails, user
fixes the apex manually). The token needs Zone:DNS:Edit on the
target zone.
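The helper boils down to the standard zones / dns_records API calls (curl + jq; zone
name and tunnel id hardcoded here for readability, the real helper derives both):

  api="https://api.cloudflare.com/client/v4"
  hdr=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -H "Content-Type: application/json")

  zone_id=$(curl -s "${hdr[@]}" "$api/zones?name=mana.how" | jq -r '.result[0].id')

  # clear whatever currently occupies the apex
  for t in A AAAA CNAME; do
    for rid in $(curl -s "${hdr[@]}" "$api/zones/$zone_id/dns_records?type=$t&name=mana.how" \
                   | jq -r '.result[].id'); do
      curl -s -X DELETE "${hdr[@]}" "$api/zones/$zone_id/dns_records/$rid" >/dev/null
    done
  done

  # proxied CNAME to the tunnel; Cloudflare's apex flattening does the rest
  curl -s -X POST "${hdr[@]}" "$api/zones/$zone_id/dns_records" --data "{
    \"type\": \"CNAME\", \"name\": \"mana.how\",
    \"content\": \"${TUNNEL_ID}.cfargotunnel.com\", \"proxied\": true
  }"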
═══ 2. Smarter HTTP verification ═══
The first run reported "5 hosts down (404/000)" but those were all
backend services without a root handler — credits/media/llm/mana-api
all return 404 at `/` and 200 at `/health`. The verify pass flagged
healthy services as down and made the rebuild look more broken than
it was.
New `probe_host()` tries `/health` first, falls back to `/` only if
/health returned 4xx, and prefers a 2xx/3xx root response over a 4xx
/health. `probe_is_down()` only counts 5xx and 000 (libcurl error)
as failures — anything in 1xx-4xx means the request reached the
origin and the tunnel routing is correct, which is the actual thing
the verify pass cares about. `probe_label()` adds a one-word health
summary so the verify log reads "200 ok" / "401 auth required" /
"404 routed (no handler)" / "530 tunnel error" instead of just bare
status codes.
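The two probes, condensed (curl only; timeouts shortened for readability):

  probe_host() {
    local host="$1" health root
    health=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$host/health")
    case "$health" in 4*) ;; *) echo "$health"; return ;; esac   # fall back to / only on 4xx
    root=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$host/")
    case "$root" in 2*|3*) echo "$root" ;; *) echo "$health" ;; esac
  }

  probe_is_down() {   # only 5xx and 000 (libcurl error) count as down
    case "$1" in 5*|000) return 0 ;; *) return 1 ;; esac
  }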
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two related AI-infrastructure hardenings landing together because both
touch the same nutriphi/planta route definitions:
═══ 1. Wire-format schema versioning ═══
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks a single envelope dialect:
{ schemaVersion: '1', data: <validated object> }
Backend wraps via a small `envelope()` helper in each module's routes.ts;
frontend api.ts unwraps via `unwrapEnvelope<T>()` which throws an
AiSchemaVersionMismatchError if the server returns a version this
client wasn't compiled against.
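At the wire level the contract is easy to spot-check, e.g. against a saved response
(file name hypothetical, shown only to illustrate the shape):

  jq -e '.schemaVersion == "1"' response.json >/dev/null \
    || echo "AI schema version mismatch (stale client build?)"
  jq '.data' response.json    # the validated object the caller actually consumes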
Why this matters before launch:
- Catches stale-cache scenarios immediately ("client v1 talking to
server v2") with an actionable error in the network panel, not a
cascade of "field is undefined" bugs further down the stack
- Forces explicit version bumps when we make non-additive schema
changes — the bump rules are documented inline next to the constant
- Cheap to remove if it ever feels overkill: drop the envelope() call
on the backend and the unwrapEnvelope on the frontend, ~10 lines
═══ 2. Anthropic prompt-caching directive (forward-compat) ═══
Adds `providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } }`
on the system message in nutriphi + planta routes via a SYSTEM_CACHE_HINT
constant. This is a NO-OP today because:
- mana-llm currently routes to Gemini, not Claude
- Our system prompts are ~50 tokens, well under Anthropic's 1024-token
cache minimum
Kept anyway because it's ~5 lines per route and lights up automatically
when either condition flips (e.g. when we add per-user dietary preferences
as system context, pushing prompts past the threshold). The day we point
mana-llm at Claude Sonnet, every existing call site already has caching
enabled — no scavenger hunt through the routes.
System messages had to migrate from the `system:` shorthand to a full
messages[] entry to attach providerOptions, which is a tiny readability
loss but the only way to get per-message metadata into the AI SDK.
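For reference, this is roughly what the hint turns into at the provider level once
mana-llm talks to Claude directly: a cache_control block on the system content of
Anthropic's Messages API (sketch only, not our code; model id illustrative):

  curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d '{
      "model": "claude-sonnet-4-5",
      "max_tokens": 1024,
      "system": [
        { "type": "text",
          "text": "<system prompt, >1024 tokens once dietary prefs are included>",
          "cache_control": { "type": "ephemeral" } }
      ],
      "messages": [ { "role": "user", "content": "Analyze this meal: ..." } ]
    }'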
═══ Tests ═══
13 new cases in apps/mana/apps/web/.../nutriphi/ai-schemas.test.ts cover:
- AI_SCHEMA_VERSION presence + AiSchemaVersionMismatchError shape
- MealAnalysisSchema acceptance/rejection (confidence bounds, missing
nutrients, optional food fields, default empty arrays)
- PlantIdentificationSchema (every-field-optional design, defaults,
confidence range)
(Test file lives in the web app rather than packages/shared-types
because the latter has no test runner configured — adding vitest there
just for these would be overkill.)
Total nutriphi + planta suite: 62/62 passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reconciles the in-repo cloudflared-config.yml with the actually-loaded
ingress map on the Mac Mini production tunnel — the previous repo file
was missing 30+ hostnames (per-app subdomains, mana-api, sync, llm,
media, credits, subscriptions, etc.) because it was last updated
before the unified Mana web app rollout. Adds the new mana-api.mana.how
ingress for apps/api on port 3060 so the unified backend has a public
client URL for the SvelteKit web app's PUBLIC_MANA_API_URL_CLIENT.
Drops the dead matrix.mana.how / element.mana.how routes — the matrix
subsystem was removed in 2514831a3 and those services no longer exist.
Adds scripts/mac-mini/sync-tunnel-config.sh — the one-command flow for
shipping a tunnel-config change: pull on the server, validate the
yaml, kickstart cloudflared via launchctl. setup-cloudflared-service.sh
already wires the launchd plist with --config <repo-path> pointing at
this file, so a fresh Mac Mini install + setup script + sync script
gives you a fully reproducible tunnel.
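The sync flow is essentially three commands (repo path and launchd label shown here
are assumptions; the script resolves both itself):

  git -C ~/mana pull --ff-only
  cloudflared --config ~/mana/cloudflared-config.yml tunnel ingress validate
  launchctl kickstart -k "gui/$(id -u)/com.cloudflare.cloudflared"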
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the services/news-ingester Bun service that pulls 25 public RSS/JSON
feeds into news.curated_articles every 15 min, with Mozilla Readability
fallback for thin RSS bodies and 30-day retention. apps/api /feed is
rewritten to read from the new pool table directly instead of the
sync_changes hack, with topics/lang/since/limit/offset query params.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
NOTE: the previous commit 048184bef carried this commit message but
accidentally bundled an unrelated PickerOverlay refactor instead of
this script change (lint-staged stash interaction). This is the
actual fix.
Per-app web Dockerfiles do `FROM sveltekit-base:local` and do NOT
re-COPY packages/shared-* — those packages are baked into the base
image. So a change to packages/shared-utils, packages/shared-ui, etc.
only reaches the live web app if the base image is also rebuilt.
This bit us THREE times on 2026-04-08 alone:
1. CSP fix in shared-utils ('wasm-unsafe-eval') sat unused in
production for over an hour because every `build-app.sh mana-web`
reused the cached base layer with old shared-utils.
2. The BaseListView export in shared-ui after the ListView
consolidation refactor — mana-web's build failed because Rollup
couldn't resolve the new symbol from the stale base.
3. Same shape, different package, repeatedly during the Gemma 4
migration push.
The pattern is identical every time and the manual workaround
(`build-app.sh --base` first) is something you only think to run if
you already know how the layering works. Make the script catch it.
New `is_base_image_stale` helper compares the base image's `Created`
timestamp against the latest git commit touching paths the base image
actually depends on (packages/, docker/Dockerfile.sveltekit-base,
pnpm-lock.yaml). When building any *-web service, if the image is
stale or missing, the base is rebuilt automatically before the
per-app build kicks off, with the triggering commit's oneline
printed for transparency.
Date parsing handles macOS Docker's local-TZ-offset RFC3339 format
(`...+02:00`, not Z). We strip from char 19 onward and parse the
literal local clock time with BSD date (no -u). GNU date is the
fallback for Linux dev boxes. If parsing fails for any reason we
conservatively force a rebuild rather than risk shipping stale code.
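Condensed version of the check (image tag and dependency paths per the commit; date
handling simplified):

  is_base_image_stale() {
    local img="sveltekit-base:local" created img_epoch commit_epoch
    created=$(docker image inspect -f '{{.Created}}' "$img" 2>/dev/null) || return 0  # missing → stale
    created=${created:0:19}                       # drop fractional seconds + TZ offset
    img_epoch=$(date -j -f '%Y-%m-%dT%H:%M:%S' "$created" +%s 2>/dev/null \
                || date -d "$created" +%s 2>/dev/null) || return 0   # unparseable → rebuild
    commit_epoch=$(git log -1 --format=%ct -- packages/ docker/Dockerfile.sveltekit-base pnpm-lock.yaml)
    [ "$commit_epoch" -gt "$img_epoch" ]
  }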
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two unrelated bugs in scripts/mac-mini/ensure-containers-running.sh,
both caught while debugging a mana-auth crash loop on 2026-04-08:
1. The recovery path passed --env-file "$PROJECT_ROOT/.env.macmini" to
docker compose, but that file has never existed on the server — only
.env does, and compose auto-loads it from the working directory. The
explicit --env-file silently caused recovered containers to start with
empty secrets (e.g. blank MANA_AUTH_KEK), which made mana-auth crash
the moment it came back up. The auto-recovery loop was therefore
self-defeating: it kept "fixing" auth into the same broken state
every 5 minutes for hours, with no notification because compose
exited 0. Drop --env-file entirely and cd into PROJECT_ROOT so
compose's standard .env discovery applies.
2. mana-infra-minio-init is a one-shot job container that legitimately
sits in "exited" state after running once. The script flagged it as
"stuck" every cycle, tried to "recover" it, and spammed the log with
ERROR lines. Add an explicit ONESHOT_INIT_CONTAINERS allowlist and
skip those names in both the initial scan and the post-recovery
verification.
Also tee compose output into the log so future failures actually leave
a breadcrumb instead of disappearing into the void.
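Net shape of the recovery path after both fixes (variable names illustrative):

  ONESHOT_INIT_CONTAINERS=("mana-infra-minio-init")

  is_oneshot() {
    local c
    for c in "${ONESHOT_INIT_CONTAINERS[@]}"; do [ "$1" = "$c" ] && return 0; done
    return 1
  }

  recover_service() {
    is_oneshot "$1" && return 0    # exited one-shot jobs are healthy; skip them
    # no --env-file: run from PROJECT_ROOT so compose's standard .env discovery applies
    ( cd "$PROJECT_ROOT" && docker compose up -d --no-deps "$1" ) 2>&1 | tee -a "$LOG_FILE"
  }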
Also: bump @mlc-ai/web-llm from a transitive dep (via @mana/local-llm)
to a direct dep of @mana/web. SvelteKit's adapter-node post-build
Rollup pass uses the web app's direct deps as its externals heuristic;
without this entry it warns "@mlc-ai/web-llm ... could not be resolved
- treating it as an external dependency" on every build. Functionally
harmless (the dynamic import in LocalLLMEngine only fires in the
browser), but the warning hid a real adapter-node misconfiguration
that would have bitten us if we'd ever tried to SSR /llm-test.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
VictoriaMetrics + vmalert previously copied prometheus.yml/alerts.yml from
/mnt/prometheus-config/ into /etc/prometheus/ at container start. The copy
silently drifted from the host file whenever the container wasn't restarted —
which is exactly what hid the matrix/element removal from status.mana.how
until 2026-04-08, when VM was still actively scraping the deleted targets
because its in-container config snapshot pre-dated the cleanup.
Now both containers mount ./docker/prometheus directly into /etc/prometheus
(resp. /etc/alerts) read-only and point the binary at it, and deploy.sh
issues POST /-/reload to both after each deploy so config edits go live
without a container recreate.
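The reload step in deploy.sh is two curls (ports shown are the upstream defaults;
adjust to whatever the compose file publishes):

  curl -fsS -X POST http://localhost:8428/-/reload   # VictoriaMetrics re-reads prometheus.yml
  curl -fsS -X POST http://localhost:8880/-/reload   # vmalert re-reads alerts.yml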
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.
═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
of api.signInEmail
═══════════════════════════════════════════════════════════════════════
Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
Verified end-to-end on auth.mana.how:
$ curl -X POST https://auth.mana.how/api/v1/auth/login \
-d '{"email":"...","password":"..."}'
{
"user": {...},
"token": "<session token>",
"accessToken": "eyJhbGciOiJFZERTQSI...", ← real JWT now
"refreshToken": "<session token>"
}
Side benefits:
- Email-not-verified path is now handled by checking
signInResponse.status === 403 directly, no more catching APIError
with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
(network errors etc); the FORBIDDEN-checking logic in it is dead but
harmless and left in for defense in depth.
═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════
The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.
Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
notify-client, the i18n bundle, the observatory map, the credits
app-label list, the landing footer/apps page, the prometheus + alerts
+ promtail tier mappings, and the matrix-related deploy paths in
cd-macmini.yml + ci.yml
Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them;
drop them in a follow-up migration if you want them gone for real.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Mac Mini hasn't run mana-llm/stt/tts/image-gen for a while — those
services live on the Windows GPU server now. The Mac-targeted
installers, plists, and platform-checking setup scripts have been
sitting in the repo as cargo-cult, suggesting Mac Mini deployment is
still a real option. It isn't.
Removed (Mac-Mini deployment infrastructure):
services/mana-stt/
- com.mana.mana-stt.plist (LaunchAgent)
- com.mana.vllm-voxtral.plist (LaunchAgent for the abandoned local Voxtral experiment)
- install-service.sh (single-service launchd installer)
- install-services.sh (mana-stt + vllm-voxtral installer)
- setup.sh (Mac arm64 installer)
- scripts/setup-vllm.sh (vLLM-Voxtral setup)
- scripts/start-vllm-voxtral.sh
services/mana-tts/
- com.mana.mana-tts.plist
- install-service.sh
- setup.sh (Mac arm64 installer)
scripts/mac-mini/
- setup-image-gen.sh (Mac flux2.c launchd installer)
- setup-stt.sh
- setup-tts.sh
- launchd/com.mana.image-gen.plist
- launchd/com.mana.mana-stt.plist
- launchd/com.mana.mana-tts.plist
setup-tts-bot.sh stays — it's the Matrix TTS bot installer (Synapse
side), not the mana-tts service.
Updated:
- services/mana-stt/CLAUDE.md, README.md — fully rewritten for the
Windows GPU reality (CUDA WhisperX, Scheduled Task ManaSTT, .env keys
matching the actual production .env on the box)
- services/mana-tts/CLAUDE.md, README.md — same treatment, documenting
Kokoro/Piper/F5-TTS on the Windows GPU under Scheduled Task ManaTTS
- scripts/mac-mini/README.md — dropped the STT setup section, replaced
with a pointer to docs/WINDOWS_GPU_SERVER_SETUP.md and the per-service
CLAUDE.md files
- docs/MAC_MINI_SERVER.md — expanded the "deactivated launchagents"
list to mention the now-removed plists, added the full GPU service
port table with public URLs, added a cleanup snippet for any old plists
still installed on a Mac Mini somewhere
Root file cleanup:
- mac-mini-setup.sh → scripts/mac-mini/bootstrap.sh (first-time bootstrap
belongs next to the other mac-mini setup-* scripts)
- test-chat-auth.sh → scripts/test-chat-auth.sh (ad-hoc smoke test, no
reason to live in the repo root)
- cloudflared-config.yml stays in root on purpose — it's the single source
of truth read by scripts/mac-mini/setup-*.sh and scripts/check-status.sh.
Docs:
- docs/POSTMORTEM_2026-04-07.md → docs/postmortems/2026-04-07-memoro-deploy-prod-wipe.md
(creates the postmortems/ home for future entries; descriptive name)
- docs/future/MAIL_SERVER_MAC_MINI_TEMP.md deleted — what it described
("Bereit zur Umsetzung", Stalwart on Mac Mini) is what's actually
running today, documented in docs/MAIL_SERVER.md. The DEDICATED variant
in docs/future/ remains since it's still a real future plan.
Root CLAUDE.md fix:
- @mana/local-store description was wrong — claimed it was legacy/standalone
only, but it's still used by apps/mana/apps/web itself, plus manavoxel,
arcade, and three shared packages.
Not touched (flagged for follow-up):
- NewAppIdeas/ (344K of "Roblox Reimagined" planning notes in repo root) —
user decision: archive externally or move under docs/future/
- Doc giants (PROJECT_OVERVIEW 41k, MATRIX_BOT_ARCHITECTURE 36k, etc.) —
splitting them is its own refactor
- Service CLAUDE.md staleness audit across 18 services — too broad for
this pass
Removed:
- apps/manacore/ — three Svelte files were byte-identical duplicates of
the apps/mana/ versions, leftover from the 2025 rename. Untracked .env
files in the same dir were also cleared.
- 21 empty apps/*/apps/web-archived/ directories — leftover from the
unification move, never tracked in git.
- services/it-landing/ — empty directory, picked up by the services/*
workspace glob for no reason.
- apps/news/apps/server-archived/ — empty.
Fixed:
- scripts/mac-mini/status.sh: COMPOSE_PROJECT_NAME fallback was still
manacore-monorepo from before the rename.
Documented:
- Root CLAUDE.md now describes apps/api/ (the @mana/api unified backend)
as a top-level peer to apps/mana/. It was completely missing from the
trimmed CLAUDE.md, which made the layout look frontend-only.
Two failures during the 2026-04-07 production outage triage were caused
not by the underlying outage but by `status.sh` and `health-check.sh`
hiding the broken state. Both scripts are hardened so the same
outage shape can't recur invisibly.
status.sh — compose-vs-running diff
The old script printed "X containers running / Y total" without
noticing that some compose-defined containers were never started in
the first place. The Mac Mini was running 37 of 42 declared
containers and the script reported "37 running" with no indication
of the gap — `mana-core-sync` and `mana-api-gateway` were silently
missing for hours.
New behaviour: read every service from `docker compose config`,
diff its `container_name` against `docker ps`, and report each
declared service whose container is not currently up. The same
outage state would have been flagged on the very first run.
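The diff itself is small (sketch; services without an explicit container_name fall
back to compose's project-prefixed default name):

  declared=$(docker compose config --format json | jq -r '.services[] | .container_name // empty')
  running=$(docker ps --format '{{.Names}}')
  for c in $declared; do
    grep -qx "$c" <<<"$running" || echo "DOWN: $c (declared in compose, not running)"
  done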
health-check.sh — public-hostname walk via Cloudflare DNS
The old script probed ~50 hardcoded `localhost:<port>/health`
endpoints across Chat, Todo, Calendar, etc. — but the per-app
HTTP backends those endpoints expected don't exist anymore (the
ghost-API cleanup removed them entirely). Every probe returned
HTTP 000 / connection refused, generating a wall of false-positive
alerts that drowned out the real signal.
The block was replaced with a dynamic walk of every `hostname:`
entry in `~/.cloudflared/config.yml`. Each hostname is probed via
the public Cloudflare tunnel, so DNS gaps, missing tunnel routes,
502/530 origin failures and timeouts surface as failures the same
way real users would experience them. On its first run after the
cleanup it surfaced eighteen previously-invisible hostname failures
(no DNS, 502, or 530) — every one of them a real production issue.
DNS resolution intentionally goes through `dig +short HOST @1.1.1.1`
instead of the local resolver. The Mac Mini's home-router DNS keeps
a negative cache for hours after the first failed lookup, so newly
added CNAMEs (like the post-outage sync/media records) appeared as
"no response" from inside the script for hours even though external
users saw them resolve immediately. Asking Cloudflare's DNS directly
gives the script the same view the public internet has.
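Core of the walk (config path per the commit; failure criteria as described above):

  grep -E '^\s*-?\s*hostname:' ~/.cloudflared/config.yml | awk '{print $NF}' | sort -u |
  while read -r host; do
    if [ -z "$(dig +short "$host" @1.1.1.1)" ]; then
      echo "FAIL  $host  no DNS"; continue
    fi
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 15 "https://$host/" || true)
    case "$code" in
      5*|000) echo "FAIL  $host  $code" ;;
      *)      echo "ok    $host  $code" ;;   # any origin response proves the route works
    esac
  done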
The Matrix, Element, GPU-LAN-redundant and monitoring port-by-port
blocks were removed — the public-hostname walk covers all of them
via their `*.mana.how` hostnames going through the actual tunnel.
The "stuck container" detector now ignores `*-init` containers
(one-shot init pods, Exit 0 = success, intentionally never re-run).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds end-to-end browser voice capture for the Memoro module, mirroring the
existing dreams pattern: MediaRecorder → SvelteKit server proxy → mana-stt
on the Windows GPU box via Cloudflare tunnel.
Recording UI lives in /memoro page header (mic button + live timer + cancel +
sticky-permission retry). Server proxy at /api/v1/memoro/transcribe forwards
the blob with the server-held X-API-Key. memosStore.createFromVoice creates a
placeholder memo with processingStatus='processing' and fires transcribeBlob
in the background, which writes the transcript and flips status on completion
(or 'failed' with error in metadata).
Also corrects the mana-stt hostname across the repo: stt-api.mana.how (which
never existed in DNS) → gpu-stt.mana.how (the actual Cloudflare tunnel route
to the Windows GPU box). Adds an ENVIRONMENT_VARIABLES.md section explaining
how to obtain MANA_STT_API_KEY and where the tunnel terminates. Adds tunnel
health probes to the mac-mini health-check script so we catch tunnel-side
breakage in addition to LAN-side.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous startup.sh checked colima status via `colima status | grep running`
and, if that failed, ran `colima stop --force` unconditionally before starting.
This is destructive: a transient status mis-detection can kill a healthy running
VM, and the subsequent start often hangs because of leftover locks/processes.
Triggered today during the ManaCore→Mana rename: reloading the docker-startup
LaunchAgent ran the script, which falsely concluded colima was down, killed the
running VM, and left 12 zombie limactl processes plus a stale disk lock symlink.
The whole production stack (incl. Forgejo) was offline until manual cleanup.
Changes:
- Use `docker info` as the readiness check instead of `colima status` —
it directly tests the thing we care about (docker socket reachable)
- Only do cleanup work when we actually need to start; never SIGKILL a
running VM as a "precaution"
- When we do need to start: reap any zombie limactl/colima processes from
prior failed runs, and clear the stale disk-in-use lock if no process
actually holds it
- Verify successful start with `docker info`, not `colima status`
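New shape of the flow (cleanup details elided; colima flags as used elsewhere in the
startup script):

  if docker info >/dev/null 2>&1; then
    echo "docker reachable; colima already up, nothing to do"
    exit 0
  fi
  # only now is cleanup justified: reap leftovers from a failed prior run and clear
  # the disk-in-use lock if no live process actually holds it
  pkill -f limactl 2>/dev/null || true
  colima start --vm-type vz --mount-type virtiofs --memory 12
  docker info >/dev/null 2>&1 || { echo "colima started but docker socket unreachable" >&2; exit 1; }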
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All web app subdomains (chat.mana.how, todo.mana.how, etc.) were removed
when the unified app launched, but monitoring configs still referenced them.
Update blackbox targets to use mana.how/route URLs, remove stale API backend
routes from cloudflared, clean up CORS origins, and fix status page generator
to handle route-based URLs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New GPU service for fast text-to-video generation using LTX-Video (~2B params)
on the RTX 3090. Generates 480p clips in 10-30 seconds, uses ~10GB VRAM.
Includes Cloudflare Tunnel route, Prometheus monitoring, and health checks.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inventar-web: fix mangled icon import in settings page
- skilltree-web: create missing lib/services/storage.ts for export/import
- startup.sh: add umami/synapse DB creation + synapse user setup with C locale
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevents the internal SSD from filling up if the external SSD is not
mounted or if `colima delete` wiped the datadisk symlink.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- check-disk-space.sh: always prune dangling images, unused volumes, and
build cache >7 days on every run (not just at critical threshold)
- check-disk-space.sh: auto-remove node_modules if found on server
(never needed — Docker builds inside containers)
- disk-check launchd: reduce interval from 60min to 15min to catch
disk issues faster (yesterday we hit 100% before hourly check caught it)
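The unconditional prune pass boils down to three commands:

  docker image prune -f                              # dangling images
  docker volume prune -f                             # unused volumes
  docker builder prune -f --filter 'until=168h'      # build cache older than 7 days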
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- check-disk-space.sh now pushes mac_disk_used_percent + mac_colima_disk_used_gb
to Pushgateway every hour so vmalert can alert on real macOS disk usage
- alerts.yml: replace broken node-exporter disk alerts with Pushgateway-based ones
- master-overview.json: add "Recent Errors (Loki)" section with live error log
stream, error rate timeseries and top error sources barchart
- move-colima-to-external-ssd.sh: guided script to move 200GB Colima VM
datadisk from internal SSD to /Volumes/ManaData (3.6TB external SSD)
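The push itself is a plain-text POST to Pushgateway (address and Colima data path
are assumptions; the datadisk may already live on the external SSD):

  used_pct=$(df -P / | awk 'NR==2 {sub("%","",$5); print $5}')
  colima_gb=$(du -sg ~/.colima 2>/dev/null | awk '{print $1}')
  printf 'mac_disk_used_percent %s\nmac_colima_disk_used_gb %s\n' "$used_pct" "$colima_gb" \
    | curl -s --data-binary @- http://localhost:9091/metrics/job/mac_disk/instance/mac-mini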
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
colima delete wipes the entire VM disk on every power cycle, forcing
full image rebuilds. colima stop --force is sufficient to clear stale
process state after a hard shutdown.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The startup script runs `colima delete` on hard shutdown recovery,
wiping the colima.yaml mount config. Then `colima start` only added
/Volumes/ManaData but forgot /Users/mana — causing all file bind-mounts
to appear as empty directories (VirtioFS can't see host files).
This was the root cause of Synapse/SearXNG/Alertmanager/Loki crashing
after the power outage. Now both mounts are always passed explicitly.
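I.e. every start now looks roughly like (other flags abbreviated):

  colima start --vm-type vz --mount-type virtiofs \
    --mount /Users/mana:w --mount /Volumes/ManaData:w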
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Loki was already running but had no log shipper. Adds Promtail to collect
Docker logs from all 66 containers with automatic tier labeling (infra,
auth, core, app, matrix, games) and a Grafana Logs Explorer dashboard.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Kill Docker Desktop if it auto-started
- Clean stale Colima state from hard shutdown (delete --force)
- Start Colima with VZ, 12GB RAM, VirtioFS
- Restore named volumes from backup if missing
- Start containers with --no-build to skip broken Dockerfiles
- Create missing databases
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mac Mini has docker at /usr/local/bin/docker, not in PATH.
Use same DOCKER_CMD pattern as build-app.sh.
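The pattern, for reference:

  DOCKER_CMD="docker"
  command -v docker >/dev/null 2>&1 || DOCKER_CMD="/usr/local/bin/docker"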
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
build-app.sh now checks available RAM before builds and only stops
monitoring containers when free memory falls below a 3 GB threshold.
New memory-baseline.sh script measures per-container and per-category
RAM usage for capacity planning.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove set -e to prevent abort on non-critical errors
- Suppress tar errors for volatile TSDB files (VictoriaMetrics)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Deactivate Ollama, FLUX.2, and Telegram Bot LaunchAgents on Mac Mini
- Remove extra_hosts from mana-llm (no longer needs host.docker.internal)
- Update health-check.sh to monitor GPU server services instead of local
- Update status.sh to show GPU server status instead of native services
- Rewrite MAC_MINI_SERVER.md: remove ~400 lines of Ollama/FLUX/Bot docs,
add GPU server architecture diagram and deactivation notes
- Update CAPACITY_PLANNING.md with post-offload numbers (~80-150 peak users)
Mac Mini is now a pure hosting server (Web, API, DB, Sync).
All AI workloads run on GPU server (RTX 3090) via LAN.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Forgejo v11 on port 3041 (git.mana.how via Cloudflare Tunnel)
- Forgejo Runner for CI/CD (GitHub Actions compatible)
- Built-in Docker registry and LFS support
- Registration disabled (admin-only)
- SSH access on port 2222
- Go Services CI workflow (.forgejo/workflows/go-services.yml)
- Setup script: scripts/mac-mini/setup-forgejo.sh
Replaces GitHub dependency for CI/CD. GitHub can remain as
mirror/backup while Forgejo becomes the primary Git host.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Photos NestJS backend was a proxy to mana-media that enriched
responses with local album/favorite/tag data. Now:
- Albums store → local-first via albumCollection + albumItemCollection
- Favorites → local-first via favoriteCollection (toggle in IndexedDB)
- Photo tags → local-first via photoTagCollection
- Photo listing/stats → direct mana-media API calls from frontend
- Upload → direct mana-media upload from frontend
- Delete → direct mana-media delete from frontend
Removed 27 TypeScript files, 1 Docker container, 1 port (3039).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Presi NestJS backend (40 source files, 50 deps) was a CRUD wrapper
around decks, slides, and themes — all now handled by local-first sync.
Only the share-link feature requires server-side state (public URLs
without auth), so a minimal Hono + Bun server replaces the entire
NestJS backend:
- apps/presi/apps/server/ — Hono server with share routes + GDPR admin
Uses @manacore/shared-hono for auth (JWKS), health, admin, errors
- Web app API client stripped to share-only (was 270 lines → 90 lines)
- Removed from docker-compose, CI/CD, Prometheus, env generation
- NestJS backend deleted (40 TS files, 8 test specs, 3038 lines)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Both apps are fully local-first via Dexie.js + mana-sync. Their NestJS
backends were pure CRUD wrappers (20 + 31 source files) that are no
longer needed.
Changes:
- Add packages/shared-hono: JWT auth via JWKS (jose), Drizzle DB factory,
health route, generic GDPR admin handler, error middleware
- Migrate zitare lists page from fetch() to listsStore (local-first)
- Rewrite clock timers store from API-based to timerCollection (Dexie)
- Update clock +layout.svelte CommandBar search to use local collections
- Remove zitare-backend + clock-backend from docker-compose, CI/CD,
Prometheus, env generation, setup scripts
- Add docs/TECHNOLOGY_AUDIT_2026_03.md with full repo analysis
Net result: -2 Docker containers, -2 ports, -2728 lines of code
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace 21 separate NestJS Matrix bot processes (~2.1 GB RAM, ~4.2 GB Docker images)
with a single Go binary using plugin architecture (8.6 MB binary, ~30 MB RAM).
New services:
- services/mana-matrix-bot/ — Go Matrix bot with 21 plugins (mautrix-go, Redis sessions)
- services/mana-api-gateway-go/ — Go API gateway (rate limiting, API keys, credit billing)
Deleted:
- 21 services/matrix-*-bot/ directories
- packages/bot-services/ and packages/matrix-bot-common/
- Legacy deploy scripts and CI build jobs
Updated:
- docker-compose.macmini.yml: new Go services, legacy bots removed
- CI/CD: change detection + build jobs for Go services
- Root package.json: new dev:matrix, build:matrix, test:matrix scripts
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
docker compose stop with service names can hang due to env var warnings.
Using docker stop/start with container names is more reliable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add docker/Dockerfile.sveltekit-base: pre-built base with all 34 shared
packages (mirrors nestjs-base pattern), eliminates redundant COPY/build
steps from individual web Dockerfiles
- Add scripts/mac-mini/build-app.sh: stops monitoring stack before build
to free RAM, auto-restarts on exit (trap cleanup)
- Migrate todo web Dockerfile to use sveltekit-base:local (47 COPY lines
→ 2, 4 build steps → 0)
- Update CD workflow to build sveltekit-base when deploying web apps
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Full-stack achievement system for SkillTree with backend (NestJS) and frontend (SvelteKit):
- 26 achievements across 7 categories (XP, Skills, Levels, Activities, Streak, Branches, Special)
- 5 rarity tiers (Common → Legendary) with distinct styling
- Auto-unlock after XP gain, skill creation, and activity logging
- Celebration animation on unlock with sparkle effects
- Achievements page with category filters and progress tracking
- IndexedDB offline support with local condition evaluation
- Backend seeds achievements on startup, checks conditions after mutations
- Stats overview extended with achievement counter
- i18n translations (DE + EN)
Also adds docs/MONETIZATION_REPORT.md with ranked analysis of all apps.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>