Two major tool expansions: the Recherche-Agent and Today-Agent can
now research the web autonomously (no browser needed), and a future
Meeting-Prep agent will be able to read and create contacts.
=== research_news (server-side execution) ===
The biggest addition: mana-ai can now call mana-api's news-research
endpoints (POST /discover + /search) directly, without a browser.
Infrastructure:
- services/mana-ai/src/planner/news-research-client.ts — full HTTP
client with a discover→search pipeline (15s/30s timeouts). Returns
null gracefully on any failure (network, mana-api down, bad response)
so the tick never crashes on research errors; see the sketch after
this list.
- config.manaApiUrl added (default http://localhost:3060); wired in
docker-compose.macmini.yml as http://mana-api:3060 + depends_on
mana-api with service_healthy condition.
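A minimal sketch of the client's shape, assuming illustrative method,
field, and endpoint names (the real file may differ):

  // news-research-client.ts, approximate shape (sketch)
  interface ResearchResult {
    feeds: Array<{ url: string; title: string }>;
    articles: Array<{ url: string; title: string; excerpt: string }>;
  }

  export class NewsResearchClient {
    constructor(private baseUrl: string) {}

    // discover -> search pipeline; returns null on ANY failure so the
    // tick never needs a try/catch around research.
    async research(query: string): Promise<ResearchResult | null> {
      try {
        const discover = await this.post('/discover', { query }, 15_000);
        const search = await this.post('/search', { query, feeds: discover.feeds }, 30_000);
        return { feeds: discover.feeds ?? [], articles: search.articles ?? [] };
      } catch {
        return null; // network error, mana-api down, bad response
      }
    }

    private async post(path: string, body: unknown, timeoutMs: number): Promise<any> {
      const res = await fetch(`${this.baseUrl}${path}`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(body),
        signal: AbortSignal.timeout(timeoutMs),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }
  }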
Pre-planning research step (cron/tick.ts):
- Before the planner prompt is built, the tick checks if the
mission's objective or conceptMarkdown matches research keywords
(same RESEARCH_TRIGGER regex the webapp uses). When it matches:
* NewsResearchClient.research(objective) runs discovery + search
* Results are injected as a synthetic ResolvedInput with id
'__web-research__' and a formatted markdown context block
* The Planner then sees real article URLs/titles/excerpts and can
reference them in create_note / save_news_article steps
* Log line: "pre-research: N feeds, M articles"
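Schematically, the injection step looks like this (a sketch;
formatResearchMarkdown and the exact ResolvedInput shape are
illustrative, RESEARCH_TRIGGER is the shared regex mentioned above):

  // cron/tick.ts, before the planner prompt is assembled (sketch)
  async function injectPreResearch(
    mission: { objective: string; conceptMarkdown?: string },
    client: NewsResearchClient,
    resolvedInputs: Array<{ id: string; content: string }>,
  ): Promise<void> {
    const haystack = `${mission.objective}\n${mission.conceptMarkdown ?? ''}`;
    if (!RESEARCH_TRIGGER.test(haystack)) return;
    const result = await client.research(mission.objective);
    if (!result) return; // research failures never block planning
    resolvedInputs.push({
      id: '__web-research__',
      // markdown block the planner can quote: article URLs, titles, excerpts
      content: formatResearchMarkdown(result),
    });
    console.log(`pre-research: ${result.feeds.length} feeds, ${result.articles.length} articles`);
  }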
Tool registration:
- research_news added to AI_PROPOSABLE_TOOL_NAMES + mana-ai tools.ts
with params (query, language?, limit?). This lets the planner also
explicitly propose a research step as a PlanStep (in addition to
the pre-planning auto-injection).
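The registration itself is small; roughly (a sketch of the parameter
shape, not the literal tool schema):

  // mana-ai tools.ts (sketch)
  export const researchNewsTool = {
    name: 'research_news',
    description: 'Run a web news research pass (discover feeds, then search articles)',
    parameters: {
      query: { type: 'string', required: true },
      language: { type: 'string', required: false },
      limit: { type: 'number', required: false },
    },
  } as const;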
=== create_contact ===
- Added to AI_PROPOSABLE_TOOL_NAMES + mana-ai tools.ts with params
(firstName required, lastName/email/phone/company/notes optional).
- Contacts are encrypted at rest; server planner can plan the step
but execution stays on the webapp (same as all propose tools).
Full server-side contact resolution via Key-Grant is a future
enhancement.
- get_contacts added to webapp AUTO_TOOLS so agents can inspect
existing contacts without prompting the user (read-only, auto-policy).
Module coverage now:
✅ todo (5) ✅ calendar (2) ✅ notes (5) ✅ places (4)
✅ drink (3) ✅ food (2) ✅ news (1) ✅ journal (1)
✅ habits (3) ✅ news-research (1) ✅ contacts (1)
11 modules, 28 tools total (17 propose, 11 auto).
Tests: mana-ai 41/41 (drift-guard passes), shared-ai type-check
clean, webapp svelte-check 0 errors, 0 warnings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The webapp now passes two new env vars:
- PUBLIC_MANA_AI_URL / PUBLIC_MANA_AI_URL_CLIENT → getManaAiUrl()
resolves these and powers the Workbench "Datenzugriff" tab fetch.
- PUBLIC_AI_MISSION_GRANTS (default false) → gates the MissionGrant
dialog + audit tab. Flip to "true" in .env once the keypair is
provisioned.
Follow-up for operator: add a Cloudflare tunnel route for
mana-ai.mana.how → mana-ai:3067 (mirroring the existing pattern
for credits/events/llm) so the audit fetch resolves from the browser.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire the Mission Key-Grant feature into the production Mac Mini
compose stack so mana-ai can boot and mana-auth can mint grants.
- New mana-ai service block (port 3066) — 256m mem limit, depends on
postgres + mana-llm, tick interval configurable via
MANA_AI_TICK_INTERVAL_MS / MANA_AI_TICK_ENABLED. Pulls
MANA_AI_PRIVATE_KEY_PEM from env; when it is absent, grants are
silently disabled (see the sketch after this list).
- mana-auth environment gains MANA_AI_PUBLIC_KEY_PEM (default empty
so existing deployments without the keypair degrade to 503
GRANT_NOT_CONFIGURED rather than failing to boot).
- mana-auth Dockerfile rewritten to the two-stage pnpm+bun pattern
used by mana-credits/mana-events — required now that mana-auth has
a @mana/shared-ai workspace dep. The previous single-stage
Dockerfile with service-scoped build context couldn't resolve any
@mana/* imports; that only worked historically because it fell
through at runtime via a pre-built layer.
- mana-ai Dockerfile copies packages/shared-ai into the installer
stage alongside shared-hono.
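How the two sides degrade when the keypair is missing, schematically
(a sketch; the route path and handler are illustrative, only the env
var names and the GRANT_NOT_CONFIGURED code come from the changes
described above):

  // (sketch) mana-ai side: no private key -> grants silently disabled
  // mana-auth side: no public key -> 503 GRANT_NOT_CONFIGURED
  import { Hono } from 'hono';

  export const grantsEnabled = (process.env.MANA_AI_PRIVATE_KEY_PEM ?? '').length > 0;

  const app = new Hono();
  app.post('/grants', (c) => {
    if (!process.env.MANA_AI_PUBLIC_KEY_PEM) {
      return c.json({ error: 'GRANT_NOT_CONFIGURED' }, 503);
    }
    // ...verify the mana-ai signature and mint the grant...
    return c.json({ ok: true });
  });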
The build context for mana-auth flips from services/mana-auth to the
repo root. Existing CI/CD paths (scripts/mac-mini/build-app.sh) pass
through to docker compose build and pick up the new context
automatically — no script edits needed.
Flip-on procedure: on the Mac Mini, set MANA_AI_PUBLIC_KEY_PEM +
MANA_AI_PRIVATE_KEY_PEM in .env (already done, see
secrets/mana-ai/README.md on the host), then rebuild mana-auth +
build mana-ai.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three small config changes so the Kontext "Aus URL" flow (next commit)
is runnable from a plain `pnpm dev:mana:all`:
- package.json: include mana-crawler in the dev:mana:servers
concurrently group, and pass DATABASE_URL=…/mana_platform so the
Go binary doesn't try to connect to a non-existent `mana` DB (its
hardcoded default).
- .env.development: publish MANA_CRAWLER_URL=http://localhost:3023
(the crawler's default binary port — the macmini container is
a 3014 override, kept only in docker-compose). Also surface
MANA_LLM_DEFAULT_MODEL for the summariser.
- docker-compose.macmini.yml: inject MANA_CRAWLER_URL + the
default-model env into the mana-api container so production
can reach the internal crawler and pick the summariser model
consistently.
No runtime code touched.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The frontend was calling /api/v1/credits/* and /api/v1/sync/* on
auth.mana.how, but those routes live on credits.mana.how (mana-credits
service). Add getManaCreditsUrl() helper, inject the URL via
hooks.server.ts, allow it in the CSP connect-src, and update both API
clients (credits.ts + sync.ts) to use it.
Also: pass MANA_CREDITS_URL + MANA_SERVICE_KEY to mana-sync so its
billing middleware can reach mana-credits at http://mana-credits:3002.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Production deployment + observability for the self-hosted geocoding stack:
**docker-compose.macmini.yml**
- New mana-geocoding container (port 3018, internal-only — no traefik
labels, no Cloudflare route). Uses host.docker.internal to reach the
Pelias API on the host's pelias compose stack. Dockerfile added under
services/mana-geocoding/ using the same Bun/Hono pattern as mana-events.
**Prometheus**
- New blackbox-internal job probing mana-geocoding:3018/health, the
Pelias API on host.docker.internal:4000/v1/status, and Elasticsearch
at host.docker.internal:9200/_cluster/health. Kept separate from
blackbox-api which is reserved for public HTTPS endpoints.
**status.mana.how (generate-status-page.sh)**
- Include blackbox-internal in the metric query and add an "Interne
Dienste" section with its own summary card, right between Infrastruktur
and GPU Dienste. Summary grid goes from 4 to 5 columns with a
900px breakpoint.
- friendly_name() now handles http:// URLs and rewrites container-name
hosts like mana-geocoding:3018/health → "Mana Geocoding",
host.docker.internal:4000 → "Pelias API",
host.docker.internal:9200 → "Pelias Elasticsearch".
**Grafana uptime dashboard**
- Add an "Internal" series to the "Alle Dienste — Uptime-Verlauf" panel
- New "Interne Dienste Status" table panel showing per-instance up/down
- New "Geocoding Ø Latenz" stat panel for probe_duration_seconds
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Dockerfile only copied services/mana-sync, but go.mod has a replace
directive pointing to ../../packages/shared-go, which needs to be in the
build context. Switch the context to the repo root and copy both the
service and packages/shared-go.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Dockerfile copied only its own package.json, causing bun install to
fail on the @mana/shared-hono workspace dependency. Now copies the
workspace-root package.json and the shared-hono/shared-types packages.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench-registry app id 'inventar' did not match its
@mana/shared-branding MANA_APPS counterpart 'inventory', so the tier-
gating join in apps/web/src/lib/app-registry/registry.ts silently
failed for the inventory module — it fell into the "no MANA_APPS
entry, default visible" fallback and was effectively un-gated. The
codebase had also voted overwhelmingly for 'inventar' (53 files) vs
'inventory' (3 files in shared-branding), so the long-standing
mismatch was just bookkeeping debt waiting to bite.
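The failure mode in rough terms (a sketch; everything except MANA_APPS
and the id strings is illustrative):

  // registry.ts tier-gating join (sketch)
  type WorkbenchEntry = { id: string };
  type ManaApp = { id: string; tier: string };

  function gate(entry: WorkbenchEntry, manaApps: ManaApp[], userTiers: string[]) {
    const app = manaApps.find((a) => a.id === entry.id);
    // 'inventar' never matched 'inventory', so the inventory module always
    // took this branch: no MANA_APPS entry -> default visible, no tier gate.
    if (!app) return { ...entry, visible: true };
    return { ...entry, visible: userTiers.includes(app.tier) };
  }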
Pre-release, no live data, so the cleanest fix is to align everything
on the English 'inventory':
- Workbench-registry id, module.config.ts appId, module folder, route
folder and i18n locale folder all renamed via git mv
- Standalone apps/inventar/ workspace package renamed
- All imports, store identifiers (InventarEvents → InventoryEvents,
INVENTAR_GUEST_SEED, inventarModuleConfig), i18n keys and href/goto
paths follow the rename
- The German display label "Inventar" is preserved everywhere it is a
user-visible string (page titles, i18n values, toast labels)
- Dexie table prefixes (invCollections, invItems, …) are unchanged
- Drive-by fix: ListView.svelte was querying non-existent
inventarCollections/inventarItems tables — corrected to the actual
invCollections/invItems names from module.config
- The "inventar ↔ inventory id mismatch" workaround comment in
registry.ts is removed since the mismatch no longer exists
module-registry.ts also picks up the user's parallel newsModuleConfig
addition because both edits land in the same import block — keeping
them split would have left the build in an inconsistent state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
api.mana.how is already routed to mana-api-gateway (Go service on
port 3016) — has been since long before the apps/api consolidation.
Hijacking it would have broken whatever existing consumers point at
the gateway.
Switch the new unified Hono/Bun apps/api server to mana-api.mana.how
instead. Cloudflared tunnel route + Cloudflare DNS CNAME registered
on the Mac Mini side; mana-web's PUBLIC_MANA_API_URL_CLIENT updated
to match.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the missing production deployment artifacts for the unified
apps/api Hono/Bun server. Until now apps/api was code-only — built
during the consolidation sweep but never wired into the Mac Mini
compose stack, so all 17 product modules that depend on it
(calendar, todo, picture, planta, nutriphi, news, traces, presi,
music, contacts, storage, context, guides, research, chat, moodlit,
who) effectively had no backend in production. The frontend modules
shipped, but their compute calls fell through to localhost:3060 in
the browser and just failed.
This commit fixes the gap.
apps/api/Dockerfile (NEW)
-------------------------
Multi-stage Bun build that runs from the monorepo root so the four
workspace dependencies (@mana/shared-hono, @mana/shared-logger,
@mana/shared-storage, @mana/media-client) actually resolve. Builder
stage installs via pnpm with the --filter @mana/api... selector to
keep the install graph minimal; runtime stage copies the resulting
workspace tree (including the pnpm symlink farm) and runs the entry
script with bun directly — no compile step, since bun handles
TypeScript natively.
@mana/media-client lives under services/mana-media/packages/client,
not packages/, so the COPY path is the awkward
services/mana-media/packages/client → ./services/mana-media/packages/
client mirror to keep the workspace layout intact.
Healthcheck hits /health every 30s with a 15s start period — same
shape as the other Bun services in this compose file.
docker-compose.macmini.yml — new mana-api service
-------------------------------------------------
Slotted between glitchtip-worker and the games section. Build
context is the monorepo root (`.`) because the Dockerfile needs the
workspace tree. Container name `mana-api`, image `mana-api:local`,
mem_limit 384m (higher than the smaller Bun services because the
unified server holds 17 modules' route definitions + Drizzle schema
caches in memory).
Environment wires up everything apps/api needs:
- MANA_AUTH_URL → mana-auth:3001 for JWT validation
- MANA_LLM_URL → mana-llm:3025 for chat / picture / who LLM calls
- MANA_SEARCH_URL → mana-search:3012 for guides / research
- MANA_CREDITS_URL → mana-credits:3002 for credit validation
- MANA_MEDIA_URL → mana-media:3011 for image uploads
- DATABASE_URL → mana_platform Postgres for the few server-side
state stores (research_results, presi share-links, traces guides)
- MANA_SERVICE_KEY → for the credit/auth service-to-service calls
- LOGGER_FORMAT=json → structured logs for grafana ingestion
- CORS_ORIGINS=https://mana.how → only the unified web origin
needs access; the standalone game frontends don't call this
Port 3060 is exposed on the host so cloudflared can route
api.mana.how → mana-api:3060 (separate Mac Mini side step, not
in this commit).
docker-compose.macmini.yml — mana-web wiring
--------------------------------------------
Two new env vars:
PUBLIC_MANA_API_URL=http://mana-api:3060
PUBLIC_MANA_API_URL_CLIENT=https://api.mana.how
The hooks.server.ts injection plumbing for window.__PUBLIC_MANA_API_URL__
already existed (added in an earlier sweep but never had a value to
inject). The CSP connect-src list and the SSR injection script tag
also already include PUBLIC_MANA_API_URL_CLIENT — so once the env
arrives, the existing client-side getManaApiUrl() helper picks it
up automatically.
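Resolution order on the client is roughly (a behaviour sketch of the
existing helper, not its literal source):

  // getManaApiUrl() behaviour (sketch)
  export function getManaApiUrl(): string {
    if (typeof window !== 'undefined') {
      // injected by hooks.server.ts from PUBLIC_MANA_API_URL_CLIENT
      const injected = (window as unknown as Record<string, string>).__PUBLIC_MANA_API_URL__;
      if (injected) return injected;          // https://api.mana.how in prod
    }
    // SSR / server: container-internal hostname (the real helper reads SvelteKit env)
    return process.env.PUBLIC_MANA_API_URL ?? 'http://localhost:3060';
  }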
mana-web also gets a depends_on entry on mana-api with
condition: service_healthy so the web container doesn't start
serving requests against a dead API.
Verification
------------
docker compose -f docker-compose.macmini.yml config validates
cleanly (no YAML errors). Image build is NOT exercised in this
commit — that happens on the Mac Mini via build-app.sh after the
push lands.
Out of scope for this commit (Mac Mini side, manual steps):
1. ssh mana-server, git pull
2. ./scripts/mac-mini/build-app.sh mana-api (first build, ~3-5 min)
3. ./scripts/mac-mini/build-app.sh mana-web (rebuild with new env)
4. cloudflared route: add api.mana.how → mana-api:3060 to
~/.cloudflared/config.yml and `systemctl restart cloudflared`
5. Test https://api.mana.how/health from anywhere
6. Test https://mana.how/who in a browser
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Standalone games/whopixels has been replaced by the who module that
landed in the previous four commits. The whopixels Phaser RPG world
wrapper around the chat (~80% of the source) was deliberately
dropped during the port; the chat loop, the 26 historical-figure
personalities, and the [IDENTITY_REVEALED] sentinel trick all live
on inside apps/api/src/modules/who/.
What's gone in this commit:
- games/whopixels/ — 33 source files, ~3.6k LOC
  * Phaser scenes (Boot, MainMenu, Game, RPG)
  * Managers (Player, NPC, World, Touch, Sound, Storage, ChatUI)
  * Vanilla http server with hand-rolled rate limit + Azure OpenAI
  * Static assets, css, jsconfig
- docker-compose.macmini.yml — `whopixels` service block
  * Build context, Azure OpenAI env wiring, healthcheck. Port 5100
    is now free. Comment left in place explaining the migration so
    a future reader doesn't wonder why this gap exists.
What still has to happen outside this PR (Mac Mini side):
- docker rm -f mana-game-whopixels
- cloudflared route for whopixels.mana.how needs a redirect or
archive (sub-domain stops resolving once the container is gone
unless DNS / tunnel routes are touched separately)
The migration is non-destructive in terms of data: whopixels stored
no per-user state — sessions were in-memory, conversation history
lived only in the browser tab. There's nothing to migrate.
Net delta of the entire who module migration (5 commits combined):
+1880 LOC (RFC + backend + module + UI + branding)
-3666 LOC (whopixels)
───────
-1786 LOC
Closes Phase A.6 of docs/WHO_MODULE.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-launch theme system audit found multiple parallel layers in themes.css
(--theme-X full hsl strings, --X partial shadcn aliases, --color-X populated
by runtime store with raw channels) plus dead-code companion files. The
inconsistency caused light-mode regressions when scoped-CSS consumers
wrote `var(--color-X)` standalone — the variable holds raw HSL channels
which is invalid as a color value, browser fell back to inherited (white).
Rewrite to one consistent layer:
- Source of truth: --color-X defined as raw HSL channels (e.g.
`0 0% 17%`) in :root, .dark, and all variant [data-theme="..."]
blocks. Matches the format the runtime store
(@mana/shared-theme/src/utils.ts) writes, eliminating the
static-fallback-vs-runtime mismatch and the corresponding flash
of unstyled content on hydration.
- @theme inline uses self-reference + Tailwind v4 <alpha-value>
placeholder so utility classes generate correctly AND opacity
modifiers work: `text-foreground/50` → `hsl(var(--color-foreground) / 0.5)`.
- @layer components (.btn-primary, .card, .badge, etc.) wraps
var(--color-X) refs with hsl() — they were broken in light mode
too for the same reason.
Convention going forward (also documented in the file header):
1. Markup: use Tailwind utility classes (text-foreground, bg-card, …)
2. Scoped CSS: hsl(var(--color-X)) — always wrap with hsl()
3. NEVER raw var(--color-X) in CSS — that's the bug pattern
Net file: 692 → 580 LOC. Single source layer, no indirection.
Also delete dead companion files (zero imports anywhere):
- tailwind-v4.css (had broken self-reference, never imported)
- theme-variables.css (legacy hex-based palette)
- components.css (legacy component utilities)
- index.js / preset.js / colors.js (Tailwind v3 preset format,
irrelevant under Tailwind v4)
package.json exports map shrinks accordingly to just `./themes.css`.
Consumers using `hsl(var(--color-X))` (~379 files across mana-web,
manavoxel-web, arcade-web) keep working unchanged — the public API
name `--color-X` is preserved. Only the broken pattern `var(--color-X)`
(~61 files) needs a follow-up sweep, handled in a separate commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
VictoriaMetrics + vmalert previously copied prometheus.yml/alerts.yml from
/mnt/prometheus-config/ into /etc/prometheus/ at container start. The copy
silently drifted from the host file whenever the container wasn't restarted —
which is exactly what hid the matrix/element removal from status.mana.how
until 2026-04-08, when VM was still actively scraping the deleted targets
because its in-container config snapshot pre-dated the cleanup.
Now both containers mount ./docker/prometheus directly into /etc/prometheus
(resp. /etc/alerts) read-only and point the binary at it, and deploy.sh
issues POST /-/reload to both after each deploy so config edits go live
without a container recreate.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The first prod deploy of voice quick-add (3b41b39a3) silently fell
back for every transcript: title=transcript verbatim, dueDate=null,
priority=null, labels=[]. The endpoint code was reaching the
fallback() path even though mana-llm was healthy and reachable from
inside the mana-web container.
Root cause: SvelteKit's $env/dynamic/private explicitly excludes any
env var that starts with the public prefix (default PUBLIC_). The
parse-task code read
env.MANA_LLM_URL || env.PUBLIC_MANA_LLM_URL || 'http://localhost:3025'
expecting to fall back to PUBLIC_MANA_LLM_URL when MANA_LLM_URL was
unset, but $env/dynamic/private treats PUBLIC_MANA_LLM_URL as if it
didn't exist on the server side. So it always fell through to
http://localhost:3025, which points at nothing from inside the
mana-web container; fetch threw, and coerce returned the fallback shape.
Two fixes:
1. docker-compose.macmini.yml — set MANA_LLM_URL (no prefix) on
mana-web alongside PUBLIC_MANA_LLM_URL. The PUBLIC_ var is still
needed for the browser-side playground and status page; the
private one is what the parse endpoints actually read.
2. parse-task and parse-habit — drop the dead env.PUBLIC_MANA_LLM_URL
fallback so the next dev who reads the code doesn't think it'd
ever work. Add a comment explaining the SvelteKit gotcha so the
next person setting up a new env var doesn't repeat this mistake.
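The corrected read, for the record (a sketch; the key point is that
$env/dynamic/private never exposes PUBLIC_-prefixed vars):

  // parse-task / parse-habit endpoint (sketch)
  import { env } from '$env/dynamic/private';

  // NOTE: $env/dynamic/private filters out anything starting with PUBLIC_,
  // so a PUBLIC_MANA_LLM_URL fallback here can never fire. The container
  // must set the unprefixed MANA_LLM_URL instead.
  const llmUrl = env.MANA_LLM_URL || 'http://localhost:3025';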
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The mana-service-llm container had OLLAMA_URL pointed at the GPU box's
LAN address (192.168.178.11:11434). On the Mac Mini host that route
works fine, but from inside any Colima container the entire
192.168.178.0/24 subnet gets a synthesized RST — Colima's VM "claims"
the LAN range without being able to route to it, so every connect()
returns "Connection refused" before a packet ever leaves the box.
mana-llm started cleanly, reported the configured upstream as
"unhealthy", served an empty /v1/models list, and every chat
completion failed with "All connection attempts failed". The most
visible downstream effect: voice quick-add (parse-task, parse-habit)
silently degraded to its no-LLM fallback for everyone hitting the
local stack — same shape as a successful response, no error log,
just no enrichment.
The Mac Mini already runs a gpu-proxy LaunchAgent
(com.mana.gpu-proxy, /Users/mana/gpu-proxy.py) that forwards
127.0.0.1:13434 → 192.168.178.11:11434 alongside several other GPU
service ports. Pointing OLLAMA_URL at host.docker.internal:13434 and
adding the host-gateway extra_hosts mapping puts mana-llm on the
already-running rail. Verified end-to-end: from inside the container,
GET http://host.docker.internal:13434/api/tags now returns the full
model list (gemma3:4b, gemma3:12b, gemma3:27b, qwen2.5-coder:14b,
nomic-embed-text).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The matrix subsystem was removed in a prior commit. This commit cleans
up the small leftovers that grep found:
- docker-compose.macmini.yml: dropped the "Matrix Stack" port-range
comment, the "matrix" category from the naming convention, and a
stale watchtower comment about Matrix notifications.
- packages/credits/src/operations.ts: removed AI_BOT_CHAT credit
operation type and its definition. It was the billing entry for "Chat
with AI via Matrix bot" — no callers left.
- services/mana-credits gifts schema + service + validation: removed the
targetMatrixId column / param / Zod field. The corresponding
PostgreSQL column was dropped manually with
`ALTER TABLE gifts.gift_codes DROP COLUMN target_matrix_id` on prod.
- docker/grafana/dashboards/{master,system}-overview.json: removed the
`up{job="synapse"}` panel queries — they would have shown No Data
forever now that Synapse is gone.
Production-side cleanup performed in parallel (not in this commit):
- Stopped + removed mana-matrix-{synapse,element,web,bot} containers
- Removed mana-matrix-bot:local, matrix-web:latest,
matrixdotorg/synapse:latest, vectorim/element-web:latest images (~3 GB)
- Removed mana-matrix-bots-data Docker volume
- Removed /Volumes/ManaData/matrix/ media store (4.3 MB)
- DROP DATABASE matrix; DROP DATABASE synapse; on Postgres
Cosmetic leftovers intentionally untouched:
- Eisenhower matrix in todo (LayoutMode 'matrix') — productivity concept
- ${{ matrix.service }} in .github/workflows — GitHub Actions strategy
- services/mana-media/apps/api/dist/.../matrix/* — stale build output
(not in git, regenerated next mana-media build)
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.
═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
of api.signInEmail
═══════════════════════════════════════════════════════════════════════
Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
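In outline (a sketch; route paths, helper names, and the response
handling are illustrative, not the literal source):

  // /api/v1/auth/login (sketch)
  async function loginAndMintJwt(email: string, password: string, clientIp: string) {
    const signIn = await auth.handler(
      new Request(`${baseUrl}/api/auth/sign-in/email`, {
        method: 'POST',
        headers: { 'content-type': 'application/json', 'x-forwarded-for': clientIp },
        body: JSON.stringify({ email, password }),
      }),
    );
    if (signIn.status === 403) return emailNotVerified(); // hypothetical helper

    // the fully signed cookie envelope (<token>.<HMAC>), captured verbatim
    const cookie = signIn.headers.get('set-cookie') ?? '';

    const tokenRes = await auth.handler(
      new Request(`${baseUrl}/api/auth/token`, { headers: { cookie } }),
    );
    const { token: accessToken } = await tokenRes.json();
    return { accessToken };
  }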
Verified end-to-end on auth.mana.how:
$ curl -X POST https://auth.mana.how/api/v1/auth/login \
-d '{"email":"...","password":"..."}'
{
"user": {...},
"token": "<session token>",
"accessToken": "eyJhbGciOiJFZERTQSI...", ← real JWT now
"refreshToken": "<session token>"
}
Side benefits:
- Email-not-verified path is now handled by checking
signInResponse.status === 403 directly, no more catching APIError
with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
(network errors etc); the FORBIDDEN-checking logic in it is dead but
harmless and left in for defense in depth.
═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════
The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.
Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
notify-client, the i18n bundle, the observatory map, the credits
app-label list, the landing footer/apps page, the prometheus + alerts
+ promtail tier mappings, and the matrix-related deploy paths in
cd-macmini.yml + ci.yml
Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them; drop
them in a follow-up migration if you want them gone for real.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mana-auth's config.ts has hard-failed startup since commit e9915428c
(phase 2 encryption vault) when MANA_AUTH_KEK is unset in production.
.env.macmini.example documents the variable, but the docker-compose
service definition for mana-auth never had a corresponding
MANA_AUTH_KEK: ${MANA_AUTH_KEK} line in its environment block, so even
when the variable was set in the host .env, it never reached the
container. Result: every restart since yesterday looped on
"MANA_AUTH_KEK env var is required in production".
Added the env passthrough alongside BETTER_AUTH_SECRET with an inline
comment pointing at the generation command + service CLAUDE.md.
Operator action required on the Mac Mini:
KEK=$(openssl rand -base64 32)
echo "MANA_AUTH_KEK=$KEK" >> .env
./scripts/mac-mini/build-app.sh mana-auth # or compose up -d mana-auth
Then back the value up — it cannot be rotated today without re-wrapping
all existing user vaults (no background re-wrap job yet, kek_id column
on encryption_vaults is reserved for the future migration path).
Docker's embedded DNS resolver (127.0.0.11) forwards to the host
resolver, which on the Mac Mini forwards to the home router's
FRITZ!Box DNS. The router keeps a stale negative cache for hours
after a hostname first fails, so any newly added Cloudflare CNAME
(e.g. the GPU public hostnames recreated via the Cloudflare dashboard
during the 2026-04-07 cleanup) appears as "no such host" to the
blackbox probes for the entire negative-cache TTL — even though the
hostname resolves fine via 1.1.1.1 directly the entire time.
Symptom before the fix:
health-check.sh (uses dig @1.1.1.1) → All services healthy ✅
status.mana.how (via blackbox/VM) → 4 GPU services down ❌
The two views flatly contradicted each other: the public-facing
status page reported four healthy services as down while the
operator runbook reported them as up. Confusing, and exactly the
kind of monitoring discrepancy a launch should not ship with.
Fix: pin the blackbox container to public DNS (Cloudflare + Google)
in compose. Blackbox now resolves directly against 1.1.1.1, bypassing
the home-router negative cache entirely. After the recreate the four
GPU probes flipped from probe_success=0 to probe_success=1 within
one scrape interval, and status.mana.how went from 38/42 to 41/42
(only gpu-video remains down — LTX Video Gen is intentionally not
deployed on the Windows GPU box yet).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three Mac Mini infrastructure follow-ups bundled:
1. docker-compose.macmini.yml — drop ghost backend env vars from
the mana-app-web service (todo, calendar, contacts, chat, storage,
cards, music, nutriphi `PUBLIC_*_API_URL{,_CLIENT}` plus the memoro
server URLs). The matching consumers were removed in the earlier
ghost-API cleanup commits, so these env entries had been wiring
nothing into the running container for several deploys. Force-
recreating mana-app-web after pulling this commit will pick up
the slimmer env automatically.
2. docker-compose.macmini.yml — bump `mana-mon-blackbox` mem_limit
from 32m to 128m. blackbox-exporter v0.25 sits north of 32m
under load and was OOM-restart-looping every ~90 seconds, which
in turn made `status.mana.how` and the prometheus probe metrics
stale (since the scraper was missing every other window).
3. docker/prometheus/prometheus.yml — split `blackbox-gpu` into two
jobs:
- `blackbox-gpu` now probes `/health` via the http_health
module, because the GPU services (whisper STT, FLUX image
gen, Coqui TTS) return 401/404 on `/` by design (auth or
API-only). The previous http_2xx-on-`/` probe was reporting
all four as down even though they answered `/health` with
200, which inflated the down count on status.mana.how.
- `blackbox-gpu-root` keeps the http_2xx-on-`/` probe for
Ollama, which has no `/health` endpoint but does answer
2xx on its root.
Both jobs share the same blackbox-exporter relabel rewrite so
the targets are routed through the exporter container, not
scraped directly by VictoriaMetrics.
Verified post-fix: status.mana.how reports 41/42 services up (only
`gpu-video` remains down — LTX Video Gen is intentionally not
deployed yet on the Windows GPU box).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The unified mana-web container needs MANA_STT_URL + MANA_STT_API_KEY at
runtime so its server-side proxies (/api/v1/memoro/transcribe and
/api/v1/dreams/transcribe) can reach mana-stt with the right credentials.
The browser never holds the key.
URL points at the public tunnel (https://gpu-stt.mana.how → Cloudflare
tunnel mana-gpu-server → Windows GPU box localhost:3020) so the resolver
works regardless of where the container runs. The API key is sourced from
the Mac Mini .env, which is gitignored.
Without this, the proxies short-circuit with HTTP 503 "mana-stt is not
configured" — observed today on first deploy of the recording pipeline.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Five small follow-ups on Phase 1b:
- docker-compose.macmini.yml: add the mana-events container with the
same shape as mana-credits, expose port 3065, add a Traefik route
for events.mana.how, and inject PUBLIC_MANA_EVENTS_URL into the
mana-web container so the SvelteKit SSR + browser both reach it.
- mana-events: background sweeper that deletes rsvp_rate_buckets
rows older than 2h every hour (see the sketch after this list).
Without it, long-published events accumulate one row per
traffic-hour forever (FK cascade only fires on snapshot delete).
- PublicRsvpList: track consecutiveFailures and only show the error
banner after two failures in a row, so a single mid-poll network
hiccup doesn't flash a 30s error the user can't act on.
- apps/mana/apps/web: declare postgres as a devDep (already imported
by the e2e spec via pnpm hoisting, now explicit).
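The sweeper, schematically (a sketch assuming a Drizzle-style db
handle; the import path and the bucket timestamp column name are
illustrative):

  // mana-events background sweeper (sketch)
  import { sql } from 'drizzle-orm';
  import { db } from './db';                       // illustrative import

  const SWEEP_INTERVAL_MS = 60 * 60 * 1000;        // run hourly
  const MAX_AGE_MS = 2 * 60 * 60 * 1000;           // keep 2h of buckets

  setInterval(async () => {
    const cutoff = new Date(Date.now() - MAX_AGE_MS);
    await db.execute(sql`DELETE FROM rsvp_rate_buckets WHERE bucket_start < ${cutoff}`);
  }, SWEEP_INTERVAL_MS);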
Stalwart requires the username without its domain for auth, plus the
'user' role for SMTP access. Update SMTP_USER from admin to noreply.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The noreply account lacks SMTP auth permissions in Stalwart. Use the
admin account for now — SMTP_FROM still sends as noreply@mana.how.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Umami database was re-initialized with an empty website table.
Created a new ManaCore Web website in Umami and updated the ID in
docker-compose and .env.development. Fixes the stats.mana.how 400 errors.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add an SMTP_INSECURE_TLS env var to skip certificate verification for
internal Docker-network SMTP connections. Stalwart's self-signed cert
uses 'localhost' as its CN, which doesn't match the 'stalwart' hostname.
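On the sending side this translates to roughly the following (a sketch
assuming a nodemailer-based transport in mana-notify; the env plumbing
names are illustrative):

  // mana-notify SMTP transport (sketch)
  import nodemailer from 'nodemailer';

  const insecureTls = process.env.SMTP_INSECURE_TLS === 'true';

  export const transport = nodemailer.createTransport({
    host: process.env.SMTP_HOST,                    // stalwart
    port: Number(process.env.SMTP_PORT ?? 587),
    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
    // self-signed cert has CN=localhost but the hostname is 'stalwart'
    tls: insecureTls ? { rejectUnauthorized: false } : undefined,
  });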
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Set SMTP defaults to use internal Stalwart server (stalwart:587) with
noreply@mana.how credentials. Add stalwart as dependency for mana-notify.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Map host 8443 to container 8080 (HTTP admin UI). Use wget for
healthcheck since curl is not available in the Stalwart image.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The old image name stalwartlabs/mail-server doesn't exist on Docker Hub.
The correct image is stalwartlabs/stalwart.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add self-hosted Stalwart mail server (Rust, ~50MB RAM) to replace Brevo
as SMTP provider. mana-notify now sends via stalwart:587 internally.
Ports exposed: 25 (SMTP), 587 (submission), 465 (SMTPS), 993 (IMAPS),
8443 (web admin). Requires DNS setup (MX, SPF, DKIM, DMARC) and router
port-forwarding to complete the migration.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
SMTP_USER was empty because it wasn't in .env and had no default.
Add the Brevo account as default (was previously hardcoded in mana-auth).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copy the volume-mounted generate.sh to /tmp before executing, so a
concurrent git pull doesn't corrupt the file mid-read.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mana-notify was using NOTIFY_SERVICE_KEY (defaulting to dev-service-key)
while mana-auth sends MANA_CORE_SERVICE_KEY. Use the same env var so
mana-auth can authenticate with mana-notify.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace direct Brevo SMTP sending with HTTP calls to mana-notify's
notification API. This centralizes all email configuration in one
service (mana-notify) and removes the nodemailer dependency from
mana-auth. SMTP provider is now swappable via a single env var.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Upgrade shared-logger to dual-mode: JSON lines in production, console
in dev. Adds configureLogger() for service name + request ID.
- Add requestLogger middleware to shared-hono with request ID generation
and structured request/response logging (sketch after this list).
- Align Promtail config with new JSON field names (requestId, ts, service).
- Add PUBLIC_GLITCHTIP_DSN + PUBLIC_UMAMI_WEBSITE_ID to mana-web docker config.
- Add /status page that polls all backend /health endpoints server-side.
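The middleware, schematically (a sketch; the JSON field names follow
the Promtail alignment above, everything else is illustrative):

  // shared-hono requestLogger (sketch)
  import type { MiddlewareHandler } from 'hono';

  type Env = { Variables: { requestId: string } };

  export const requestLogger = (): MiddlewareHandler<Env> => async (c, next) => {
    const requestId = c.req.header('x-request-id') ?? crypto.randomUUID();
    c.set('requestId', requestId);
    const start = Date.now();
    await next();
    console.log(JSON.stringify({
      ts: new Date().toISOString(),
      service: process.env.SERVICE_NAME, // set via configureLogger() in the real code
      requestId,
      method: c.req.method,
      path: c.req.path,
      status: c.res.status,
      durationMs: Date.now() - start,
    }));
  };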
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mirrors the frontend unification (single IndexedDB) on the backend.
All services now use pgSchema() for isolation within one shared database,
enabling cross-schema JOINs, simplified ops, and zero DB setup for new apps.
- Migrate 7 services from pgTable() to pgSchema(): mana-user (usr),
mana-media (media), todo, traces, presi, uload, cards (see the
sketch after this list)
- Update all DATABASE_URLs in .env.development, docker-compose, configs
- Rewrite init-db scripts for 2 databases + 12 schemas
- Rewrite setup-databases.sh for consolidated architecture
- Update shared-drizzle-config default to mana_platform
- Update CLAUDE.md with new database architecture docs
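The per-service pattern, schematically (a sketch; table and column
names are illustrative):

  // drizzle schema (sketch): one shared database, one pg schema per service
  import { pgSchema, text, uuid, timestamp } from 'drizzle-orm/pg-core';

  export const usr = pgSchema('usr');      // previously pgTable() in its own DB

  export const users = usr.table('users', {
    id: uuid('id').primaryKey().defaultRandom(),
    email: text('email').notNull(),
    createdAt: timestamp('created_at').defaultNow(),
  });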
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add missing shared-uload package copy and zitare content build step to
Dockerfile. Replace wget/httpx healthchecks with bun fetch and stdlib
urllib to remove external dependencies in containers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mana-stt: add WhisperX service with CUDA GPU support, speaker diarization, and auto-fallback chain.
mana-notify: add locale fallback and default templates for task reminders.
CD: update deployment pipeline and docker-compose configuration.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extract ~120 hardcoded German strings from 14 Svelte components into i18n locale
files using svelte-i18n $t() calls. Add new translation sections (taskForm, filters,
tags, subtasks, durationPicker, kanban, toolbar) across all 5 languages (de/en/fr/es/it).
Also add missing shared common translations for Spanish, French, and Italian
(150+ keys each) in packages/shared-i18n.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Dockerfile using sveltekit-base:local pattern (port 5038)
- docker-compose.macmini.yml entry with Traefik labels for memoro.mana.how
- Delete legacy authService.ts and auth.ts (app uses shared-auth-stores)
- Remove middleware env vars from env.ts and app.d.ts (dead code)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Parse tier data automatically from mana-apps.ts (awk, read-only volume
mount) so the status page stays in sync without manual updates. Shows
founder/alpha/beta/public cards with per-app development status.
Tier data is also included in status.json for ManaScore consumption.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>