The platform repo (Code/mana/) reserves 3065 for mana-media; to avoid
double allocation, mana-events (public RSVP / event sharing) moves to
3115. The new 311x port block is unused and sits structurally next to
mana-mail (3042) and the other 30xx service ports.
Touches every hard-coded 3065 default: server config, webapp config,
SSR routes (rsvp/[token], status), Playwright webserver setup, e2e
spec. PUBLIC_MANA_EVENTS_URL in .env.development pulls both variables
along with it.
PORT_SCHEMA.md now records the switch with date + rationale, so future
me doesn't have to guess why this port breaks out of the 30xx series.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First concrete piece of M3 (docs/plans/mana-mcp-and-personas.md). The
tick loop itself and the Claude Agent SDK + MCP integration are M3.b;
the action/feedback persistence endpoints are M3.c. This commit just
stands up the service so the remaining pieces have a shell to land in.
Service shape (Bun/Hono on :3070)
- src/config.ts
Env-driven configuration: auth URL, MCP URL, service key for
action/feedback callbacks (M3.c), Anthropic API key, deterministic
PERSONA_SEED_SECRET (must match scripts/personas/password.ts so the
runner can log back in without any stored credentials), tick
interval and concurrency, RUNNER_PAUSED kill-switch. Production
start asserts all secrets are set and the dev fallback secret is
rotated.
- src/password.ts
Bit-for-bit identical HMAC-SHA256 password derivation to
scripts/personas/password.ts. Duplicated deliberately: the two
sides can't share code (one is a repo-root utility script, the
other is a workspace service) but must stay in sync; a comment at
the top calls this out. Derivation sketched after this file list.
- src/clients/auth.ts
Two upstream calls the runner needs for one tick: POST /auth/login
and GET /api/auth/organization/list. loginAndResolvePersonalSpace()
wraps both and picks the persona's auto-created personal space as
the write target (throws if none exists; Spaces-Foundation should
always have seeded one on signup). Flow sketched further below.
- src/index.ts
Hono app: /health, /metrics (stub), and a dev-only /diag/login
endpoint that takes a persona email, derives the password, logs
in, resolves the personal space, and returns {userId, spaceId} as
an end-to-end sanity check. Disabled in production.
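A minimal sketch of the shared derivation, assuming the persona email
is the HMAC message and PERSONA_SEED_SECRET the HMAC key (the real
layout in scripts/personas/password.ts stays authoritative):

    import { createHmac } from "node:crypto";

    // Deterministic persona password: HMAC-SHA256(seed secret, email).
    // Input layout is an assumption; scripts/personas/password.ts and
    // src/password.ts are the two authoritative, in-sync copies.
    export function derivePersonaPassword(email: string, seedSecret: string): string {
      return createHmac("sha256", seedSecret).update(email).digest("hex");
    }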
No tick loop yet — RUNNER_PAUSED prints an info line on boot, but
nothing fires. The dispatcher + Claude Agent SDK + MCP client land in
M3.b; the internal POST callbacks into mana-auth for persona_actions /
persona_feedback land in M3.c.
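For reference, the auth client's two-call flow in sketch form (the
endpoints are from above; response field names are assumptions):

    // Sketch of loginAndResolvePersonalSpace(): log in, list the
    // persona's organizations, pick the personal space as write target.
    export async function loginAndResolvePersonalSpace(
      authUrl: string,
      email: string,
      password: string,
    ): Promise<{ userId: string; spaceId: string }> {
      const login = await fetch(`${authUrl}/auth/login`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ email, password }),
      });
      if (!login.ok) throw new Error(`login failed: ${login.status}`);
      const { user, accessToken } = await login.json();

      const res = await fetch(`${authUrl}/api/auth/organization/list`, {
        headers: { authorization: `Bearer ${accessToken}` },
      });
      const spaces: Array<{ id: string; kind?: string }> = await res.json();
      const personal = spaces.find((s) => s.kind === "personal");
      if (!personal) throw new Error("no personal space seeded for persona");
      return { userId: user.id, spaceId: personal.id };
    }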
Infra
- Port 3070 added to docs/PORT_SCHEMA.md.
- Service listed in root CLAUDE.md next to mana-mcp.
- services/mana-persona-runner/CLAUDE.md documents what's built today,
what lands in M3.b/c, and the local diag smoke recipe.
Boot smoke verified: /health returns ok + paused/interval/concurrency,
/diag/login without email returns 400.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Foundation for autonomous Claude-driven testing. Plan:
docs/plans/mana-mcp-and-personas.md.
New packages
- @mana/tool-registry — schema-first ToolSpec<InputSchema, OutputSchema>
with zod generics, scope ('user-space' | 'admin') and policyHint
('read' | 'write' | 'destructive'); sketched after this list.
sync-client helpers speak the
mana-sync push/pull protocol directly so RLS and field-level LWW are
preserved. MasterKeyClient fetches per-user MKs via the existing
mana-auth GET /api/v1/me/encryption-vault/key endpoint (JWT-gated,
ZK-aware, already audited) — no new service-key endpoint built.
ZeroKnowledgeUserError surfaced as a typed throw.
- @mana/shared-crypto — AES-GCM-256 primitives extracted from the web
app's $lib/data/crypto/aes.ts so the server-side tool handlers and the
browser produce byte-for-byte identical wire format
(enc:1:{b64(iv)}.{b64(ct)}). Web app aes.ts now re-exports from
shared-crypto — 5 existing importers unchanged, svelte-check stays
green.
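For orientation, a minimal sketch of the ToolSpec contract (property
names beyond scope and policyHint are assumptions):

    import { z } from "zod";

    // Schema-first tool contract; handler wiring is illustrative.
    export type ToolScope = "user-space" | "admin";
    export type PolicyHint = "read" | "write" | "destructive";

    export interface ToolSpec<I extends z.ZodTypeAny, O extends z.ZodTypeAny> {
      name: string;
      description: string;
      scope: ToolScope;
      policyHint: PolicyHint;
      input: I;
      output: O;
      handler: (input: z.infer<I>) => Promise<z.infer<O>>;
    }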
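And a sketch of the shared wire format, as a minimal WebCrypto version
(the authoritative primitives live in @mana/shared-crypto):

    // AES-GCM-256 encrypt to the enc:1:{b64(iv)}.{b64(ct)} wire format.
    // Runs unchanged in Bun and the browser via WebCrypto.
    export async function encryptField(plaintext: string, key: CryptoKey): Promise<string> {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit GCM nonce
      const ct = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext),
      );
      const b64 = (bytes: Uint8Array) => btoa(String.fromCharCode(...bytes));
      return `enc:1:${b64(iv)}.${b64(new Uint8Array(ct))}`;
    }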
New service
- services/mana-mcp (:3069, Bun/Hono) — MCP Streamable HTTP gateway.
JWKS auth against mana-auth, per-user session isolation (session-id
belongs to the user who opened it — cross-user access returns 403),
admin-scoped tools filtered out before registration. MasterKeyClient
cached per process with a 5-minute TTL.
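Session isolation in sketch form (the store shape is an assumption):

    // A session id belongs to the user who opened it; anyone else gets
    // a 403. In-memory store is illustrative.
    const sessionOwners = new Map<string, string>(); // session-id -> userId

    function assertSessionOwner(sessionId: string, userId: string): void {
      const owner = sessionOwners.get(sessionId);
      if (owner === undefined) throw Object.assign(new Error("unknown session"), { status: 404 });
      if (owner !== userId) {
        throw Object.assign(new Error("session belongs to another user"), { status: 403 });
      }
    }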
11 tools registered
- habits.{create,list,update,archive}, spaces.list (plaintext, M1)
- todo.{create,list,complete}, notes.{create,search}, journal.add
(encrypted — field lists match
apps/mana/apps/web/src/lib/data/crypto/registry.ts verbatim)
Infra
- Port 3069 added to docs/PORT_SCHEMA.md
- services/mana-mcp/CLAUDE.md with architecture, auth model,
tool-authoring recipe, local smoke-test steps
- Root CLAUDE.md services list updated
Type-check green across shared-crypto, mana-tool-registry, mana-mcp.
svelte-check on apps/mana/apps/web stays at 0 errors / 0 warnings.
Boot smoke verified: /health returns registry.loaded=true, unauthed
/mcp → 401, invalid-JWT /mcp → 401 with descriptive message.
Decisions locked in for later milestones (per plan D1–D10):
- Personas will be real mana-auth users (users.kind='persona'), no
service-key bypass (D1, D2)
- Tool-registry is the SSOT; mana-ai and the legacy
apps/api/src/mcp/server.ts get merged into it in M4 (three current
parallel tool catalogs collapse to one)
- Persona-runner (:3070) will be a separate service using the Claude
Agent SDK + MCP client (D5)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New Bun/Hono service on port 3068 that bundles many web-research providers
behind a unified interface for side-by-side comparison. All eval runs
persist in research.* (mana_platform) so quality can be reviewed later.
Providers (Phase 1+2):
search: searxng, duckduckgo, brave, tavily, exa, serper
extract: readability (via mana-search), jina-reader, firecrawl
Endpoints:
POST /v1/search, /v1/search/compare — single + fan-out
POST /v1/extract, /v1/extract/compare — single + fan-out
GET /v1/runs, /v1/runs/:id — history
POST /v1/runs/:run/results/:id/rate — manual eval
GET /v1/providers, /v1/providers/health — catalog + readiness
Auto-routing: when `provider` is omitted, queries are classified via regex
(fast path, 0ms) with optional mana-llm fallback, then routed to the first
available provider for that query type (news → tavily, academic → exa,
semantic → exa, etc.).
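The fast path in sketch form (the patterns and routing table here are
illustrative, not the shipped rules):

    // Regex classification, then route to the first available provider
    // for that query type; mana-llm fallback omitted.
    type QueryType = "news" | "academic" | "semantic" | "general";

    const RULES: Array<[RegExp, QueryType]> = [
      [/\b(today|latest|breaking|news)\b/i, "news"],
      [/\b(paper|study|doi|arxiv)\b/i, "academic"],
      [/\b(similar to|related to)\b/i, "semantic"],
    ];

    const ROUTE: Record<QueryType, string> = {
      news: "tavily",
      academic: "exa",
      semantic: "exa",
      general: "searxng", // assumption: default provider
    };

    export function routeQuery(q: string): string {
      const type = RULES.find(([re]) => re.test(q))?.[1] ?? "general";
      return ROUTE[type];
    }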
Credits: server-key calls go through mana-credits reserve → commit/refund
so failed provider calls don't charge the user. BYO-keys supported via
research.provider_configs (UI arrives in Phase 4).
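The reserve → commit/refund shape, sketched (the mana-credits endpoint
paths are assumptions):

    // Reserve before the provider call, commit on success, refund on
    // failure so a failed call never charges the user.
    export async function withCredits<T>(
      creditsUrl: string,
      userId: string,
      amount: number,
      call: () => Promise<T>,
    ): Promise<T> {
      const res = await fetch(`${creditsUrl}/v1/reserve`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ userId, amount }),
      });
      const { reservationId } = await res.json();
      try {
        const result = await call();
        await fetch(`${creditsUrl}/v1/reservations/${reservationId}/commit`, { method: "POST" });
        return result;
      } catch (err) {
        await fetch(`${creditsUrl}/v1/reservations/${reservationId}/refund`, { method: "POST" });
        throw err;
      }
    }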
Cache: Redis with graceful degradation (1h TTL for search, 24h for
extract). Pay-per-use APIs only — no subscription-gated providers.
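The degradation shape, roughly (the client interface is structural so
any Redis client fits):

    // Redis cache with graceful degradation: a cache outage falls
    // through to the provider instead of failing the request.
    type RedisLike = {
      get(k: string): Promise<string | null>;
      set(k: string, v: string, opts: { EX: number }): Promise<unknown>;
    };

    export async function cached<T>(
      redis: RedisLike,
      key: string,
      ttlSeconds: number, // 3600 for search, 86400 for extract
      fetchFresh: () => Promise<T>,
    ): Promise<T> {
      try {
        const hit = await redis.get(key);
        if (hit !== null) return JSON.parse(hit) as T;
      } catch { /* Redis down: fall through */ }
      const fresh = await fetchFresh();
      try {
        await redis.set(key, JSON.stringify(fresh), { EX: ttlSeconds });
      } catch { /* best-effort write */ }
      return fresh;
    }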
Docs: docs/plans/mana-research-service.md + docs/reports/web-research-capabilities.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reserves port 3042 in PORT_SCHEMA.md, adds mail pgSchema to
setup-databases.sh and init-db scripts, installs mana-mail workspace
dependencies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New mana-geocoding service (port 3018) wraps a self-hosted Pelias
instance with LRU caching and OSM→PlaceCategory auto-mapping.
All geocoding queries stay within our infrastructure — no user
location data leaves the network.
Places module integration:
- Address autocomplete search in ListView (creates place with
name, coords, address, category in one step)
- Address search + reverse geocoding button in DetailView
- Auto-fill address via reverse geocoding during tracking
- OSM category mapping (amenity:restaurant→food, shop:*→shopping, etc.)
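The mapping in sketch form (entries beyond the examples above are
illustrative):

    // OSM tag -> PlaceCategory; "*" matches any value for the key.
    type PlaceCategory = "food" | "shopping" | "health" | "other";

    const OSM_TO_CATEGORY: Array<[string, string, PlaceCategory]> = [
      ["amenity", "restaurant", "food"],
      ["amenity", "cafe", "food"],       // assumption
      ["shop", "*", "shopping"],
      ["amenity", "pharmacy", "health"], // assumption
    ];

    export function mapOsmCategory(tags: Record<string, string>): PlaceCategory {
      for (const [key, value, category] of OSM_TO_CATEGORY) {
        if (key in tags && (value === "*" || tags[key] === value)) return category;
      }
      return "other";
    }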
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench-registry app id 'inventar' did not match its
@mana/shared-branding MANA_APPS counterpart 'inventory', so the tier-
gating join in apps/web/src/lib/app-registry/registry.ts silently
failed for the inventory module — it fell into the "no MANA_APPS
entry, default visible" fallback and was effectively un-gated. The
codebase had also voted overwhelmingly for 'inventar' (53 files) vs
'inventory' (3 files in shared-branding), so the long-standing
mismatch was just bookkeeping debt waiting to bite.
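The failing join, paraphrased (names are approximations of the logic
in registry.ts):

    // When no MANA_APPS entry matches the workbench id, the module
    // falls into the default-visible branch and is never tier-gated.
    type BrandedApp = { id: string; tier: string };

    function isVisible(
      appId: string, // was 'inventar', never matched 'inventory'
      manaApps: BrandedApp[],
      tierAllows: (tier: string) => boolean,
    ): boolean {
      const branding = manaApps.find((a) => a.id === appId);
      return branding ? tierAllows(branding.tier) : true; // no entry -> un-gated
    }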
Pre-release, no live data, so the cleanest fix is to align everything
on the English 'inventory':
- Workbench-registry id, module.config.ts appId, module folder, route
folder and i18n locale folder all renamed via git mv
- Standalone apps/inventar/ workspace package renamed
- All imports, store identifiers (InventarEvents → InventoryEvents,
INVENTAR_GUEST_SEED, inventarModuleConfig), i18n keys and href/goto
paths follow the rename
- The German display label "Inventar" is preserved everywhere it is a
user-visible string (page titles, i18n values, toast labels)
- Dexie table prefixes (invCollections, invItems, …) are unchanged
- Drive-by fix: ListView.svelte was querying non-existent
inventarCollections/inventarItems tables — corrected to the actual
invCollections/invItems names from module.config
- The "inventar ↔ inventory id mismatch" workaround comment in
registry.ts is removed since the mismatch no longer exists
module-registry.ts also picks up the user's parallel newsModuleConfig
addition because both edits land in the same import block — keeping
them split would have left the build in an inconsistent state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.
═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
of api.signInEmail
═══════════════════════════════════════════════════════════════════════
Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
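In sketch form (plumbing simplified; the sign-in and token routes are
Better Auth's standard paths, the base URL is a placeholder):

    import { auth } from "./auth"; // Better Auth instance (path hypothetical)

    // Sign in via the HTTP path so Set-Cookie carries the fully signed
    // <token>.<HMAC> envelope, then forward it verbatim to mint the JWT.
    async function mintJwt(email: string, password: string, clientIp: string) {
      const signIn = await auth.handler(
        new Request("http://internal/api/auth/sign-in/email", {
          method: "POST",
          headers: {
            "content-type": "application/json",
            "x-forwarded-for": clientIp, // real client IP for the rate limiter
          },
          body: JSON.stringify({ email, password }),
        }),
      );
      if (signIn.status === 403) throw new Error("email not verified");
      const cookie = signIn.headers.get("set-cookie") ?? ""; // signed envelope
      const tokenRes = await auth.handler(
        new Request("http://internal/api/auth/token", { headers: { cookie } }),
      );
      const { token } = await tokenRes.json(); // real JWT
      return token;
    }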
Verified end-to-end on auth.mana.how:
$ curl -X POST https://auth.mana.how/api/v1/auth/login \
-d '{"email":"...","password":"..."}'
{
"user": {...},
"token": "<session token>",
"accessToken": "eyJhbGciOiJFZERTQSI...", ← real JWT now
"refreshToken": "<session token>"
}
Side benefits:
- Email-not-verified path is now handled by checking
signInResponse.status === 403 directly, no more catching APIError
with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
(network errors, etc.); the FORBIDDEN-checking logic in it is dead
but harmless, and is left in for defense in depth.
═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════
The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.
Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
notify-client, the i18n bundle, the observatory map, the credits
app-label list, the landing footer/apps page, the prometheus + alerts
+ promtail tier mappings, and the matrix-related deploy paths in
cd-macmini.yml + ci.yml
Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them, drop
them in a follow-up migration if you want them gone for real.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mana-voice-bot's source default was 3050, which collided with mana-sync.
Today the collision is latent (voice-bot isn't deployed anywhere), but
sooner or later someone is going to start it on a host that's already
running mana-sync and the second one will refuse to bind. Moving to
3024 puts it inside the AI/ML port range alongside its dependencies
(stt 3020, tts 3022, image-gen 3023, llm 3025) and away from sync.
Updated:
- app/main.py — PORT default 3050 → 3024
- start.sh, setup.sh — same fix in the example commands
- CLAUDE.md — full rewrite. Old version described "Mac Mini deployment"
with launchd; the new version explicitly says "not deployed yet" and
documents the seven concrete steps to deploy on the Windows GPU box
alongside the other AI services (Scheduled Task, service.pyw, .env,
firewall rule, cloudflared route, WINDOWS_GPU_SERVER_SETUP.md update).
docs/WINDOWS_GPU_SERVER_SETUP.md:
- Added the missing ManaVideoGen scheduled task to all four
Start-ScheduledTask snippets — video-gen has been running on the
Windows GPU but the doc had never picked it up.
- Added a "mana-video-gen (Port 3026)" service section parallel to the
existing image-gen one, with venv path, repo pointer, model, etc.
- Added a repo-counterparts table mapping C:\mana\services\<svc>\ to
the corresponding services/<svc>/ directory in the repo, plus a note
that changes should flow repo→Windows, not the other way around.
docs/PORT_SCHEMA.md:
- Reconciled the warning block with the post-cleanup reality: no more
active or latent port collisions (image-gen ↔ video-gen and
voice-bot ↔ sync are both resolved). Listed the actual ports per host
with public URLs. Kept the planned-vs-actual disclaimer for the
services that still don't match the aspirational ranges (mana-credits
3061 vs planned 3002, etc).
Source default was 3026 but Mac Mini production has been overriding to
3025 via the launchd plist in scripts/mac-mini/setup-image-gen.sh ever
since the service was set up. The override existed in exactly one place
that is not version-controlled in any obvious way — anyone redeploying
without that script would land on 3026 and clients pointing at 3025
would fail to connect.
Source default → 3025 across main.py, setup.sh, README, CLAUDE.md so the
launchd plist is no longer load-bearing. The Mac Mini setup script still
sets PORT=3025 explicitly; that's now belt-and-suspenders rather than the
only thing keeping production alive.
Also added a note clarifying that this Mac Mini service (flux2.c, MPS,
arm64-only) is *not* the same thing as the "image-gen" running on the
Windows GPU server (PyTorch + diffusers + CUDA, port 3023, code lives at
C:\mana\services\mana-image-gen\ outside this repo). Two different
implementations sharing a name was confusing the port-collision audit.
Updated docs/PORT_SCHEMA.md warning block to retract the previous false
claims of two active port collisions:
- image-gen ↔ video-gen on 3026 — wrong: image-gen runs on Mac Mini
on 3025 (now also the source default), video-gen is alone on the
Windows GPU on 3026
- voice-bot ↔ sync on 3050 — latent only: mana-voice-bot is not
deployed anywhere (no launchd, no scheduled task, no cloudflared
route), so the collision is in source defaults but not in production
The voice-bot 3050 default should still be moved before voice-bot is
ever deployed — flagged in the PORT_SCHEMA warning instead of silently
fixed since voice-bot deployment is its own decision.
New service docs:
- services/mana-stt/CLAUDE.md — FastAPI surface with Whisper MLX (local),
WhisperX (rich), and Voxtral (local + Mistral API). Documents the lazy
backend loading and the launchd plist setup on the Mac Mini.
- services/mana-events/CLAUDE.md — Hono/Bun service for public RSVP and
event-sharing. Documents the host (JWT) vs public (token) split, the
rate-limit sweeper, and the createApp factory pattern that lets unit
tests run without bootstrapping the production sweeper.
Stale entries fixed:
- mana-auth: dropped "rewritten from NestJS / drop-in replacement" — the
rewrite is the only mana-auth there is now. Email channel updated from
Brevo SMTP to self-hosted Stalwart (see docs/MAIL_SERVER.md).
- mana-notify: same Brevo → Stalwart fix in the channel table and env
var defaults.
PORT_SCHEMA.md flagged as aspirational:
- The doc was dated 2026-03-28 and presented as "single source of truth",
but cross-checking against actual service source files (config.go,
main.py, start.sh) shows nothing matches. Added a prominent warning at
the top with the real ports + two confirmed collisions:
* mana-image-gen and mana-video-gen both default to PORT 3026
* mana-voice-bot and mana-sync both default to PORT 3050
Today these are masked because image-gen + voice-bot live on the
Windows GPU server while video-gen + sync live on the Mac Mini, but
the moment they share a host they collide. Either execute the planned
reorg or pick non-colliding ports and rewrite the doc to match
reality — flagged as a real follow-up.