Wires the "Als Notiz speichern" action at the bottom of the Kontext
widget (UI itself landed in 003f75f7e) to actually open Notes next
to Kontext and focus the new note:
- workbench-scenes: new addAppAfter(appId, anchorAppId). addApp()
always appended, which pushed Notes to the far end of the
carousel; addAppAfter inserts directly after the anchor (Kontext)
and no-ops if the target is already open so the user's current
position isn't yanked around.
- notes/stores/selection: new transient in-memory focus signal
(focusedNoteId) that cross-module callers populate. Kept
non-persistent intentionally — surviving a remount would re-open
random notes after page loads.
- notes/ListView: $effect reads focusedNoteId, waits for the
Dexie liveQuery to surface the just-created row, opens it in
the inline editor, clears the focus signal, then scrolls the
matching data-note-id element into view via queueMicrotask so
the DOM has rendered the editor variant.
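The insertion rule can be sketched as pure list logic (illustrative names and a returned-array shape — the real workbench-scenes store mutates scene state):

```typescript
// Hypothetical sketch of addAppAfter's semantics per the commit:
// no-op when already open, insert right after the anchor, append
// when the anchor isn't open (matching addApp()'s old behaviour).
export function addAppAfter(
  openApps: string[],
  appId: string,
  anchorAppId: string,
): string[] {
  if (openApps.includes(appId)) return openApps; // already open: no-op
  const anchorIdx = openApps.indexOf(anchorAppId);
  const insertAt = anchorIdx === -1 ? openApps.length : anchorIdx + 1;
  return [...openApps.slice(0, insertAt), appId, ...openApps.slice(insertAt)];
}
```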
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire the Mission Key-Grant feature into the production Mac Mini
compose stack so mana-ai can boot and mana-auth can mint grants.
- New mana-ai service block (port 3066) — 256m mem limit, depends on
postgres + mana-llm, tick interval configurable via
MANA_AI_TICK_INTERVAL_MS / MANA_AI_TICK_ENABLED. Pulls
MANA_AI_PRIVATE_KEY_PEM from env; absent = grants silently disabled.
- mana-auth environment gains MANA_AI_PUBLIC_KEY_PEM (default empty
so existing deployments without the keypair degrade to 503
GRANT_NOT_CONFIGURED rather than failing to boot).
- mana-auth Dockerfile rewritten to the two-stage pnpm+bun pattern
used by mana-credits/mana-events — required now that mana-auth has
a @mana/shared-ai workspace dep. The previous single-stage
Dockerfile with its service-scoped build context couldn't resolve
@mana/* imports at all; historically it only worked because a
pre-built layer satisfied them at runtime.
- mana-ai Dockerfile copies packages/shared-ai into the installer
stage alongside shared-hono.
The build contexts for mana-auth flip from services/mana-auth to the
repo root. Existing CI/CD paths (scripts/mac-mini/build-app.sh) pass
through to docker compose build and pick up the new context
automatically — no script edits needed.
Flip-on procedure: on the Mac Mini, set MANA_AI_PUBLIC_KEY_PEM +
MANA_AI_PRIVATE_KEY_PEM in .env (already done, see
secrets/mana-ai/README.md on the host), then rebuild mana-auth +
build mana-ai.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Frontend plumbing for the Kontext "Aus URL" inline panel (the UI
itself was committed earlier as part of 003f75f7e):
- kontext/api.ts: new module-scoped fetch wrapper that talks to
/api/v1/context/import-url with a Bearer token. Kept in kontext/
rather than a shared helper because it's the only caller; moving
it to `context/` would mix two unrelated modules again.
- kontext/stores/kontext.svelte.ts: new appendContent(chunk) method.
The singleton row is encrypted at rest, so we decrypt the current
content before concatenating with a '\n\n---\n\n' separator and
writing back — going through setContent() to keep encryption +
Dexie hook behaviour consistent.
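The concatenation step reduces to the following sketch (the real appendContent decrypts before and re-encrypts after via setContent(); the empty-document case is an assumption):

```typescript
// Separator from the commit; empty-content behaviour is assumed.
const SEPARATOR = "\n\n---\n\n";

export function appendChunk(current: string, chunk: string): string {
  // assumed: an empty Kontext doc takes the chunk verbatim, no separator
  if (current.trim().length === 0) return chunk;
  return current + SEPARATOR + chunk;
}
```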
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New backend endpoint that wraps mana-crawler + mana-llm in a single
call so the Kontext "Aus URL" UI can hit one route:
- Starts a crawl job (single page or up-to-20-page deep crawl) via
mana-crawler's /api/v1/crawl, polls status up to 90s, then fetches
paginated results.
- When multiple pages are returned, joins them into one markdown
document with H1-per-page section headers separated by ---.
- When summarize=true, routes the collected markdown through
mana-llm/chat/completions with a system prompt that asks for
"Überblick / Kernaussagen / Details" H2 structure in the source
language. sanitizeSummary() strips the common local-LLM artefacts
(```markdown fences, "Hier ist …:" preamble, stray leading H1)
so the output drops cleanly into the Kontext doc. On summary
failure the endpoint returns 502 rather than silently falling
back to the raw crawl.
- Credits are validated + consumed via @mana/shared-hono/credits
(1 credit crawl-only, 5 crawl+summary) under the new
AI_CONTEXT_IMPORT_URL action.
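A hedged sketch of what sanitizeSummary() might look like, per the artefacts the commit lists — unwrapping a markdown code fence, dropping a one-line "Hier ist …:" preamble, and a stray leading H1. The regexes are illustrative assumptions, not the shipped code:

```typescript
export function sanitizeSummary(raw: string): string {
  let text = raw.trim();
  // unwrap a full markdown code fence around the whole reply
  const fence = text.match(/^```(?:markdown)?\n([\s\S]*?)\n```$/);
  if (fence) text = fence[1].trim();
  // drop a one-line "Hier ist …:" style preamble
  text = text.replace(/^Hier ist[^\n]*:\n+/i, "");
  // drop a stray leading H1 — the Kontext doc supplies its own heading
  text = text.replace(/^#\s[^\n]*\n+/, "");
  return text.trim();
}
```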
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three small config changes so the Kontext "Aus URL" flow (next commit)
is runnable from a plain `pnpm dev:mana:all`:
- package.json: include mana-crawler in the dev:mana:servers
concurrently group, and pass DATABASE_URL=…/mana_platform so the
Go binary doesn't try to connect to a non-existent `mana` DB (its
hardcoded default).
- .env.development: publish MANA_CRAWLER_URL=http://localhost:3023
(the crawler's default binary port — the macmini container is
a 3014 override, kept only in docker-compose). Also surface
MANA_LLM_DEFAULT_MODEL for the summariser.
- docker-compose.macmini.yml: inject MANA_CRAWLER_URL + the
default-model env into the mana-api container so production
can reach the internal crawler and pick the summariser model
consistently.
No runtime code touched.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-push runs svelte-check --fail-on-warnings. Two items were blocking:
- +layout.svelte: drop four .pill-nav-toggle* rules that have no
matching markup anywhere in the app (dead CSS from the pill-nav
rework that removed the collapse toggle).
- kontext KontextView.svelte: drop the explicit `: Phase[]` annotation
on `let importPhases = $derived.by(...)`. Svelte 5's runes return a
wrapped type that TypeScript reads as a function when the annotation
is present, which is what produced the "not callable" + "state
invalid placement" chain. Inferred type is the same Phase[] shape.
No runtime behaviour change.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the "iteration is running, no feedback" black hole. The user now
sees, per running iteration:
⏳ Frage Planner · frage Planner an ⏱ 23s
[Abbrechen]
Phases (`IterationPhase`):
resolving-inputs → calling-llm → parsing-response →
staging-proposals → finalizing
The runner advances through these via `setIterationPhase` between each
await, writing currentPhase + phaseDetail + phaseStartedAt onto the
iteration. UI reads them via Dexie liveQuery — no polling.
Cancel:
- `requestIterationCancel` writes cancelRequested=true on the iteration
- runner polls `isCancelRequested` between every phase + per stage step
- cancellation finalises as `failed` with summary `'cancelled by user'`
- UI button is disabled + relabelled "Wird abgebrochen…" until the next
poll picks it up
Hard timeout: 90s wall-clock per iteration via Promise.race against a
CancelledError. Wedged backends (e.g. flaky mana-llm) fail fast with
"timeout after 90s" instead of sitting in `running` forever.
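A minimal sketch of that wall-clock guard, assuming a CancelledError class — the runner's actual names and wiring may differ:

```typescript
class CancelledError extends Error {}

// Race the real work against a rejecting timer; clear the timer either
// way so it can't keep the process alive after the race settles.
export async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new CancelledError(`timeout after ${ms / 1000}s`)),
      ms,
    );
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```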
Elapsed counter is a $state variable ticking once a second, scoped to
the ListView component — Dexie isn't touched. Auto-cleaned on
component destroy.
shared-ai re-exports `IterationPhase` so server-side mana-ai can
inspect the same phase enum (no consumer there yet, but the type is
ready for the run-status endpoint planned in HEALTH page).
77/77 webapp tests still green; svelte-check clean.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 4 — everything needed to flip the Mission Key-Grant feature on
safely per deployment. No new behaviour; purely operational plumbing.
- PUBLIC_AI_MISSION_GRANTS feature flag (default off). hooks.server.ts
injects window.__PUBLIC_AI_MISSION_GRANTS__, api/config.ts exposes
isMissionGrantsEnabled(). Grant UI (dialog + status box) and the
Workbench "Datenzugriff" tab both hide when the flag is off.
- PUBLIC_MANA_AI_URL added to the injection set so the webapp can reach
the new audit endpoint from production.
- Prometheus alerts (new mana_ai_alerts group):
- ManaAIServiceDown (warning, 2m)
- ManaAIGrantScopeViolation (critical, 0m) — MUST stay at 0; any
increment pages immediately
- ManaAIGrantSkipsHigh (warning, 15m) — flags keypair drift
- ManaAIPlannerParseFailures (warning, 10m) — prompt/LLM drift
- Runbook in docs/plans/ai-mission-key-grant.md: initial keypair gen,
leak-response procedure (rotate + invalidate all grants + audit),
scope-violation triage.
- User-facing doc in apps/docs security.mdx: new "AI Mission Grants"
section with the three hard constraints (ZK users blocked, scope
changes invalidate cryptographically, revocation is one click) plus
an honest threat-model comparison column showing where grants shift
the tradeoff.
Rollout remaining (not code): generate keypair on Mac Mini, provision
MANA_AI_PRIVATE_KEY_PEM + MANA_AI_PUBLIC_KEY_PEM via Docker secrets,
flip PUBLIC_AI_MISSION_GRANTS=true starting with till-only.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 3 — user-facing side of the Mission Key-Grant rollout. Users
can now opt into server-side execution, revoke it, and inspect every
decrypt the runner has performed.
Webapp:
- MissionGrantDialog explains the scope (record count, tables, TTL,
audit visibility, revocation) and calls requestMissionGrant. Error
paths render distinctly for ZK, not-configured, missing vault.
- Mission detail shows a Server-Zugriff box with status pill
(aktiv/abgelaufen/nicht erteilt), Neu-erteilen + Zurückziehen
buttons. Only renders for missions with at least one encrypted-
table input.
- store.ts: setMissionGrant / revokeMissionGrant helpers, Proxy-
stripped like the rest of the store's writes.
- Workbench adds a Timeline/Datenzugriff tab switch. Audit tab queries
the new GET /api/v1/me/ai-audit endpoint, renders decrypt events
with color-coded status pills (ok/failed/scope-violation) and
stable reason strings.
- getManaAiUrl() added to api/config for the audit fetch.
mana-ai:
- GET /api/v1/me/ai-audit (JWT-gated via shared-hono authMiddleware)
backed by readDecryptAudit() — withUser + RLS double-gate so a user
can only read their own rows.
- Limit capped at 1000, newest-first.
Missions without a grant continue to work exactly as before; the
grant UI is purely additive.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Browser `structuredClone` itself fails on Svelte 5 $state Proxies
("Failed to execute 'structuredClone' on 'Window': [object Array]
could not be cloned") — it doesn't transparently unwrap the Proxy the
way I'd hoped. The structured-clone algorithm refuses any non-cloneable
host object, including Svelte's reactive wrappers.
JSON.parse(JSON.stringify(...)) traverses through the Proxy by reading
each property normally, producing a plain-data copy that Dexie can
serialise without complaint.
Mission payloads are pure JSON values (strings/numbers/arrays/objects)
so JSON-roundtrip is lossless. The new `deepClone` helper is local to
this file with a comment pointing at the structured-clone failure.
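The helper itself reduces to a one-liner; a sketch of the shape described above:

```typescript
// JSON roundtrip reads each property *through* the $state Proxy,
// yielding plain data Dexie can structured-clone. Only safe because
// mission payloads are pure JSON values (no Dates/Maps/undefined).
export function deepClone<T>(value: T): T {
  return JSON.parse(JSON.stringify(value)) as T;
}
```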
77/77 webapp tests still green.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
`createMission`, `updateMission`, and `{start,finish}Iteration` all
received caller-supplied objects that can be Svelte 5 $state Proxies
(MissionInputPicker binds the inputs array with $state). IndexedDB's
structured-clone algorithm doesn't accept proxied arrays and throws
`DataCloneError: [object Array] could not be cloned` — visible to
users as "Mission anlegen" failing silently after clicking Create.
Wrap each proxy-carrying payload in `structuredClone()` at the store
boundary:
- createMission: `inputs` + `cadence`
- updateMission: whole `patch` (anything can be proxy)
- startIteration: `plan`
- finishIteration: `plan` (conditional)
`structuredClone` is the native browser / Bun helper; strips Proxies
while preserving Dates / Maps / Sets / nested plain data. Store stays
robust to any future caller that forgets to snapshot before passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 2 of Mission Key-Grant. The tick loop now honours a mission's
grant by unwrapping the MDK and passing it + the record allowlist into
the resolvers. Encrypted modules (notes, tasks, calendar, journal,
kontext) resolve server-side instead of returning null.
- crypto/decrypt-value.ts: mirror of webapp AES-GCM wire format
(enc:1:<iv>.<ct>) — read-only, server never wraps
- db/resolvers/encrypted.ts: factory + 5 concrete resolvers. Scope-
violation bumps a metric + writes a structured audit row, decrypt
failures same. Zero-decrypt (no grant, or record absent) = silent
null, no audit noise.
- db/audit.ts: best-effort append to mana_ai.decrypt_audit; write
failures never cascade into tick failures.
- cron/tick.ts: buildResolverContext unwraps grant per mission; MDK
reference only lives for the scope of planOneMission.
- ResolverContext plumbed through resolveServerInputs; existing goals
resolver unchanged semantically.
- Metrics: mana_ai_decrypts_total{table}, mana_ai_grant_skips_total{reason},
  mana_ai_grant_scope_violations_total{table} (alert > 0).
Missions without a grant still run exactly as before — plaintext
resolvers fire, encrypted ones short-circuit to null. No behaviour
regression for existing users.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 1 of the Mission Key-Grant rollout. Webapp can now request a
wrapped per-mission data key; mana-ai can unwrap and (Phase 2) use it.
mana-auth:
- POST /api/v1/me/ai-mission-grant — HKDF-derives MDK from the user
master key, RSA-OAEP-2048-wraps with the mana-ai public key, returns
{ wrappedKey, derivation, issuedAt, expiresAt }
- MissionGrantService refuses zero-knowledge users (409 ZK_ACTIVE) and
returns 503 GRANT_NOT_CONFIGURED when MANA_AI_PUBLIC_KEY_PEM is unset
- TTL clamped to [1h, 30d]
mana-ai:
- configureMissionGrantKey + unwrapMissionGrant with structured failure
reasons (not-configured / expired / malformed / wrap-rejected)
- mana_ai.decrypt_audit table + RLS policy scoped to
app.current_user_id — append-only row per server-side decrypt attempt
- MANA_AI_PRIVATE_KEY_PEM env slot; absent = grants silently disabled
No existing behaviour changes: missions without a grant run exactly as
before. Grant flow is wired end-to-end but unused until Phase 2 lands
the encrypted resolver.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Foundation for Phase 2+ of the Mission Key-Grant flow: lets mana-ai
execute missions that depend on encrypted inputs (notes/tasks/events/
journal/kontext) without needing an open browser tab. Opt-in per
mission, Zero-Knowledge users excluded.
- Canonical HKDF-SHA256 derivation (scope-bound via tables + recordIds
in the HKDF info string → scope changes invalidate the grant
cryptographically, not just via a runtime check)
- Mission.grant field on the shared Mission type
- Golden snapshot + drift-guard test so webapp wrap path and mana-auth
wrap endpoint can't silently diverge
- Ideas backlog at docs/future/AI_AGENTS_IDEAS.md
- Full rollout plan at docs/plans/ai-mission-key-grant.md
- COMPANION_BRAIN_ARCHITECTURE.md §21 captures the flow + privacy
guarantees + non-goals
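The scope-binding idea can be sketched with Node's HKDF — tables + recordIds feed the info parameter, so a changed scope derives a different key. The sorted-JSON info encoding below is an illustration, not the canonical derivation shipped in shared-ai:

```typescript
import { hkdfSync } from "node:crypto";

export function deriveScopedKey(
  masterKey: Uint8Array,
  tables: string[],
  recordIds: string[],
): Buffer {
  // scope goes into HKDF info: any change in tables/recordIds yields a
  // different MDK, invalidating old grants cryptographically
  const info = JSON.stringify({
    v: 1,
    tables: [...tables].sort(),
    recordIds: [...recordIds].sort(),
  });
  return Buffer.from(hkdfSync("sha256", masterKey, new Uint8Array(0), info, 32));
}
```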
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Groups the six AI-Workbench apps plus the Companion chat under a
dedicated category, moved to order 0 so they're the first thing a user
sees when adding a page. Brain icon.
- `categories.ts`
- New `ai` type added to AppCategory union at order 0
- Old 'companion' category stays for derived projections (myday,
activity, goals) that aren't themselves AI-driven — renamed doc
only, behavior unchanged
- APP_CATEGORY_MAP reassigns: companion, ai-missions, ai-workbench,
ai-rituals, ai-policy, ai-insights, ai-health → 'ai'
- `AppPagePicker.svelte` collapsed-state map gains `ai: false` so the
new category is expanded by default (it's the user's new front door)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reverts the previous /companion-carousel misstep. The user's model is
that Missions / Workbench / Rituals / Policy / Insights / Health each
live as their OWN app in the root `/` workbench scene, alongside todo /
calendar / notes / etc. — openable from the normal app picker, freely
combinable with any other module card.
- New modules, each with a ListView.svelte usable inside AppPage's
PageShell (no self-wrapping, no shell-control props):
`lib/modules/ai-missions/ListView.svelte`
`lib/modules/ai-workbench/ListView.svelte`
`lib/modules/ai-rituals/ListView.svelte`
`lib/modules/ai-policy/ListView.svelte`
`lib/modules/ai-insights/ListView.svelte`
`lib/modules/ai-health/ListView.svelte`
- Registered in `app-registry/apps.ts`: `ai-missions`, `ai-workbench`,
`ai-rituals`, `ai-policy`, `ai-insights`, `ai-health`. Each picks a
distinct color + icon.
- `/companion/+page.svelte` restored to the simple chat it was before.
The companion app (chat) remains as its own registered app —
unchanged.
- Removed: `lib/modules/companion/pages/` + the
`workbench-settings.svelte.ts` store (both were the dead-end
PageCarousel approach).
User model now:
` / ` (workbench root) → [+ App] → any of the 6 AI apps + any module
/companion → full-screen chat (unchanged)
/todo, /calendar, … → module-inline ghost inbox stays
svelte-check clean; webapp AI tests 71/71 green.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Drops the split /companion/missions + /companion/workbench +
/companion/rituals sub-routes and rebuilds /companion around the
shared PageCarousel pattern (same one /todo uses). Every feature is
now a self-contained page the user opens/closes/reorders/resizes in
one unified surface.
New pages:
- **Home** (default) — compact stats + one-click shortcuts to every
other page, with "offen" badge when already open
- **Chat** — conversation sidebar + active chat inline
- **Missions** — list ↔ create ↔ detail master-detail inside one pane
- **Workbench** — timeline grouped by iteration + Revert per bucket,
Mission filter dropdown (replaces the old ?mission=… query-param)
- **Rituals** — migrated from /companion/rituals
- **Policy** — NEW: 3-way per-tool toggle (auto/propose/deny) with
localStorage-backed overrides merged into DEFAULT_AI_POLICY live
- **Insights** — NEW: approval rate, 14-day bar chart of AI events,
per-mission stats, top-5 recurring user-feedback strings. All from
local Dexie liveQueries, no server calls.
- **Health** — NEW: foreground runner status, manual tick trigger,
link out to status.mana.how for server-runner uptime
Plumbing:
- `stores/workbench-settings.svelte.ts` — persistent open-pages list
in localStorage (id + widthPx + heightPx + maximized); open/close/
resize/moveLeft/moveRight helpers
- `pages/page-meta.ts` — central registry (title, color, icon,
description) consumed by home shortcuts + PagePicker
- `pages/PagePicker.svelte` — lists available (not-yet-open) pages
with icon + description
- Old sub-routes deleted: /companion/{missions,workbench,rituals}/
Webapp tests still 71/71 green; svelte-check clean on the new pages.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wires mana-ai into the existing observability stack so tick throughput,
plan-failure rates, planner latencies, and snapshot refresh health are
visible in Grafana + Prometheus, and the service's uptime surfaces on
status.mana.how under the "Internal" section.
- `src/metrics.ts` — prom-client Registry with `mana_ai_` prefix.
Counters: ticks_total, plans_produced_total, plans_written_back_total,
parse_failures_total, mission_errors_total, snapshots_new/updated,
snapshot_rows_applied_total, http_requests_total.
  Histograms: tick_duration_seconds (0.1–120s), planner_request_duration_seconds
  (0.25–60s), http_request_duration_seconds (0.005–10s).
- `src/index.ts` — HTTP middleware labels every request by
method/path/status; `/metrics` serves the Prometheus text format.
- `src/cron/tick.ts` — increments counters + wraps the tick with
`tickDuration.startTimer()`. Snapshot stats fold through.
- `src/planner/client.ts` — wraps `complete()` in a latency histogram
timer so planner tail latency shows up separately from tick duration.
- `docker/prometheus/prometheus.yml` —
1. New `mana-ai` scrape job against `mana-ai:3066/metrics` (30s).
2. `/health` added to the `blackbox-internal` job so uptime shows on
status.mana.how alongside mana-geocoding.
- `scripts/generate-status-page.sh` — friendly label for the new probe:
`mana-ai:3066/health` → "Mana AI Runner" (generator already iterates
`blackbox-internal`, no other changes needed).
- `package.json` — prom-client ^15.1.3
All 17 Bun tests still pass; tsc clean.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sync status, user menu (bar-mode + overlay fallback) and logout now use
the shared Pill component like the rest of PillNavigation. All pill
styling now lives in a single place.
- Pill gains an escape-hatch `data?: Record<string, string>` prop for
arbitrary data-* attributes (used by the user-menu trigger which is
click()-ed via querySelector from the consuming layout) and a
bindable `element` binding for imperative focus/positioning
(replaces the old bind:this={userMenuTrigger}).
- Remove the now-dead inline .pill / .glass-pill / .logout-pill /
.pill-label / .pill.active / .pill.icon-only CSS from PillNavigation
(all living in Pill.svelte now). ~110 lines of CSS gone.
- The mobile override that forced 44px min-height on .pill is also gone;
Pill sizes are controlled via the size prop.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaces the O(N sync_changes) LWW replay in every tick with an
incremental snapshot table refresh. Each tick now applies only the
delta since the last run, then runs a single indexed SELECT on the
snapshot table to find due missions.
- `db/migrate.ts` — idempotent migration. Creates `mana_ai` schema and
`mana_ai.mission_snapshots` table on boot. Partial index on
active+nextRunAt powers the tick's "due" query.
- `db/snapshot-refresh.ts`
- `refreshSnapshots(sql)` one-pass: joins sync_changes and snapshots
on (user_id, mission_id), picks out pairs whose source max
created_at exceeds the snapshot cursor. Per-pair refresh wrapped
in `withUser` for RLS scoping on the source SELECT.
- Bootstrap: missing snapshot rows seed from a full replay of their
mission's history; subsequent ticks apply only the delta.
- Delete tombstones purge the snapshot row.
- `db/missions-projection.ts` `listDueMissions` — single SELECT against
`mana_ai.mission_snapshots` with an indexed WHERE. Dropped the legacy
cross-user scan + per-user two-phase read (unused now). `mergeAndFilter`
stays for its existing test coverage.
- `cron/tick.ts` calls `refreshSnapshots` before `listDueMissions` and
logs when the refresh actually applied rows. No behaviour change
externally.
- `index.ts` awaits `migrate()` on boot (top-level `await` — Bun
supports it natively).
Closes the last item on the AI-Workbench roadmap's "future work" list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
PillNavigation rendered three near-identical inline pill blocks (prepended
elements, main nav items, appended elements). Consolidate onto the Pill
component so the visual base stays in lockstep with the bottom-stack bars.
- Extend Pill with size='sm'|'md'. sm = 36px with 18px icons (PillNav
style); md = 44px with 20px icons (bar pills, default).
- Move the icon-only padding override into Pill itself.
- Extract the Mana-Logo SVG (duplicated inline) to ManaLogoIcon.svelte.
- Replace the three inline pill loops in PillNavigation with <Pill size='sm'>.
Mana-logo and iconSvg cases ride the `leading` snippet. onClick vs href
disambiguation is collapsed into a single Pill call per item.
- Remove the now-unreachable .pill-icon scoped CSS that was only meaningful
for the removed inline SVGs (Phosphor icon sizing comes from the size
prop).
Net: ~70 lines removed from PillNavigation.svelte without changing the
render output. Bar-mode triggers (sync / ai / theme / user) still render
inline because their logic is too entangled with activeBarId — leave for
a follow-up.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each iteration bucket now carries a Revert button that undoes every AI
write attributed to that iteration. Closes the last open Workbench
feature in the roadmap.
- `data/ai/revert/inverse-operations.ts` — pluggable registry mapping
event types to their undo actions. Ships with inverses for the five
most common proposable outcomes:
* TaskCreated → tasksStore.deleteTask
* TaskCompleted → tasksStore.toggleComplete (back to incomplete)
* CalendarEventCreated → eventsStore.deleteEvent
* PlaceCreated → placesStore.deletePlace
* DrinkLogged → drinkStore.deleteEntry
  Events with no registered inverse are tallied as `skippedUnsupported`
  — so the user knows to handle those manually instead of the service
  silently doing nothing.
- `data/ai/revert/revert-iteration.ts` — orchestrator. Filters
`_events` by `actor.iterationId + actor.missionId`, sorts
newest-first (so a completion unwinds before the underlying task
deletion), applies each inverse, returns `RevertStats` summary.
- Workbench UI: Revert button with confirm dialog on every bucket.
Shows "X zurückgenommen · Y nicht unterstützt · Z fehlgeschlagen"
result alert.
- 5 unit tests cover: happy path, unsupported types, failure isolation,
user-event skipping, newest-first ordering.
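The newest-first unwind can be sketched as follows (event shape, registry type, and stats names are assumptions in the spirit of the commit):

```typescript
interface AiEvent {
  type: string;
  createdAt: number;
}

type Inverse = (event: AiEvent) => void;

export interface RevertStats {
  reverted: number;
  skippedUnsupported: number;
}

// Undo newest-first so e.g. a TaskCompleted inverse runs before the
// TaskCreated inverse deletes the underlying task.
export function revertEvents(
  events: AiEvent[],
  inverses: Map<string, Inverse>,
): RevertStats {
  const stats: RevertStats = { reverted: 0, skippedUnsupported: 0 };
  for (const event of [...events].sort((a, b) => b.createdAt - a.createdAt)) {
    const inverse = inverses.get(event.type);
    if (!inverse) {
      stats.skippedUnsupported++; // surfaced to the user, never silent
      continue;
    }
    inverse(event);
    stats.reverted++;
  }
  return stats;
}
```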
With this, the AI Workbench has full audit + undo semantics: user sees
everything the AI did, can approve/reject at stage time, and can roll
back approved actions after the fact.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CommandBar was a near-duplicate of QuickInputBar's InputBar with the UX
of a Cmd+K modal. Only arcade still used it. Migrate arcade onto the
existing GlobalSpotlight (hosted by PillNavigation) so there is a single
Cmd+K modal across all mana apps, then remove CommandBar entirely.
Arcade changes (games/arcade/apps/web/src/routes/(app)/+layout.svelte):
- Merge commandBarQuickActions into spotlightActions (nav-level items)
- Convert handleCommandBarSearch into a ContentSearcher that returns the
game list grouped under a single 'Spiele' category
- Drop the standalone Cmd+K handler; PillNavigation + GlobalSpotlight
handle the shortcut
- Remove the <CommandBar> render; add `contentSearcher` +
spotlightPlaceholder to <PillNavigation>
shared-ui cleanup:
- Delete packages/shared-ui/src/command-bar/ (CommandBar.svelte,
CommandBar.types.ts, index.ts)
- Drop CommandBar / CommandBarItem from the public @mana/shared-ui export
- Delete docs/central-services/COMMAND-BAR.md (stale)
No more duplication: highlight + debounce live in search-core, and the
only remaining Cmd+K surface is GlobalSpotlight.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
QuickInputBar (InputBar.svelte), CommandBar.svelte, and GlobalSpotlight
all duplicated syntax highlighting and the 150ms search debounce. Pull
these into a new `packages/shared-ui/src/search-core/` module so the
input surfaces stay in sync on feel and matching rules.
- search-core/highlight.ts — HighlightPattern type, locale-aware
getHighlightPatterns(), and the shared highlightText() (HTML-escape +
span wrap). Patterns were previously in quick-input/highlightPatterns.ts
+ inline in CommandBar.svelte.
- search-core/config.ts — SEARCH_DEBOUNCE_MS = 150. Used from InputBar,
CommandBar, GlobalSpotlight, and apps/mana web SearchEngine.
- quick-input/highlightPatterns.ts + types.ts become thin back-compat
re-exports.
- Public surface: @mana/shared-ui now exports getHighlightPatterns,
highlightText, SEARCH_DEBOUNCE_MS, and the HighlightPattern type.
No UX change. UIs still live in their own files (per earlier split
recommendation: shared backend, separate surfaces).
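A sketch of the escape-then-wrap shape such a shared highlightText() can take — the real module is pattern-driven via getHighlightPatterns(); this simplified RegExp signature is an assumption:

```typescript
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Escape first, then wrap matches, so user text can never inject markup.
export function highlightText(text: string, pattern: RegExp): string {
  return escapeHtml(text).replace(pattern, (m) => `<span class="hl">${m}</span>`);
}
```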
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the "cross-user scan" caveat on the mission read path. The
earlier implementation pulled every aiMissions row server-wide and
partitioned by user_id in memory — fine for a pre-launch single-user
deploy, not for cross-user infrastructure.
New flow:
1. `listMissionUsers(sql)` — one cross-user DISTINCT query. This is
the ONLY surface that still reads across users; documented as
requiring BYPASSRLS on the service's DB role (or ownership without
FORCE).
2. `listDueMissionsForUser(sql, userId, now)` — RLS-scoped via
`withUser(sql, userId, tx => ...)` just like the write path in
  `iteration-writer.ts`. Defense-in-depth: even if the SELECT
  mis-filters, RLS drops any row whose user_id doesn't match the session
setting.
3. `listDueMissions(sql, now)` — two-phase composition of the above.
The LWW merge + due-filter logic is factored out into a pure
`mergeAndFilter(rows, userId, now)`. Fully unit-tested (6 Bun cases):
active-due happy-path, future nextRunAt, non-active state, delete
tombstone, multi-row LWW merge, userId stamping.
Matches the pattern already in use for writes (`db/connection.ts:withUser`
+ `db/iteration-writer.ts`). Docstring on `listMissionUsers` spells out
the remaining BYPASSRLS dependency so ops knows what role the service
needs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Extract Pill.svelte as the single visual primitive (44px, icon+label,
active/primary/danger variants) used by PillDropdownBar and TagStrip.
PillNav keeps its own internal .pill class (36px, icon-only-oriented).
- Extract phosphor-icon-map.ts to deduplicate the icon lookup tables
that previously lived inline in PillDropdownBar.
- Unify bar slot heights in (app)/+layout.svelte: 56px PillNav,
64px for tags / quickinput / tabbar / dropdown-bar. Remove debug
outlines. Collapse bottom-stack gap so bars sit flush below PillNav.
- SceneAppBar wrapped in 64px slot, scene-pill/app-tab 40px to match.
- Enforce single-bar policy: opening one bar closes the others.
- QuickInputBar strip-down: remove leading CheckSquare icon and trailing
nav-toggle snippet; bar is pure search input now.
- Move user-menu (last PillNav pill) to bar-mode with short content:
Einstellungen, Light/Dark/System segmented, Theme, Logout.
- Swap tabs nav icon from Columns to Cards for better readability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Planner reads `mission.iterations[].userFeedback` — not
`pendingProposals.userFeedback` — so feedback stored only on the
proposal row never reached the next plan. rejectProposal now also
appends to the matching iteration's `userFeedback` (merging with any
existing reasons via "\n· " bullets) when the proposal carries
missionId + iterationId.
Lazy imports the mission store to avoid a proposal↔mission cycle.
Best-effort: proposal rejection still succeeds even if the bubble fails.
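The merge rule reduces to (names assumed):

```typescript
// Append the rejection reason to any existing iteration feedback as
// "· " bullets, per the merge rule above.
export function mergeFeedback(existing: string | undefined, reason: string): string {
  return existing ? `${existing}\n· ${reason}` : reason;
}
```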
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rejecting a ghost-card proposal now reveals an inline textarea. Trimmed
feedback lands in `proposal.userFeedback` where the next Planner
iteration already reads it via iteration history — the AI learns from
concrete rejection reasons without needing a settings UI.
- Click "Ablehnen" → textarea + Cancel/Reject buttons replace the
approve/reject row
- Empty submission still rejects (userFeedback stays undefined so the
Planner sees "no reason given" rather than an empty string)
- `rejectingId` + `rejectDraft` are per-inbox local state; only one
reject form is open at a time
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
One-line drop-in per module page. Each module's proposals now render as
ghost cards inline wherever the user already expects to see that
module's content — no separate approvals inbox to hunt for.
Covers all four remaining modules with proposable tools in
AI_PROPOSABLE_TOOL_NAMES:
- calendar → create_event
- places → create_place, visit_place
- drink → undo_drink
- food → (auto-tools only today; the inbox is ready for future
propose-class food tools)
Every module with its own /route now shows the inbox; the only
proposable tools without an inline surface are those whose modules lack
a route of their own (todo, calendar, places, drink, food are all
covered).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- 1–9 scroll to the Nth open app on the workbench homepage; 0 opens the
app picker.
- q/w/e toggle the bottom bars (workbench tabs / search / tags); r opens
the user-menu PillDropdownBar (expanding the PillNav first if needed);
t toggles the PillNav visibility.
Adds a `data-user-menu-trigger` hook on the user pill so the layout can
drive the menu bar programmatically without duplicating its config.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Makes it physically impossible for the webapp's AI policy and the
server's tool allow-list to drift apart. Also adds the missing entries
the guard caught on its first run: `complete_tasks_by_title`,
`visit_place`, and `undo_drink` now have parameter schemas server-side too.
- `packages/shared-ai/src/policy/proposable-tools.ts`
- `AI_PROPOSABLE_TOOL_NAMES` as `const` array + literal union type
- `AI_PROPOSABLE_TOOL_SET` for set-membership checks
- Webapp `DEFAULT_AI_POLICY` derives its `propose` entries from the
shared list via `Object.fromEntries(...)` — adding a tool there is now
a one-line change in `@mana/shared-ai`
- mana-ai `AI_AVAILABLE_TOOLS`: module-load assertion compares its
hardcoded names against `AI_PROPOSABLE_TOOL_SET` and throws with a
pointed error on drift (extras in one direction, missing in the
other). Service refuses to start on mismatch — better than silent
degradation.
- Bun test (`tools.test.ts`) runs the same contract plus sanity checks
(non-empty description, required params carry docs). Vitest policy
test adds the symmetric check on the webapp side.
All three runtimes now green: webapp 66/66, shared-ai 2/2,
mana-ai 9/9 Bun tests.
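The module-load drift assertion can be sketched roughly as below; the two-name tool list is a stand-in for illustration, and the function shape is an assumption rather than the real `AI_AVAILABLE_TOOLS` code:

```typescript
// Stand-in for the shared list exported by @mana/shared-ai.
const AI_PROPOSABLE_TOOL_NAMES = ['create_event', 'visit_place'] as const;
const AI_PROPOSABLE_TOOL_SET = new Set<string>(AI_PROPOSABLE_TOOL_NAMES);

// Throws at module load if the server's hardcoded tool names drift from
// the shared policy list, in either direction.
function assertNoDrift(serverToolNames: string[]): void {
  const extra = serverToolNames.filter((n) => !AI_PROPOSABLE_TOOL_SET.has(n));
  const missing = [...AI_PROPOSABLE_TOOL_SET].filter(
    (n) => !serverToolNames.includes(n),
  );
  if (extra.length > 0 || missing.length > 0) {
    // Refuse to start rather than silently degrade.
    throw new Error(
      `Tool drift — extra on server: [${extra}] missing on server: [${missing}]`,
    );
  }
}
```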
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mission-Input-Picker now surfaces open tasks + upcoming calendar events
alongside notes / kontext / goals. When the foreground runner executes,
the corresponding resolvers decrypt the records client-side and hand real
content (title + due date / event time + description) into the Planner
prompt.
- `tasksResolver` + `tasksIndexer` — reads unencrypted subset + decrypts
via `decryptRecords('tasks', …)`. Picker shows only OPEN tasks (not
completed ones) to keep the list relevant. Resolver output is
`[status]{ · fällig date}{ \n description}` — terse by design.
- `calendarResolver` + `calendarIndexer` — similarly decrypts events;
picker prioritizes upcoming events (sorted by startIso), shows
near-term times as "bald: YYYY-MM-DDTHH:MM" to make recency obvious
- Both tables are encrypted client-side — server-side mana-ai resolvers
remain intentionally absent (per the privacy contract in
`services/mana-ai/src/db/resolvers/types.ts`)
With this, a user can create a Mission like "plan my week" and link a
few tasks + calendar events as context; the in-browser planner sees the
full picture (decrypted), while the off-tab runner still plans from
objective + concept only for those missions.
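The terse task-resolver output format can be sketched as follows; the exact spacing and field names are assumptions inferred from the format string above, not the real `tasksResolver`:

```typescript
type TaskView = { status: string; due?: string; description?: string };

// Renders one task as "[status]{ · fällig date}{\ndescription}" —
// terse by design so the Planner prompt stays small.
function formatTaskLine(task: TaskView): string {
  let line = `[${task.status}]`;
  if (task.due) line += ` · fällig ${task.due}`;
  if (task.description) line += `\n${task.description}`;
  return line;
}
```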
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Plugs plaintext-safe Mission context into the Planner prompt per tick.
Before this, `resolvedInputs: []` was always passed — the LLM only saw
the mission's concept + objective. Now goals (the only plaintext
category of linked inputs today) resolve and land in the prompt.
Privacy constraint is explicit and documented: tables in the webapp's
encryption registry (notes, kontext, journal, dreams, …) arrive at
`sync_changes.data` as ciphertext — the master key lives in mana-auth
KEK-wrapped and never reaches this service. Resolvers for encrypted
modules therefore don't exist server-side; missions referencing them
should use the foreground runner which decrypts client-side.
- `db/resolvers/types.ts` — ServerInputResolver contract
- `db/resolvers/record-replay.ts` — single-record LWW replay
(tighter WHERE than `missions-projection.ts`, used by all resolvers)
- `db/resolvers/goals.ts` — reads `companionGoals` via replayRecord,
mirrors the webapp's default goalsResolver output shape
- `db/resolvers/index.ts` — registry with `registerServerResolver` /
`unregisterServerResolver` / `resolveServerInputs`. Seeds `goals`.
Drift-tolerant: missions pointing at unregistered modules silently
skip those inputs.
- `cron/tick.ts` — wires `resolveServerInputs(sql, m.inputs, m.userId)`
into the planner input; updates the outdated "stubbed" comment
5 Bun tests over the registry (handled + unhandled + thrown +
mixed cases + seeded default).
Future: expand to plaintext tables if/when more land (habits without
free-text, dashboard configs, tags), or introduce a decrypt-via-auth
sidecar if users opt into server-side access to encrypted content.
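The drift-tolerant registry described above might look roughly like this; the resolver signature and error-swallowing behavior are assumptions matching the description, not the actual `db/resolvers/index.ts`:

```typescript
type ServerInputResolver = (
  userId: string,
  recordId: string,
) => Promise<string | null>;

const registry = new Map<string, ServerInputResolver>();

function registerServerResolver(module: string, r: ServerInputResolver) {
  registry.set(module, r);
}

async function resolveServerInputs(
  inputs: { module: string; recordId: string }[],
  userId: string,
): Promise<string[]> {
  const out: string[] = [];
  for (const input of inputs) {
    const resolver = registry.get(input.module);
    if (!resolver) continue; // drift-tolerant: unregistered modules are skipped
    try {
      const text = await resolver(userId, input.recordId);
      if (text) out.push(text);
    } catch {
      // one failing resolver must not abort the whole mission
    }
  }
  return out;
}
```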
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Completes the off-tab AI pipeline. mana-ai now writes produced plans
back to `sync_changes` as a server-sourced Mission iteration; the webapp
picks it up on next sync and translates each PlanStep into a local
Proposal via the existing createProposal flow. User sees the resulting
ghost cards in the matching module's AiProposalInbox with full mission
attribution.
Server (mana-ai v0.3):
- `db/connection.ts` — `withUser(sql, userId, fn)` RLS-scoped tx helper
mirroring the Go `withUser` pattern (SET LOCAL app.current_user_id)
- `db/iteration-writer.ts`
- `planToIteration(plan, id, now)` — shared-ai AiPlanOutput → inline
MissionIteration with `source: 'server'` + status='awaiting-review'
- `appendServerIteration(sql, input)` — INSERT sync_changes row with
op=update, data={iterations: [...]} + field_timestamps + actor
JSONB={kind:'system', source:'mission-runner'}
- `cron/tick.ts` — after parse success: build iteration, append to
mission.iterations, persist via appendServerIteration. Stats now
include `plansWrittenBack`.
Actor union:
- `packages/shared-ai/src/actor.ts` + webapp actor: `system.source` gains
`'mission-runner'` so the server's own writes are attributed correctly
and distinguishable from projection/rule writes
Webapp:
- `data/ai/missions/server-iteration-staging.ts`
- `startServerIterationStaging()` subscribes to aiMissions via Dexie
liveQuery; on each Mission update, walks iterations looking for
`source='server'` entries that haven't been staged yet
- For each such iteration: creates a Proposal per PlanStep under
`{kind:'ai', missionId, iterationId, rationale}` so policy + hooks
fire correctly
- Writes proposalIds back into plan[].proposalId + status='staged' so
other tabs and app restarts skip re-staging
- Idempotent: in-memory `processedIterations` Set + durable
proposalId marker
- Wired into (app)/+layout.svelte alongside startMissionTick
- 3 unit tests: translate server iteration → proposal, skip
already-staged, ignore browser iterations
Full pipeline now: user creates Mission in /companion/missions →
mana-ai tick picks it up → calls mana-llm → parses plan →
writes iteration → synced to webapp → staging effect creates
proposals → user approves in /todo (or any module) → task lands with
`{actor: ai, missionId, iterationId, rationale}` attribution.
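A sketch of the `planToIteration` conversion described above; the field names mirror the commit's description, but the exact shapes are assumptions rather than the real `db/iteration-writer.ts`:

```typescript
type PlanStep = { tool: string; rationale?: string };
type MissionIteration = {
  id: string;
  source: 'browser' | 'server';
  status: string;
  plan: PlanStep[];
  createdAt: string;
};

// Converts a parsed AI plan into an inline iteration attributed to the
// off-tab runner; the webapp's staging effect looks for source='server'
// entries with status='awaiting-review'.
function planToIteration(
  plan: PlanStep[],
  id: string,
  now: Date,
): MissionIteration {
  return {
    id,
    source: 'server',
    status: 'awaiting-review',
    plan,
    createdAt: now.toISOString(),
  };
}
```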
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Service now produces plans end-to-end for due missions. Takes the
shared prompt/parser from @mana/shared-ai, calls mana-llm's
OpenAI-compatible endpoint, parses + validates the response against a
server-side tool allow-list.
- `src/planner/tools.ts` — hardcoded subset of webapp tools where
policy === 'propose'. Mirror of `DEFAULT_AI_POLICY` in the webapp;
drift just means the server doesn't suggest newly-added tools
(graceful degradation). Contract test between the two lists is a
sensible follow-up.
- `src/cron/tick.ts`
- Iterates due missions, builds the shared Planner prompt per mission,
parses the LLM response, logs the resulting plan
- Per-mission try/catch so one flaky LLM response doesn't abort the
queue; stats now track `plansProduced` + `parseFailures`
- `serverMissionToSharedMission()` converts the projection shape to
the shared-ai Mission type at the boundary
- `resolvedInputs: []` today — the Planner sees concept + objective +
iteration history only. Full resolvers (notes/kontext/goals via
Postgres replay) land alongside write-back in the next PR.
- No write-back yet: the plan is logged but not persisted to
`sync_changes`. Write-back needs an RLS-scoped helper mirroring
mana-sync's `withUser` pattern — tracked explicitly as the remaining
open piece in CLAUDE.md.
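The per-mission error isolation in the tick loop can be sketched like this; the function and stat names are assumptions loosely mirroring the description of `cron/tick.ts`:

```typescript
// Plans each due mission independently so one flaky LLM response
// doesn't abort the rest of the queue.
async function tickMissions(
  missions: { id: string }[],
  planOne: (m: { id: string }) => Promise<unknown>,
): Promise<{ plansProduced: number; parseFailures: number }> {
  const stats = { plansProduced: 0, parseFailures: 0 };
  for (const mission of missions) {
    try {
      await planOne(mission);
      stats.plansProduced++;
    } catch {
      // swallow and count — the next mission still gets its turn
      stats.parseFailures++;
    }
  }
  return stats;
}
```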
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Single source of truth for AI Workbench types shared between the webapp
(Vite/SvelteKit) and the server-side mana-ai Bun service. Prevents the
two runtimes from drifting on prompt shape or mission structure.
- `@mana/shared-ai` package:
- `actor.ts` — Actor union (user | ai | system) + helpers, mirrors the
webapp's runtime type so server-side consumers parse incoming actors
without re-declaring
- `missions/types.ts` — Mission, MissionCadence, MissionInputRef,
MissionIteration, PlanStep, MissionState. Adds optional
`iteration.source: 'browser' | 'server'` to distinguish foreground
vs server-produced iterations (groundwork for proposal write-back)
- `planner/prompt.ts` — `buildPlannerPrompt` pure function
- `planner/parser.ts` — `parsePlannerResponse` strict JSON validator
- Vitest smoke tests (2) cover prompt → parse round-trip + unknown-
tool rejection
- Webapp:
- `missions/types.ts` re-exports from shared-ai, keeps webapp-local
`MISSIONS_TABLE` constant + `planStepStatusFromProposal` bridge
- `missions/planner/{types,prompt,parser}.ts` become re-export stubs
so existing imports keep working unchanged
- Existing webapp tests (60) continue to pass — the wire code didn't
move, just its home
Next: mana-ai service imports buildPlannerPrompt/parsePlannerResponse
from shared-ai + wires mana-llm + writes iteration back as a
'source=server' row (tracked in services/mana-ai/CLAUDE.md).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Background Hono/Bun service that scans mana_sync for due Missions and
will plan them via mana-llm without requiring an open browser tab.
Complements the foreground `startMissionTick` in the webapp.
v0.1 scope — scaffold that's deployable, boots cleanly, and reads real
data. Execution write-back is tracked as the next PR so we don't commit
a half-baked proposal-sync design.
Shipped:
- Hono app on :3066 with `/health` + service-key-gated `/internal/tick`
- `src/db/missions-projection.ts` — field-level LWW replay of
`sync_changes` for appId='ai' / table='aiMissions' → live Mission
records. Mirrors the webapp's `applyServerChanges` semantics against
Postgres instead of Dexie.
- `src/db/connection.ts` — bounded `postgres.js` pool (max 4, idle 30s)
- `src/cron/tick.ts` — overlap-guarded scheduler, `runTickOnce()` also
reachable via HTTP for CI/ops triggering
- `src/planner/client.ts` — mana-llm HTTP client shape
(OpenAI-compatible `/v1/chat/completions`)
- `src/middleware/service-auth.ts` — X-Service-Key gate, no end-user JWTs
reach this service
- Dockerfile + graceful SIGTERM shutdown (stops timer + releases pool)
Not yet implemented (documented in CLAUDE.md with design trade-offs):
- Prompt/parser server-side copies — today they live in the webapp.
Recommended next step: extract `@mana/shared-ai` package.
- Input resolvers for notes / kontext / goals — need projections or a
mana-sync internal endpoint
- Plan → Mission-iteration write-back + how proposals get back to the
user's device (leaning option (a): server writes iterations, the
webapp's sync effect translates them into local Proposals)
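The X-Service-Key gate reduces to a simple header comparison; this framework-free sketch illustrates the rule (the real code is a Hono middleware, and the names here are assumptions):

```typescript
// Returns true only when the caller presented the shared service key.
// No end-user JWTs reach this service — a single key gates /internal/*.
function isAuthorizedServiceCall(
  headers: Record<string, string | undefined>,
  serviceKey: string,
): boolean {
  return serviceKey.length > 0 && headers['x-service-key'] === serviceKey;
}
```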
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the cross-device attribution loop. When another device pushes a
change with `actor: { kind: 'ai', missionId, … }`, the receiving device
now persists that attribution onto the record so the Workbench timeline
and per-module ghost badges render the same way they would on the
originating device.
- `readFieldActors()` sibling helper next to `readFieldTimestamps` for
reading the per-field actor map off a record
- `applyServerChanges`:
- Insert-new-record: stamp every field with `change.actor`, set
`__lastActor` on the whole record
- Insert-as-upsert: stamp only the winning fields (same LWW condition
as the timestamp merge), update `__lastActor` to the change actor
- Field-level update: same per-field + whole-record stamping
- Pre-actor clients (change.actor undefined) fall back to USER_ACTOR so
legacy rows still have a valid stamp
- All three paths also add the new hidden keys to their "skip" lists so
incoming payloads can't smuggle old bookkeeping fields through
With this, the full pipeline is cross-device:
Device A (AI writes) → meta.actor + __lastActor + pendingChange.actor
mana-sync (Go) → persists actor JSONB on sync_changes row
Device B (sync pull) → applyServerChanges re-stamps __lastActor +
__fieldActors from the incoming change
Device B (Workbench) → renders the AI's activity from `_events` with
correct rationale + mission context
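The per-field stamping with the pre-actor fallback can be sketched as follows; `USER_ACTOR` and the hidden-key names come from the commit, while the function shape is an assumption:

```typescript
type Actor = { kind: 'user' | 'ai' | 'system' };
const USER_ACTOR: Actor = { kind: 'user' };

// Stamps the winning fields of an incoming change with its actor and
// updates the whole-record __lastActor marker.
function stampFieldActors(
  record: Record<string, unknown>,
  winningFields: string[],
  changeActor: Actor | undefined,
): Record<string, unknown> {
  // Pre-actor clients send no actor; fall back so legacy rows still
  // carry a valid stamp.
  const actor = changeActor ?? USER_ACTOR;
  const fieldActors = {
    ...(record.__fieldActors as Record<string, Actor> | undefined),
  };
  for (const field of winningFields) fieldActors[field] = actor;
  return { ...record, __fieldActors: fieldActors, __lastActor: actor };
}
```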
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Matches the wire contract the Go server just learned to persist. Every
PendingChange now carries the actor through the Dexie row into the POST
payload; SyncChange on the receiving side accepts an opaque actor blob.
- `sync.ts`
- `SyncChange.actor?: Actor` on the wire type; documented as opaque +
back-compat with pre-actor clients
- `PendingChange.actor?: Actor` — the pending-changes row already gets
an actor stamped by the Dexie hook, the type now reflects it
- `isValidSyncChange` accepts actor as an object or undefined, never
asserts internal shape (the payload is opaque by design)
- Push payload includes `actor: p.actor` alongside the other fields
- `module-registry.test.ts` — `pendingProposals` added to INTERNAL_TABLES.
It's a local-only staging table that intentionally does NOT sync
(approved writes run the underlying tool, which syncs normally).
Follow-up still open: when applyServerChanges writes a record from an
incoming change, stamp `__lastActor` + `__fieldActors` from the incoming
actor so the Workbench timeline attributes cross-device writes
correctly.
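The opaque-actor validation rule accepts an object or undefined without ever asserting the internal shape; a hypothetical sketch of that check (the real `isValidSyncChange` validates the whole change, not just this field):

```typescript
// The actor payload is opaque to the sync layer by design: any object
// passes, absence passes, everything else fails.
function actorFieldIsValid(actor: unknown): boolean {
  return actor === undefined || (typeof actor === 'object' && actor !== null);
}
```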
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds an opaque JSON `actor` column alongside the existing field_timestamps
so cross-device consumers can distinguish user / ai / system writes. The
server never parses the shape — it just stores and re-emits the blob the
webapp stamped in its Dexie hook.
- `sync/types.go` — Change.Actor as json.RawMessage with omitempty; nil
for pre-actor clients so wire remains backward-compatible
- `store/postgres.go`
- Migrate: CREATE TABLE includes `actor JSONB` for fresh DBs;
ALTER TABLE ADD COLUMN IF NOT EXISTS actor JSONB for existing ones
(idempotent, safe to re-run)
- RecordChange signature takes json.RawMessage; pgx writes nil as NULL
- All three SELECT paths (GetChangesSince, GetAllChangesSince,
StreamAllUserChanges) return actor, Scan into ChangeRow.Actor
- ChangeRow.Actor added with doc noting "missing = user" consumer rule
- `sync/handler.go` — Change.Actor threaded through HandleSync →
RecordChange, and populated on both changeFromRow (pull/POST replies)
and convertChanges (SSE stream)
- Tests: roundtrip of an AI-actor payload + omitempty verification for
pre-actor clients. All existing tests still pass.
Webapp types still need `actor?: Actor` on SyncChange + PendingChange to
match the wire, and applyServerChanges needs to stamp __lastActor /
__fieldActors from incoming changes for Workbench attribution on other
devices — both tracked as separate follow-ups.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>