Completes the off-tab AI pipeline. mana-ai now writes produced plans
back to `sync_changes` as a server-sourced Mission iteration; the webapp
picks it up on next sync and translates each PlanStep into a local
Proposal via the existing createProposal flow. The user sees the resulting
ghost cards in the matching module's AiProposalInbox with full mission
attribution.
Server (mana-ai v0.3):
- `db/connection.ts` — `withUser(sql, userId, fn)` RLS-scoped tx helper
mirroring the Go `withUser` pattern (SET LOCAL app.current_user_id)
- `db/iteration-writer.ts`
- `planToIteration(plan, id, now)` — shared-ai AiPlanOutput → inline
MissionIteration with `source: 'server'` + status='awaiting-review'
- `appendServerIteration(sql, input)` — INSERT sync_changes row with
op=update, data={iterations: [...]} + field_timestamps + actor
JSONB={kind:'system', source:'mission-runner'}
- `cron/tick.ts` — after parse success: build iteration, append to
mission.iterations, persist via appendServerIteration. Stats now
include `plansWrittenBack`.
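A minimal sketch of what the `withUser` helper could look like on top of postgres.js (`sql.begin` is the real postgres.js API; the `Tx`/`Sql` shapes and the `set_config` parameterization are illustrative assumptions, not the actual implementation):

```typescript
// Hypothetical shapes standing in for the postgres.js client surface.
type Tx = { unsafe(query: string, params?: unknown[]): Promise<unknown> };
type Sql = { begin<T>(fn: (tx: Tx) => Promise<T>): Promise<T> };

// Run `fn` inside a transaction whose RLS context is pinned to one user.
// set_config(..., true) makes the GUC transaction-local — equivalent to
// SET LOCAL app.current_user_id, but parameterizable.
async function withUser<T>(sql: Sql, userId: string, fn: (tx: Tx) => Promise<T>): Promise<T> {
  return sql.begin(async (tx) => {
    await tx.unsafe(`SELECT set_config('app.current_user_id', $1, true)`, [userId]);
    return fn(tx);
  });
}
```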
Actor union:
- `packages/shared-ai/src/actor.ts` + webapp actor: `system.source` gains
`'mission-runner'` so the server's own writes are attributed correctly
and distinguishable from projection/rule writes
Webapp:
- `data/ai/missions/server-iteration-staging.ts`
- `startServerIterationStaging()` subscribes to aiMissions via Dexie
liveQuery; on each Mission update, walks iterations looking for
`source='server'` entries that haven't been staged yet
- For each such iteration: creates a Proposal per PlanStep under
`{kind:'ai', missionId, iterationId, rationale}` so policy + hooks
fire correctly
- Writes proposalIds back into plan[].proposalId + status='staged' so
other tabs and app restarts skip re-staging
- Idempotent: in-memory `processedIterations` Set + durable
proposalId marker
- Wired into (app)/+layout.svelte alongside startMissionTick
- 3 unit tests: translate server iteration → proposal, skip
already-staged, ignore browser iterations
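The two idempotency layers could be sketched roughly like this (names such as `shouldStage` and the iteration shape are illustrative, not the real API):

```typescript
// In-memory guard: survives only for the lifetime of this tab.
const processedIterations = new Set<string>();

type ServerIteration = {
  id: string;
  source?: 'browser' | 'server';
  plan: { proposalId?: string }[];
};

function shouldStage(it: ServerIteration): boolean {
  if (it.source !== 'server') return false;            // browser iterations are already local
  if (processedIterations.has(it.id)) return false;    // handled earlier in this tab
  if (it.plan.some((s) => s.proposalId)) return false; // durable marker: staged by some tab before
  processedIterations.add(it.id);
  return true;
}
```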
Full pipeline now: user creates Mission in /companion/missions →
mana-ai tick picks it up → calls mana-llm → parses plan →
writes iteration → synced to webapp → staging effect creates
proposals → user approves in /todo (or any module) → task lands with
`{actor: ai, missionId, iterationId, rationale}` attribution.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Service now produces plans end-to-end for due missions. Takes the
shared prompt/parser from @mana/shared-ai, calls mana-llm's
OpenAI-compatible endpoint, parses + validates the response against a
server-side tool allow-list.
- `src/planner/tools.ts` — hardcoded subset of webapp tools where
policy === 'propose'. Mirror of `DEFAULT_AI_POLICY` in the webapp;
drift just means the server doesn't suggest newly-added tools
(graceful degradation). A contract test between the two lists is a
sensible follow-up.
- `src/cron/tick.ts`
- Iterates due missions, builds the shared Planner prompt per mission,
parses the LLM response, logs the resulting plan
- Per-mission try/catch so one flaky LLM response doesn't abort the
queue; stats now track `plansProduced` + `parseFailures`
- `serverMissionToSharedMission()` converts the projection shape to
the shared-ai Mission type at the boundary
- `resolvedInputs: []` today — the Planner sees concept + objective +
iteration history only. Full resolvers (notes/kontext/goals via
Postgres replay) land alongside write-back in the next PR.
- No write-back yet: the plan is logged but not persisted to
`sync_changes`. Write-back needs an RLS-scoped helper mirroring
mana-sync's `withUser` pattern — tracked explicitly as the remaining
open piece in CLAUDE.md.
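The per-mission isolation could look roughly like this (a sketch; `planOne` stands in for the prompt → mana-llm → parse pipeline, and the names are assumptions):

```typescript
// Plan every due mission; one flaky LLM response must not abort the queue.
async function planDueMissions<M>(
  missions: M[],
  planOne: (m: M) => Promise<boolean>, // true = plan parsed successfully
): Promise<{ plansProduced: number; parseFailures: number }> {
  const stats = { plansProduced: 0, parseFailures: 0 };
  for (const mission of missions) {
    try {
      if (await planOne(mission)) stats.plansProduced++;
      else stats.parseFailures++;
    } catch {
      // throw and parse failure are recorded the same way; the loop continues
      stats.parseFailures++;
    }
  }
  return stats;
}
```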
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Single source of truth for AI Workbench types shared between the webapp
(Vite/SvelteKit) and the server-side mana-ai Bun service. Prevents the
two runtimes from drifting on prompt shape or mission structure.
- `@mana/shared-ai` package:
- `actor.ts` — Actor union (user | ai | system) + helpers, mirrors the
webapp's runtime type so server-side consumers parse incoming actors
without re-declaring
- `missions/types.ts` — Mission, MissionCadence, MissionInputRef,
MissionIteration, PlanStep, MissionState. Adds optional
`iteration.source: 'browser' | 'server'` to distinguish foreground
vs server-produced iterations (groundwork for proposal write-back)
- `planner/prompt.ts` — `buildPlannerPrompt` pure function
- `planner/parser.ts` — `parsePlannerResponse` strict JSON validator
- Vitest smoke tests (2) cover prompt → parse round-trip + unknown-tool
rejection
- Webapp:
- `missions/types.ts` re-exports from shared-ai, keeps webapp-local
`MISSIONS_TABLE` constant + `planStepStatusFromProposal` bridge
- `missions/planner/{types,prompt,parser}.ts` become re-export stubs
so existing imports keep working unchanged
- Existing webapp tests (60) continue to pass — the wire code didn't
move, just its home
Next: mana-ai service imports buildPlannerPrompt/parsePlannerResponse
from shared-ai + wires mana-llm + writes iteration back as a
'source=server' row (tracked in services/mana-ai/CLAUDE.md).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Background Hono/Bun service that scans mana_sync for due Missions and
will plan them via mana-llm without requiring an open browser tab.
Complements the foreground `startMissionTick` in the webapp.
v0.1 scope — scaffold that's deployable, boots cleanly, and reads real
data. Execution write-back is tracked as the next PR so we don't commit
a half-baked proposal-sync design.
Shipped:
- Hono app on :3066 with `/health` + service-key-gated `/internal/tick`
- `src/db/missions-projection.ts` — field-level LWW replay of
`sync_changes` for appId='ai' / table='aiMissions' → live Mission
records. Mirrors the webapp's `applyServerChanges` semantics against
Postgres instead of Dexie.
- `src/db/connection.ts` — bounded `postgres.js` pool (max 4, idle 30s)
- `src/cron/tick.ts` — overlap-guarded scheduler, `runTickOnce()` also
reachable via HTTP for CI/ops triggering
- `src/planner/client.ts` — mana-llm HTTP client shape
(OpenAI-compatible `/v1/chat/completions`)
- `src/middleware/service-auth.ts` — X-Service-Key gate, no end-user JWTs
reach this service
- Dockerfile + graceful SIGTERM shutdown (stops timer + releases pool)
Not yet implemented (documented in CLAUDE.md with design trade-offs):
- Prompt/parser server-side copies — today they live in the webapp.
Recommended next step: extract `@mana/shared-ai` package.
- Input resolvers for notes / kontext / goals — need projections or a
mana-sync internal endpoint
- Plan → Mission-iteration write-back + how proposals get back to the
user's device (leaning option (a): server writes iterations, the
webapp's sync effect translates them into local Proposals)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the cross-device attribution loop. When another device pushes a
change with `actor: { kind: 'ai', missionId, … }`, the receiving device
now persists that attribution onto the record so the Workbench timeline
and per-module ghost badges render the same way they would on the
originating device.
- `readFieldActors()` sibling helper next to `readFieldTimestamps` for
reading the per-field actor map off a record
- `applyServerChanges`:
- Insert-new-record: stamp every field with `change.actor`, set
`__lastActor` on the whole record
- Insert-as-upsert: stamp only the winning fields (same LWW condition
as the timestamp merge), update `__lastActor` to the change actor
- Field-level update: same per-field + whole-record stamping
- Pre-actor clients (change.actor undefined) fall back to USER_ACTOR so
legacy rows still have a valid stamp
- All three paths also add the new hidden keys to their "skip" lists so
incoming payloads can't smuggle old bookkeeping fields through
With this, the full pipeline is cross-device:
Device A (AI writes) → meta.actor + __lastActor + pendingChange.actor
mana-sync (Go) → persists actor JSONB on sync_changes row
Device B (sync pull) → applyServerChanges re-stamps __lastActor +
__fieldActors from the incoming change
Device B (Workbench) → renders the AI's activity from `_events` with
correct rationale + mission context
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Matches the wire contract the Go server just learned to persist. Every
PendingChange now carries the actor through the Dexie row into the POST
payload; SyncChange on the receiving side accepts an opaque actor blob.
- `sync.ts`
- `SyncChange.actor?: Actor` on the wire type; documented as opaque +
back-compat with pre-actor clients
- `PendingChange.actor?: Actor` — the pending-changes row already gets
an actor stamped by the Dexie hook, the type now reflects it
- `isValidSyncChange` accepts actor as an object or undefined, never
asserts internal shape (the payload is opaque by design)
- Push payload includes `actor: p.actor` alongside the other fields
- `module-registry.test.ts` — `pendingProposals` added to INTERNAL_TABLES.
It's a local-only staging table that intentionally does NOT sync
(approved writes run the underlying tool, which syncs normally).
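The opaque-by-design acceptance rule is small enough to sketch directly (illustrative, not the full `isValidSyncChange`; treating `null` as invalid is an assumption):

```typescript
// Accept: absent (pre-actor client) or any object. Never inspect inside —
// both the server and receiving clients treat the blob as opaque.
function actorFieldValid(actor: unknown): boolean {
  return actor === undefined || (typeof actor === 'object' && actor !== null);
}
```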
Follow-up still open: when applyServerChanges writes a record from an
incoming change, stamp `__lastActor` + `__fieldActors` from the incoming
actor so the Workbench timeline attributes cross-device writes
correctly.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds an opaque JSON `actor` column alongside the existing field_timestamps
so cross-device consumers can distinguish user / ai / system writes. The
server never parses the shape — it just stores and re-emits the blob the
webapp stamped in its Dexie hook.
- `sync/types.go` — Change.Actor as json.RawMessage with omitempty; nil
for pre-actor clients so wire remains backward-compatible
- `store/postgres.go`
- Migrate: CREATE TABLE includes `actor JSONB` for fresh DBs;
ALTER TABLE ADD COLUMN IF NOT EXISTS actor JSONB for existing ones
(idempotent, safe to re-run)
- RecordChange signature takes json.RawMessage; pgx writes nil as NULL
- All three SELECT paths (GetChangesSince, GetAllChangesSince,
StreamAllUserChanges) return actor, Scan into ChangeRow.Actor
- ChangeRow.Actor added with doc noting "missing = user" consumer rule
- `sync/handler.go` — Change.Actor threaded through HandleSync →
RecordChange, and populated on both changeFromRow (pull/POST replies)
and convertChanges (SSE stream)
- Tests: roundtrip of an AI-actor payload + omitempty verification for
pre-actor clients. All existing tests still pass.
Webapp types still need `actor?: Actor` on SyncChange + PendingChange to
match the wire, and applyServerChanges needs to stamp __lastActor /
__fieldActors from incoming changes for Workbench attribution on other
devices — both tracked as separate follow-ups.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Derived-state writes should be attributed to the projection subsystem,
not to whoever triggered the upstream event. `_streakState` is local-only
today so no cross-device user-visible effect, but once any derived
table joins sync this is the only correct model.
- `markActive` and `ensureSeeded` now run under
`runAsAsync({ kind: 'system', source: 'projection' }, …)`
- Sets the pattern for future projections (DaySnapshot, correlations, …)
to follow verbatim when they start writing persistently
Closes one of the Step-1 follow-ups tracked in
COMPANION_BRAIN_ARCHITECTURE §20. Remaining:
- mana-sync Go + Postgres migration for the `actor` field
- rule-engine to wrap its future writes the same way (no writes today)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Single-page view of everything the AI has done, grouped by mission
iteration. Closes the "what did the assistant actually touch today?"
question that used to require raw event-log spelunking.
- `data/ai/timeline/queries.ts`
- `useAiTimeline({ missionId?, module?, limit? })` — reactive live
query over `_events`, filtered to `actor.kind === 'ai'`. Over-fetches
by 3x and client-filters because `actor.kind` isn't indexed; cap at
500 entries keeps it cheap.
- `bucketByIteration(events)` — groups events sharing
`actor.missionId + actor.iterationId` into a single visual unit so
the rationale reads once per iteration rather than once per event.
Pure function, fully unit-tested.
- `routes/(app)/companion/workbench/+page.svelte`
- Buckets rendered chronologically with mission-link header + rationale
- Per-event row shows module + event type + payload title + deep-link
back into the module
- Module dropdown filter + `?mission=…` query-string for mission-scoped
views (linked from /companion/missions detail header)
- `/companion` sidebar + missions detail header now link to the Workbench
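`bucketByIteration` could be approximated as follows (the event and actor shapes are simplified assumptions):

```typescript
type AiEvent = { actor: { missionId?: string; iterationId?: string } };

// Group events that share mission + iteration so the rationale renders
// once per bucket instead of once per event.
function bucketByIteration<E extends AiEvent>(events: E[]): Map<string, E[]> {
  const buckets = new Map<string, E[]>();
  for (const event of events) {
    const key = `${event.actor.missionId ?? 'none'}:${event.actor.iterationId ?? 'none'}`;
    const bucket = buckets.get(key);
    if (bucket) bucket.push(event);
    else buckets.set(key, [event]);
  }
  return buckets;
}
```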
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes the "blind Planner" gap: users can now attach context records to
a Mission, and the Runner resolves them through the existing resolver
registry before calling the Planner. The LLM sees the actual linked
note content, not just the mission objective.
- `data/ai/missions/input-index.ts` — sibling registry to input-resolvers.
Resolvers turn a ref into prompt text (Runner path); indexers list
candidates for the picker UI (create-form path). Same shape, different
direction; keeps modules decoupled from the AI layer on both ends.
- `data/ai/missions/default-resolvers.ts` — registers indexers for
notes (title + content preview, capped at 200), kontext (the singleton
doc), goals (title + progress). Co-located with the resolvers so the
two halves stay in sync.
- `components/ai/MissionInputPicker.svelte` — drop-in picker: module
selector → candidate list → chip-style selected display. Binds
directly to `MissionInputRef[]` so forms use it as a single control.
- `/companion/missions` — picker wired into the create form between
Konzept and Cadence; detail view's meta block now lists the linked
inputs so users can see what context the Planner will see.
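The "same shape, different direction" split might look like this (all names hypothetical):

```typescript
type MissionInputRef = { module: string; id: string };

// Runner path: turn a stored ref into prompt text.
type InputResolver = (ref: MissionInputRef) => Promise<string | undefined>;
// Picker path: list what the user could link in the first place.
type InputIndexer = () => Promise<{ id: string; label: string }[]>;

const resolvers = new Map<string, InputResolver>();
const indexers = new Map<string, InputIndexer>();

// Modules register both halves once at init; neither side imports the other.
function registerMissionInputs(module: string, resolve: InputResolver, index: InputIndexer): void {
  resolvers.set(module, resolve);
  indexers.set(module, index);
}
```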
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Create, review, and control AI Missions from the app. Closes the last
UX gap in the end-to-end pipeline — users no longer need the Dexie
console to drive the Runner.
- `data/ai/missions/queries.ts` — `useMissions({ state? })` live query
+ single-mission `useMission(id)`. Decryption-ready via `decryptRecords`
wrapper (no-op today, future-proof when/if Missions get added to the
crypto registry).
- `routes/(app)/companion/missions/+page.svelte`
- Inline create form: title + objective + markdown concept + cadence
picker (manual / interval-minutes / daily-hour). Weekly + cron are
wired in state but not exposed until a richer picker is worth it.
- List / detail layout, responsive to narrow viewports.
- Detail view: run-now button (invokes runMission with productionDeps),
pause/resume/complete/delete lifecycle actions, iteration history
with per-iteration feedback form.
- Uses shared-icons + scoped CSS with theme-token fallbacks.
- `routes/(app)/companion/+page.svelte` — footer nav links to
/companion/missions and /companion/rituals from the chat sidebar.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Connects the dependency-injected Runner to the real LlmOrchestrator and
drives it on a foreground tick in the app shell. Registers sensible
default input resolvers so Missions linked to notes / kontext / goals
work without per-module opt-in.
- `data/ai/missions/setup.ts`
- `productionDeps` wires `aiPlanTask` through `llmOrchestrator.run`
- `startMissionTick(intervalMs = 60_000)` kicks an immediate run then
schedules `runDueMissions` on interval. Idempotent + overlap-guarded
so a slow LLM run can't pile up ticks.
- `stopMissionTick` clears the interval for teardown / HMR.
- `data/ai/missions/default-resolvers.ts` — resolvers for notes (title +
decrypted content), kontext (singleton markdown), goals (progress
projection). Registered once when the tick starts.
- `(app)/+layout.svelte` wires startMissionTick into the idle-phase init
block alongside startEventStore / startStreakTracker / etc., and
stopMissionTick into the teardown path.
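The idempotent, overlap-guarded tick reduces to roughly this (a sketch: the real function wires `runDueMissions` with `productionDeps` internally, while here `run` is injected for clarity):

```typescript
let timer: ReturnType<typeof setInterval> | undefined;
let running = false;

function startMissionTick(run: () => Promise<void>, intervalMs = 60_000): void {
  if (timer) return; // idempotent: a second start is a no-op
  const tick = async () => {
    if (running) return; // overlap guard: a slow LLM run can't pile up ticks
    running = true;
    try {
      await run();
    } finally {
      running = false;
    }
  };
  void tick(); // immediate run on start
  timer = setInterval(tick, intervalMs);
}

function stopMissionTick(): void {
  if (timer) clearInterval(timer);
  timer = undefined;
}
```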
System is now end-to-end runnable in the browser: create a Mission with
cadence 'interval', wait for the tick, see proposals appear in
/todo's AiProposalInbox. Missions UI (create/edit form) still open.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Executes one iteration end-to-end: resolve Mission inputs → build the
policy-filtered tool allowlist → invoke the Planner → stage each
PlannedStep as a Proposal (or auto-run if policy says so) → finalize
the iteration with summary + status.
- `data/ai/missions/runner.ts`
- `runMission(id, deps)` runs a single iteration. Planner + stageStep
are injected so the Runner is unit-testable without a live LLM.
- `runDueMissions(now, deps)` scans for active missions past their
nextRunAt and runs each once. Safe to call on a foreground tick.
- Reuses the iteration id returned by `startIteration` so
`finishIteration` updates the same row (fixed a dup-id bug the
tests caught).
- `data/ai/missions/input-resolvers.ts` — registry: modules register a
resolver at init, Runner looks up by module name. Missing resolvers
degrade gracefully to "fewer inputs", never crash a run.
- `data/ai/missions/available-tools.ts` — exposes only tools the AI
policy rates non-`deny`. Defence-in-depth with the executor + parser.
overallStatus derivation:
- 0 steps → 'approved' (no-op run is valid)
- all steps failed → 'failed'
- any step staged (proposal id) → 'awaiting-review'
- all steps ran auto → 'approved'
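That derivation as a sketch (the step shape is an assumption):

```typescript
type StepOutcome = { failed: boolean; proposalId?: string };

function deriveOverallStatus(steps: StepOutcome[]): 'approved' | 'failed' | 'awaiting-review' {
  if (steps.length === 0) return 'approved';                    // a no-op run is valid
  if (steps.every((s) => s.failed)) return 'failed';
  if (steps.some((s) => s.proposalId !== undefined)) return 'awaiting-review';
  return 'approved';                                            // everything ran automatically
}
```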
Planner throw is caught and recorded as a failed iteration — one bad
mission can't stall the queue.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Turns a Mission (concept + objective + linked inputs + iteration history)
into a structured plan of tool-call proposals via the shared
LlmOrchestrator.
- `data/ai/missions/planner/types.ts` — AiPlanInput, AiPlanOutput,
PlannedStep, ResolvedInput, AvailableTool
- `data/ai/missions/planner/prompt.ts` — pure builder producing system +
user messages. System prompt enforces a strict fenced-JSON contract and
lists available tools with parameter schema. User prompt injects the
mission content, resolved input records, and the last 3 iterations
(especially any userFeedback so the planner can course-correct).
- `data/ai/missions/planner/parser.ts` — strict parser with a
discriminated ParseResult union. Rejects unknown tools, missing
rationale, malformed shape. Tolerates missing optional fields.
- `llm-tasks/ai-plan.ts` — aiPlanTask LlmTask, minTier 'browser',
contentClass 'personal'. On parse failure returns an empty plan with
an explanatory summary rather than throwing, so the Runner can record
a failed iteration without killing the queue.
No Runner yet — the planner is pure (input in, plan out). Runner (next
commit) will resolve inputs from modules, invoke the task, stage each
PlannedStep as a Proposal under the AI actor, and update the Mission
iteration.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Foundational entity for the AI Workbench Runner. A Mission carries the
user's standing instruction (concept + objective), references to the
modules it should draw context from, a cadence, and an append-only
history of iterations. Each iteration records the plan the AI generated
for that run plus the resulting proposal statuses and user feedback.
- `data/ai/missions/types.ts` — Mission, PlanStep, MissionIteration,
MissionCadence union (manual / interval / daily / weekly / cron)
- `data/ai/missions/cadence.ts` — pure `nextRunForCadence(cadence, from)`
used by the store on create / update / finishIteration
- `data/ai/missions/store.ts` — CRUD + lifecycle
(pause / resume / complete / archive / delete) + iteration helpers
(start / finish / addFeedback)
- `data/ai/module.config.ts` — new `ai` sync app; Missions sync
cross-device (unlike the local-only `pendingProposals`)
- `db.version(18)` adds the `aiMissions` Dexie store with indexes on
state, createdAt, nextRunAt, and [state+nextRunAt] for the Runner's
"due now" query
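A sketch of the pure cadence helper (weekly/cron omitted; the union member shapes are assumptions about the real types):

```typescript
type Cadence =
  | { kind: 'manual' }
  | { kind: 'interval'; minutes: number }
  | { kind: 'daily'; hour: number };

// undefined = no scheduled run (manual missions only run when asked).
function nextRunForCadence(cadence: Cadence, from: Date): Date | undefined {
  switch (cadence.kind) {
    case 'manual':
      return undefined;
    case 'interval':
      return new Date(from.getTime() + cadence.minutes * 60_000);
    case 'daily': {
      const next = new Date(from);
      next.setHours(cadence.hour, 0, 0, 0);
      if (next <= from) next.setDate(next.getDate() + 1); // today's slot already passed
      return next;
    }
  }
}
```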
Iterations live inline on the Mission record — append-only, small N,
always loaded together by the Planner. No separate child table.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- COMPANION_BRAIN_ARCHITECTURE §20: Actor model, policy layer,
pendingProposals lifecycle, ghost-UI pilot, roadmap, open follow-ups,
manual test snippet
- DATA_LAYER_AUDIT §9: new Actor columns on records
(`__lastActor`, `__fieldActors`), `pendingProposals` table, write-path
diagrams for user / AI / approval, open mana-sync Go + Postgres work
- apps/mana/CLAUDE.md: short AI Workbench section with pointers + Dexie
hook now lists actor stamping
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First pilot of the AI Workbench ghost-state pattern. A reusable
`<AiProposalInbox module="todo" />` component renders pending proposals
for a given module as dashed-outline ghost cards above the real content —
zero UI when the AI is idle, approve / reject inline when it's not.
- `data/ai/proposals/queries.ts` — reactive `useAiProposals` live query
with module / status / missionId filters. Module filter resolves via
the tool registry so each proposal auto-routes to the right page.
- `components/ai/AiProposalInbox.svelte` — the drop-in inbox component.
Shows tool description + params + AI rationale; approve runs the
original intent under the AI actor context (preserving attribution),
reject stores the row with status=rejected for the next planner pass.
- Wired into /todo for the pilot. Other modules opt in by adding one
line once their tools land in DEFAULT_AI_POLICY.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The prior dance (93bb94a12 drop .ts, bb278fb3c switch to .js) kept
breaking one consumer or the other:
- bare specifiers (no extension) satisfied svelte-check but broke the
Node ESM loader invoked via @tailwindcss/node during SSR — SSR of
every (app) route 500'd with ERR_MODULE_NOT_FOUND on 'theme'.
- .js extensions satisfied svelte-check and Vite but still broke the
Tailwind loader, because the files on disk are .ts — Node ESM walks
the actual filesystem and can't rewrite .js → .ts the way tsc does
at type-check time.
Flip the web app's tsconfig to "allowImportingTsExtensions": true and
put the .ts extensions back. tsc now accepts the imports, and Node's
loader finds the real file on disk. No build step, no emit, and the
shared-types package stays a pure source-only TS workspace.
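The relevant tsconfig bits (illustrative excerpt; `allowImportingTsExtensions` is only legal in a no-emit setup, which matches this workspace):

```jsonc
{
  "compilerOptions": {
    // accept `import { tokens } from './theme.ts'` in source
    "allowImportingTsExtensions": true,
    // tsc never emits JS for these files — Vite/Node handle the real resolution
    "noEmit": true
  }
}
```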
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the staging layer that turns AI-attributed tool calls into user-reviewed
proposals instead of direct writes.
- `data/ai/policy.ts` — per-tool AiPolicy (`auto` | `propose` | `deny`) with
module-level defaults and a global fallback. `user` and `system` actors
always bypass (they ARE the decision / are trusted subsystems).
- `data/ai/proposals/` — Proposal + Intent types, store with
create/list/approve/reject/expire. Proposals are local-only (do NOT
sync); the approved write syncs through the normal module path.
- `tools/executor.ts` routes by actor+policy: `auto` runs directly under
`runAsAsync(actor, ...)`, `propose` stages a Proposal carrying rationale
+ mission metadata, `deny` refuses. `executeToolRaw` bypasses the policy
gate — used only on the approval path where consent already exists.
Default policy is conservative: read-only and append-only self-state
(log_drink, log_meal) auto-execute; everything that mutates user-visible
records defaults to propose.
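The routing decision reduces to a small sketch (actor and policy shapes simplified; names illustrative):

```typescript
type AiPolicy = 'auto' | 'propose' | 'deny';
type Actor = { kind: 'user' | 'ai' | 'system' };

// user/system bypass the gate: they ARE the decision / are trusted subsystems.
function routeToolCall(actor: Actor, policy: AiPolicy): 'run' | 'stage' | 'refuse' {
  if (actor.kind !== 'ai') return 'run';
  switch (policy) {
    case 'auto':
      return 'run';     // executes directly under runAsAsync(actor, ...)
    case 'propose':
      return 'stage';   // becomes a Proposal carrying rationale + mission metadata
    case 'deny':
      return 'refuse';
  }
}
```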
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extends the creating/updating hooks to capture the ambient actor
synchronously and freeze it onto every write:
- `__lastActor` on each record (whole-record attribution for Workbench badges)
- `__fieldActors` parallel to `__fieldTimestamps` (field-level attribution
for inline diff rendering — e.g. "AI changed due date, user changed title")
- `actor` on `_pendingChanges` rows so mana-sync + cross-device views can
distinguish AI- vs user-initiated writes
Also adds `kontextDoc` to v17 (missing from schema while module was live)
alongside the new `pendingProposals` table for staged AI intents.
Actor is captured in-hook rather than at emit time because
`trackPendingChange` is deferred via setTimeout and would otherwise lose
ambient context.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces a discriminated Actor union (user | ai | system) threaded through
the event pipeline so downstream consumers can distinguish human writes from
AI-initiated ones and derived subsystem writes.
- `EventMeta.actor: Actor` is required (no legacy fallback — pre-launch)
- `emitDomainEvent` takes an options bag `{ actor?, causedBy? }`; falls back
to the ambient actor set by `runAs` / `runAsAsync`
- `runAs` / `runAsAsync` pin the actor at defined boundaries (tool executor,
mission runner, projection dispatcher) — primitives capture the actor
synchronously, so ambient context is never the source of truth past the
write moment
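One way the ambient pinning could be implemented (a sketch using a module-level slot; the caveat in the comments is exactly why primitives must capture synchronously):

```typescript
type Actor =
  | { kind: 'user' }
  | { kind: 'ai'; missionId: string; iterationId?: string }
  | { kind: 'system'; source: string };

const USER_ACTOR: Actor = { kind: 'user' };
let ambientActor: Actor = USER_ACTOR;

// Must be read synchronously: after the first `await` inside a wrapped fn,
// an interleaved task may have swapped the slot.
function currentActor(): Actor {
  return ambientActor;
}

async function runAsAsync<T>(actor: Actor, fn: () => Promise<T>): Promise<T> {
  const previous = ambientActor;
  ambientActor = actor;
  try {
    return await fn();
  } finally {
    ambientActor = previous; // restore so the pin covers only this boundary
  }
}
```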
Foundation for the AI Workbench. Follow-up: mana-sync server must accept
and persist `actor` in pending-change payloads.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New Mana module "Kontext" — one editable markdown document per user,
displayed as a workbench page. Toggle between rendered preview and a
raw textarea editor (Cmd/Ctrl+E); debounced autosave 500 ms.
Content is encrypted at rest (new `kontextDoc` table, singleton row).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Commit 93bb94a12 dropped the extensions on shared-types re-exports
to make the web app's svelte-check pass (its tsconfig has no
allowImportingTsExtensions). That satisfied tsc but broke SSR: the
dev server tripped with ERR_MODULE_NOT_FOUND on every (app) route
because Node's native ESM loader (used by downstream tooling like
@tailwindcss/node) cannot resolve bare relative specifiers without
an extension, and only Vite-owned paths got the bundler-style
resolution the fix relied on.
Switch to the TypeScript-ESM idiomatic `.js` extension. tsc with
moduleResolution: "bundler" still type-checks against the actual
.ts source, and at runtime both Vite and Node resolve `.js` the
same way — no tsconfig flag flip required.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The .mana backup parser in src/lib/data/backup/format.ts imports
inflateRaw from pako but the package was never declared. The
production Vite build fails to resolve it — pnpm let it through
locally only because some other workspace dep hoists pako into
node_modules.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- mail/ListView: add a11y_click_events_have_key_events ignore to match
the existing a11y_no_static_element_interactions suppression
- sleep/MorningLog + companion/CompanionChat: mark intentional
initial-value state reads with state_referenced_locally ignore
- goals/GoalEditor: add tabindex="-1" to the dialog role element
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- shared-types/index.ts: drop .ts import extensions (web-app tsconfig
has no allowImportingTsExtensions; bundler resolution handles it)
- backup/format.test.ts: narrow buildZip return to Uint8Array<ArrayBuffer>
so the Blob() constructor accepts it under strict lib.dom
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Survey of cross-domain AI context systems (Gemini Personal Intelligence,
Apple Intelligence, Qira) and architectural options for a Mana-wide
reasoning layer over the 40+ modules.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Local-first module with meditatePresets/Sessions/Settings tables, hub
ListView with stats + recent sessions, and SessionPlayer with
BreathingCircle + MoodPicker. Route at /meditate.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds four audit scripts (module health, inter-module coupling, per-function
cognitive complexity, D3 treemap) with generated reports under docs/ and
an iframe-embedded workbench app at /admin/complexity. Reports regenerate
weekly via the module-health GitHub Action.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
eventstream was confusingly branded "Events" in the app registry,
colliding with the real events calendar module. Renamed to activity
(DE: Aktivität) since it's a live activity feed across all modules.
cycles -> period (DE: Periode) makes the menstrual-tracking module
self-describing. Tables cycles/cycleDayLogs/cycleSymptoms renamed to
periods/periodDayLogs/periodSymptoms; field cycleId -> periodId;
TimeBlockType 'cycle' -> 'period'; domain event CycleDayLogged ->
PeriodDayLogged. Generic "cycle" usages (billing, lifecycle, breath,
bicycle, import cycles) left untouched.
Constant disambiguation: prior DEFAULT_PERIOD_LENGTH (bleeding days)
renamed to DEFAULT_BLEEDING_DAYS; prior DEFAULT_CYCLE_LENGTH (28d full
cycle) is now DEFAULT_PERIOD_LENGTH.
Pre-launch, no data migration needed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Moves the BYOK key CRUD from the standalone /settings/ai-keys subpage
directly under the new BYOK tier card in the main AI settings section.
Users now manage keys in-context where they toggle the tier.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Following the shared-icons fix (d5cabed14), audit every workspace
package's src/index.ts for top-level side effects and flag the
ones that are safe to tree-shake:
- Pure TS re-export barrels (types, theme, utils, llm, storage):
"sideEffects": false — lets Vite prune entire submodules when a
consumer only imports a subset of named exports. Matters most for
shared-llm where the orchestrator/BYOK branch isn't needed on
every route.
- Packages that ship .svelte components (branding, ui, links):
"sideEffects": ["**/*.svelte", "**/*.css"] — same tree-shaking
benefit for TS modules, but keeps Svelte component CSS injection
intact.
The state-holding submodules (shared-ui drag-state/toast,
shared-llm store, shared-links mutations) are still evaluated
whenever their exports are referenced, so behaviour is unchanged —
the flag only lets the bundler skip modules that aren't in the
dependency graph at all.
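The two flavors, as illustrative package.json excerpts:

```jsonc
// Pure TS re-export barrel (types, theme, utils, llm, storage):
{ "sideEffects": false }

// Package shipping .svelte components (branding, ui, links) — keep
// component/CSS evaluation intact while pruning pure TS modules:
{ "sideEffects": ["**/*.svelte", "**/*.css"] }
```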
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- DATA_LAYER_AUDIT.md: new section 8 covering the export/import flow
end-to-end — architecture diagram, .mana format, protocol-stability
commitments we locked in pre-launch (eventId + schemaVersion + op
vocab + tombstones-forever), encryption-boundary argument, file
map, and the remaining backup backlog (M4b, M5, signature,
resumable download, dedup table).
- services/mana-sync/CLAUDE.md: /backup/export row in API table with
explicit note that it sits outside the billing gate, new Backup /
Restore section with format sketch + split between writer.go (pure)
and handler.go (shim), test-coverage line mentions the backup cases,
project-structure tree lists backup/*.go, Security section mentions
RLS still applies to the export path.
No code changes.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Refactor: HTTP handler becomes a thin shim over a pure WriteBackup(w,
userID, createdAt, iter) function. RowIterator abstracts the store, so
tests feed synthetic ChangeRow slices and production feeds
StreamAllUserChanges. Zero behavior change in production — same bytes
on the wire.
Tests (all pass):
- TestWriteBackup_Roundtrip: three rows across two apps, assert zip has
2 entries, events.jsonl has 3 JSON lines in order, insert omits
fieldTimestamps, update surfaces them, manifest apps are sorted,
eventsSha256 equals a recomputed sha of the decompressed body.
- TestWriteBackup_EmptyUser: empty userID refused up-front.
- TestWriteBackup_NoRows: zero-row export still produces a valid zip
with an empty events.jsonl and a manifest with eventCount=0 and a
non-empty sha (sha of empty input).
- TestWriteBackup_DefaultsSchemaVersionZeroRowsToOne: legacy rows with
schema_version=0 clamp to 1 so the manifest never claims a protocol
version that never existed.
Paired with the vitest zip parser suite on the TS side, this closes
the Go-writes / JS-reads round-trip without needing live mana-sync.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
phosphor-svelte ships a 28 MB lib of per-icon Svelte components and
does not declare "sideEffects" in its own package.json. When
@mana/shared-icons re-exports that package without its own
"sideEffects" hint, Vite/Rollup conservatively assume every
transitive module evaluation might matter and cannot aggressively
prune unused icons across chunk boundaries.
Our re-exports (index.ts: `export * from 'phosphor-svelte'` + a small
name→component registry) are pure ESM barrels with no top-level
runtime code, so flagging the package as side-effect-free is safe
and lets the bundler drop unused icons and skip evaluating the
icon-registry module from chunks that only want a named icon.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Builds synthetic PKZIP archives in-memory (compressed with the same raw
DEFLATE that the runtime inflates on the parse side) and asserts:
- round-trip through parseBackup surfaces manifest + events + matching
sha256
- events.jsonl iteration yields both records with fieldTimestamps intact
- wrong formatVersion is rejected with a clear error
- missing manifest.json or events.jsonl is rejected by name
- non-zip input is rejected at EOCD scan
- sha mismatch surfaces as differing manifest vs computed hash fields
- iterateEvents skips blanks + throws on malformed JSON
This is the only untrusted-input frontier in the backup flow, so it
earns a real test harness rather than relying on integration smoke.
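The EOCD scan behind the "non-zip input" rejection can be sketched as follows (a minimal illustration, not the parser's actual code; names are hypothetical):

```typescript
// The End Of Central Directory record is located by scanning backwards
// for its 4-byte signature PK\x05\x06 (50 4b 05 06 on disk); a
// variable-length trailing comment means it sits at no fixed offset.
const EOCD_MIN = 22; // fixed-size EOCD body without the comment

function findEOCD(buf: Uint8Array): number {
  // Latest possible start is len - 22; scan back from there.
  for (let i = buf.length - EOCD_MIN; i >= 0; i--) {
    if (
      buf[i] === 0x50 && buf[i + 1] === 0x4b &&
      buf[i + 2] === 0x05 && buf[i + 3] === 0x06
    ) return i;
  }
  throw new Error("not a zip: EOCD signature not found");
}

// A synthetic buffer: 10 bytes of junk, then a bare 22-byte EOCD record.
const eocd = new Uint8Array(EOCD_MIN);
eocd.set([0x50, 0x4b, 0x05, 0x06], 0);
const buf = new Uint8Array(32);
buf.set(eocd, 10);
```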
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Client-side restore for the same-account case:
- lib/data/backup/format.ts: hand-rolled .mana (zip) parser. Walks the
central directory, inflates DEFLATE entries via pako (already in the
repo), exposes manifest + events.jsonl + recomputed sha256. No new
dependency; the archive shape is narrow enough that 200 lines cover it.
- lib/data/backup/import.ts: validates the manifest (a userId mismatch is
  a hard refusal, eventsSha256 must match, schemaVersionMax ≤ client
  support), streams events through iterateEvents(), and replays them in
  batches of 300 per appId via the existing applyServerChanges() path.
  LWW makes the operation idempotent.
- settings/my-data: file picker, progress bar, per-phase labels, success
summary with event count + source timestamp.
Scope is intentionally same-account only: events originate from the
server for this user, so re-pushing them is unnecessary. Cross-account
migration needs the MK transfer path in M5.
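The per-appId batching can be sketched like this (a simplified illustration with hypothetical names; the real import.ts also drives progress reporting):

```typescript
// Group incoming events by appId and emit chunks of at most `size`,
// flushing partial buckets when the stream ends. Because the far side
// is LWW, re-running any chunk is harmless.
interface BackupEvent { appId: string; eventId: string }

function* batchesByApp(
  events: Iterable<BackupEvent>,
  size = 300,
): Generator<{ appId: string; batch: BackupEvent[] }> {
  const byApp = new Map<string, BackupEvent[]>();
  for (const ev of events) {
    const bucket = byApp.get(ev.appId) ?? [];
    bucket.push(ev);
    byApp.set(ev.appId, bucket);
    if (bucket.length === size) {
      // splice(0) empties the bucket in place and yields its contents.
      yield { appId: ev.appId, batch: bucket.splice(0) };
    }
  }
  // Flush whatever is left once events.jsonl is exhausted.
  for (const [appId, batch] of byApp) {
    if (batch.length > 0) yield { appId, batch };
  }
}

const events: BackupEvent[] = Array.from({ length: 5 }, (_, i) => ({
  appId: i < 3 ? "notes" : "tasks",
  eventId: `e${i}`,
}));
const out = [...batchesByApp(events, 2)];
```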
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bug: setting taskOverrides['companion.chat'] = 'byok' didn't work
when the user's allowedTiers was empty/['none']. The tier-too-low
check in run() compared task.minTier ('browser') against userMaxTier
('none') and threw TierTooLowError before the override was even read.
Same issue in canRun() and candidateTiers().
Fix: when a per-task override exists, treat it as opt-in to that tier
even if not in the global allowedTiers. The override is the user's
explicit per-task signal — overriding the global default is exactly
what an override is for.
- run(): effectiveMaxTier = max(override, userMaxTier)
- candidateTiers(task, override): adds override to baseTiers
- canRun(): now passes the override to candidateTiers
The Companion chat now correctly uses BYOK when selected from the
toolbar, even if the user hasn't enabled BYOK in their global LLM
settings.
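The max(override, userMaxTier) rule can be sketched as follows, assuming a none < browser < byok ordering (the real tier list and helper names live in the LLM runner and are not shown here):

```typescript
// Hypothetical sketch: a per-task override is an explicit opt-in, so it
// can only raise the ceiling that the global allowedTiers would impose.
const TIER_ORDER = ["none", "browser", "byok"] as const;
type Tier = (typeof TIER_ORDER)[number];

const rank = (t: Tier): number => TIER_ORDER.indexOf(t);

function effectiveMaxTier(userMaxTier: Tier, override?: Tier): Tier {
  if (override === undefined) return userMaxTier;
  // Never lower the ceiling: the override only adds a tier.
  return rank(override) > rank(userMaxTier) ? override : userMaxTier;
}
```

With this shape, a user whose global allowedTiers resolves to 'none' but who picked 'byok' for one task gets 'byok' as the effective ceiling for that task, so the tier-too-low check no longer fires before the override is read.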
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The single $effect that wired SceneAppBar into bottomBarStore was
re-writing barComponent on every reactive tick — every change to
openApps, locale or activeSceneId re-ran the effect, which went through
.set() and re-assigned the component reference identically.
Add a setProps() method to bottomBarStore that mutates only barProps,
and split the workbench effect in two: a registration effect that
fires on the scenes-empty/non-empty transition, and a props effect
that pushes fresh data without touching barComponent.
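The set()/setProps() split can be sketched with a hypothetical store shape (plain TS, not the actual Svelte-runes store):

```typescript
// set() replaces the whole registration (component + props); setProps()
// mutates only the props, so the component reference stays referentially
// stable and effects keyed on barComponent do not re-fire.
type BarComponent = object; // stand-in for a Svelte component reference

interface BottomBarStore {
  barComponent: BarComponent | null;
  barProps: Record<string, unknown>;
  set(component: BarComponent, props: Record<string, unknown>): void;
  setProps(props: Record<string, unknown>): void;
}

const bottomBarStore: BottomBarStore = {
  barComponent: null,
  barProps: {},
  set(component, props) {
    this.barComponent = component; // registration path: new reference
    this.barProps = props;
  },
  setProps(props) {
    this.barProps = props;         // data path: component untouched
  },
};

const SceneAppBar = {};            // placeholder component
bottomBarStore.set(SceneAppBar, { activeSceneId: "a" });
const before = bottomBarStore.barComponent;
bottomBarStore.setProps({ activeSceneId: "b" });
```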
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the raw JSONL response with a zip container:
events.jsonl — one SyncChange per line, as before
manifest.json — formatVersion, schemaVersion, userId, eventCount,
eventsSha256, apps, timestamps, schemaVersionMin/Max
Single DB pass: events.jsonl is written while a sha256 hasher tees
every byte of the uncompressed JSONL. The manifest lands as a second
zip entry after the stream closes, so eventsSha256 is filled without
rescanning.
Integrity-checking on the restore side becomes trivial (re-hash the
decompressed events.jsonl and compare). A signature over manifest.json
is deferred to a later phase; sha256 already catches corruption.
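The single-pass tee can be sketched in TypeScript using Node's crypto (the server is Go; this only illustrates the shape, and the names are hypothetical):

```typescript
import { createHash } from "node:crypto";

// Every chunk goes to both the sink and the hasher, so eventsSha256 is
// known the moment the events stream closes — no second DB pass.
function writeEventsWithSha(
  chunks: Iterable<string>,
  sink: (chunk: string) => void,
): string {
  const hasher = createHash("sha256");
  for (const chunk of chunks) {
    hasher.update(chunk); // tee: hash the exact bytes being written
    sink(chunk);
  }
  return hasher.digest("hex"); // lands in manifest.json afterwards
}

const out: string[] = [];
// Zero-row export: the sha of empty input is still well-defined.
const emptySha = writeEventsWithSha([], (c) => out.push(c));
```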
Client-side: default filename + UI label updated to .mana. Fetch flow
is unchanged — browser gets a zip blob and triggers a download.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>