After yesterday's type-check cascade repair (c34175afa), the root
`pnpm run type-check` progressed through 5 more packages but still
stopped on two pre-existing failures:
- `services/mana-media` delivery route: `c.body(transformedBuffer)`
passed a Node `Buffer<ArrayBufferLike>`, but Hono 4.7 types the
body argument as `Uint8Array<ArrayBuffer>` (strict — no
ArrayBufferLike). `Uint8Array.from(buf)` gives a clean copy with a
fresh `ArrayBuffer` backing that the strict type accepts; see the
sketch after this list. Runtime cost for a handful of KB per image
transform is negligible next to the Sharp pipeline that produced
the buffer.
- `packages/shared-llm`: same rune issue as local-stt + local-llm —
`store.svelte.ts` uses `$state` and transitively pulls in
`local-llm/src/svelte.svelte.ts`. Plain tsc can't resolve Svelte 5
runes. Same treatment: `type-check` script explicitly skips with a
message pointing at svelte-check.
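A sketch of the fix at the `c.body` call (the surrounding handler is
omitted):

  // Before (rejected by Hono 4.7's strict body type):
  //   return c.body(transformedBuffer);            // Buffer<ArrayBufferLike>
  // After: copy into a Uint8Array backed by a fresh ArrayBuffer.
  return c.body(Uint8Array.from(transformedBuffer));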
Root `pnpm run type-check` now reaches `@context/mobile`, which has
real code-level type errors (adapter shape mismatches, an RN event-
handler typing drift, and a deleted Supabase module still imported by
`utils/supabaseTest.ts`). Those need domain changes, not config
tweaks — out of scope for this repair pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Enable real-time token streaming during the planner "calling-llm" phase
so the user sees live progress ("empfange Plan… 128 tokens", i.e.
"receiving plan… 128 tokens") instead of
a static spinner. The parser still receives the full text once complete —
no partial-JSON risk.
Changes:
- Extract shared SSE parser from playground into @mana/shared-llm/sse-parser
- remote.ts: use stream:true when onToken callback is provided
- AiPlanInput: add optional onToken field (shared-ai)
- ai-plan task: pass onToken through to backend.generate()
- runner.ts: throttled (500ms) phaseDetail updates during streaming (sketched below)
- Playground: refactored to use shared SSE parser
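Roughly how the callback flows (names come from the bullets above; the
input value and phase-detail setter are hypothetical stand-ins):

  // ai-plan task: pass the optional callback straight to backend.generate().
  // remote.ts only switches to stream:true when onToken is present, so
  // non-streaming callers are unaffected.
  let tokens = 0;
  let lastUpdate = 0;
  const text = await backend.generate({
    ...planInput,                                       // hypothetical AiPlanInput value
    onToken: (_token: string) => {
      tokens++;
      const now = Date.now();
      if (now - lastUpdate >= 500) {                    // runner.ts throttle
        lastUpdate = now;
        setPhaseDetail(`empfange Plan… ${tokens} tokens`); // hypothetical setter
      }
    },
  });
  // The parser still receives the complete text afterwards.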
Also includes: AI agent architecture comparison report (docs/reports/)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Following the shared-icons fix (d5cabed14), audit every workspace
package's src/index.ts for top-level side effects and flag the
ones that are safe to tree-shake:
- Pure TS re-export barrels (types, theme, utils, llm, storage):
"sideEffects": false — lets Vite prune entire submodules when a
consumer only imports a subset of named exports. Matters most for
shared-llm where the orchestrator/BYOK branch isn't needed on
every route.
- Packages that ship .svelte components (branding, ui, links):
"sideEffects": ["**/*.svelte", "**/*.css"] — same tree-shaking
benefit for TS modules, but keeps Svelte component CSS injection
intact.
The state-holding submodules (shared-ui drag-state/toast,
shared-llm store, shared-links mutations) are still evaluated
whenever their exports are referenced, so behaviour is unchanged —
the flag only lets the bundler skip modules that aren't in the
dependency graph at all.
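Consumer-side effect, sketched (the imported name is a placeholder):

  // Route code that only needs a pure helper from the shared-llm barrel.
  // With "sideEffects": false on the package, Vite can drop the
  // orchestrator/BYOK submodules from this route's chunk; without the
  // flag it has to keep every re-exported module in case one runs code
  // on import.
  import { buildPrompt } from '@mana/shared-llm'; // placeholder export name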
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Until now, modules wanting to use the orchestrator had to await each
LLM call inline in their store code. That's fine for foreground tasks
("user clicked summarize") but a non-starter for background work
("auto-tag every new note", "generate a title for every voice memo
after STT finishes"). Background tasks need to:
- Queue up while no LLM tier is ready, then drain when one becomes
available (e.g. user just enabled the browser tier from settings)
- Survive page reloads, browser restarts, and the user navigating
away mid-execution
- Run one at a time without blocking the foreground UI
- Allow modules to subscribe to results reactively without polling
- Retry transient failures (network, model loading) but not
semantic ones (tier-too-low, content blocked)
Phase 4 ships exactly that.
Architecture:
packages/shared-llm/src/queue.ts — LlmTaskQueue class
+ QueuedTask interface (the persistent row shape; sketched below)
+ EnqueueOptions (refType/refId/priority/maxAttempts)
+ TaskRegistry type (name → LlmTask map)
+ LlmTaskQueueOptions (table + orchestrator + registry +
retryBackoffMs + idleWakeupMs)
Public API:
- enqueue(task, input, opts) → string (returns the queued id)
- get(id), list(filter)
- retry(id), cancel(id), purge(olderThanMs)
- start(), stop() (idempotent processor lifecycle)
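Row shape, sketched from the names above (optionality and a few field
names, e.g. `input` and `error`, are assumptions):

  interface QueuedTask {
    id: string;
    taskName: string;                 // registry key; closures can't be persisted
    refType?: string;                 // e.g. 'note'
    refId?: string;
    priority: number;                 // higher runs first
    status: 'pending' | 'running' | 'done' | 'failed';
    attempts: number;
    maxAttempts: number;
    enqueuedAt: number;               // FIFO tie-breaker within a priority
    notBefore?: number;               // retry-backoff timestamp
    input: unknown;
    result?: unknown;
    source?: string;                  // which tier produced the result
    finishedAt?: number;
    error?: string;
  }

so a module-side call looks roughly like
`queue.enqueue(summarizeTextTask, input, { refType: 'note', refId, priority: 1 })`,
which returns the queued id.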
apps/mana/apps/web/src/lib/llm-queue.ts — web app singleton
- Dedicated `mana-llm-queue` Dexie database (separate from the
main `mana` IDB; see comment for the rationale: ephemeral
per-device state, no encryption needed, no sync needed, doesn't
belong in the long-frozen `mana` schema)
- Wires up the queue with llmOrchestrator + taskRegistry (wiring sketched below)
- Exposes startLlmQueue() / stopLlmQueue() for the layout hook
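Roughly, assuming the package entry re-exports the queue pieces, assumed
import paths, and an assumed Dexie index string:

  import Dexie, { type Table } from 'dexie';
  import { LlmTaskQueue, type QueuedTask } from '@mana/shared-llm';
  import { llmOrchestrator } from '$lib/llm/orchestrator';   // assumed path
  import { taskRegistry } from '$lib/llm-task-registry';

  class LlmQueueDb extends Dexie {
    tasks!: Table<QueuedTask, string>;
    constructor() {
      super('mana-llm-queue');           // separate from the main `mana` IDB
      this.version(1).stores({
        tasks: 'id, status, enqueuedAt, [refType+refId+taskName]',
      });
    }
  }

  export const llmQueueDb = new LlmQueueDb();
  export const llmQueue = new LlmTaskQueue({
    table: llmQueueDb.tasks,
    orchestrator: llmOrchestrator,
    registry: taskRegistry,
  });

  export const startLlmQueue = () => llmQueue.start();
  export const stopLlmQueue = () => llmQueue.stop();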
apps/mana/apps/web/src/lib/llm-task-registry.ts
- Maps task names → task objects so the queue processor can
look up the implementation when pulling rows off the table.
Closures can't be persisted, so we round-trip via name.
- Currently registers extractDateTask + summarizeTextTask;
module-side tasks land here as we add them (see the sketch below).
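i.e. something like (the import path and the `name` property on task
objects are assumptions):

  import { extractDateTask, summarizeTextTask, type TaskRegistry } from '@mana/shared-llm';

  export const taskRegistry: TaskRegistry = {
    [extractDateTask.name]: extractDateTask,
    [summarizeTextTask.name]: summarizeTextTask,
  };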
apps/mana/apps/web/src/routes/(app)/+layout.svelte
- startLlmQueue() in handleAuthReady's Phase A (auth-independent)
so guests + authenticated users both get the queue
- stopLlmQueue() in onDestroy as a fire-and-forget cleanup
Processor loop semantics (the heart of the implementation; key steps
sketched after the list):
1. On start(), reclaim any 'running' rows from a crashed previous
session — reset them to 'pending'. The orphan recovery is the
reason a crash mid-task doesn't leave the queue stuck.
2. findNextRunnable() picks the highest-priority pending task whose
`notBefore` (retry-backoff timestamp) is in the past. Sort key:
priority desc, then enqueuedAt asc (FIFO within priority).
3. Mark the task running, increment attempts, look up the LlmTask
in the registry, hand it to orchestrator.run().
4. On success: mark done, store result + source + finishedAt.
5. On error:
- TierTooLowError or ProviderBlockedError → fail immediately,
no retry. These are not transient — the user's settings or
the content itself need to change.
- Anything else → if attempts < maxAttempts, reset to pending
with notBefore = now + retryBackoffMs (default 60s). Else
mark failed.
6. When no work is pending, sleep on a Promise that resolves when
either (a) someone calls enqueue() (which fires notifyWakeup),
or (b) idleWakeupMs elapses (default 30s, safety net for any
missed wakeup signal).
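Steps 2, 5 and 6 as standalone sketches (helper shapes are illustrative;
the error classes and defaults are the ones named above, QueuedTask is
the row shape sketched earlier, and the import location is an
assumption):

  import { TierTooLowError, ProviderBlockedError } from '@mana/shared-llm';

  // Step 2: highest priority first, FIFO within a priority, skip rows still in backoff.
  function findNextRunnable(rows: QueuedTask[], now = Date.now()): QueuedTask | undefined {
    return rows
      .filter((r) => r.status === 'pending' && (r.notBefore ?? 0) <= now)
      .sort((a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt)[0];
  }

  // Step 5: settings/content errors never retry; everything else backs off until maxAttempts.
  function nextStateOnError(row: QueuedTask, err: unknown, retryBackoffMs = 60_000) {
    const permanent = err instanceof TierTooLowError || err instanceof ProviderBlockedError;
    if (permanent || row.attempts >= row.maxAttempts) return { status: 'failed' as const };
    return { status: 'pending' as const, notBefore: Date.now() + retryBackoffMs };
  }

  // Step 6: a sleep resolved either by notifyWakeup() (fired from enqueue) or by the idle timer.
  function idleWait(idleWakeupMs = 30_000) {
    let notifyWakeup!: () => void;
    const done = new Promise<void>((resolve) => {
      notifyWakeup = resolve;
      setTimeout(resolve, idleWakeupMs);
    });
    return { done, notifyWakeup };
  }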
Module-side reactive reads use Dexie liveQuery directly on the queue
table — no special subscription API on the queue itself. This is
consistent with how every other Mana module reads its data, so the
mental model stays uniform:
  const tags = useLiveQuery(
    () => llmQueueDb.tasks
      .where({ refType: 'note', refId, taskName: 'common.extractTags' })
      .reverse()
      .first(),
    [refId]
  );
Smoke test: a new "Queue" tab in /llm-test lets you enqueue the
existing extractDate / summarize tasks and watch the live state of
the queue table via liveQuery. The display includes per-row state
badge (pending/running/done/failed), tier source, attempt count,
input/output, and a "Done/failed löschen" ("delete done/failed") button
that exercises
purge().
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adding an app to a workbench scene threw DataCloneError. scenesState
is a $state array, so current.openApps was a Svelte 5 proxy and
spreading it into a new array left proxy entries inside; IndexedDB's
structured clone refuses to serialise those. Snapshot before handing
the array to patchScene / createScene so Dexie sees plain objects.
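The fix, roughly (patchScene's exact signature and the newApp value are
assumed):

  // $state.snapshot strips the Svelte 5 proxies, so structured clone
  // (and therefore Dexie/IndexedDB) gets plain objects.
  const openApps = [...$state.snapshot(current.openApps), newApp];
  await patchScene(current.id, { openApps });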
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The lockfile had grown five (!) different vitest versions over time:
1.6.1, 2.1.9, 3.2.4, 4.1.2 and 4.1.3 — pulled in by various
packages that pinned outdated majors. The mismatch produced the
classic "createDOMElementFilter not found" startup crash because
hoisted @vitest/utils@3.x was loaded by the nested @vitest/runner@4.x.
Bumped every package.json that pinned an old vitest:
- apps/manavoxel/apps/web (^4.1.0 → ^4.1.2)
- apps/matrix/apps/web (^4.1.0 → ^4.1.2)
- apps/memoro/apps/server (^3.0.0 → ^4.1.2)
- apps/nutriphi/packages/shared (^2.1.8 → ^4.1.2)
- packages/qr-export (^3.0.5 → ^4.1.2)
- packages/shared-llm (^2.0.0 → ^4.1.2)
- packages/shared-storage (^4.1.0 → ^4.1.2)
- packages/spiral-db (^1.6.1 → ^4.1.2)
- packages/test-config (^3.0.0 → ^4.1.2)
- packages/wallpaper-generator (^3.0.5 → ^4.1.2)
After a clean pnpm-lock.yaml regenerate, every @vitest/* sub-package
resolves to a single version (4.1.3, picked by semver) — no more
duplicates between hoisted and nested node_modules.
Verified by running:
pnpm --filter @mana/web vitest run src/lib/data/sync.test.ts
→ 20/20 tests passing in 217ms
pnpm --filter @mana/web vitest run src/lib/data/time-blocks/recurrence.test.ts
→ 19/19 tests passing in 198ms
Pre-existing test failures in base-client.test.ts (German error
strings vs English assertions), dashboard.test.ts (widget count
drift), and content/help/index.test.ts (svelte-i18n locale not
initialised in test env) are unrelated and tracked separately.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>