Yesterday's postinstall fix (`d1d37749f`) removed the `|| true`
guards, which in turn exposed that `pnpm run type-check` at the
root had been red for a long time without anyone noticing. Several
per-package scripts were genuinely broken:
- `@mana/test-config`: `vitest.config.base.ts` and `.svelte.ts`
pass `all: true` to the coverage block. Vitest 4 removed that flag
(including uncovered files is now the default), so tsc reports
`'all' does not exist in type 'CoverageOptions'`. Removed both;
see the sketch after this list.
- `@mana/credits`: `tsconfig.json` include glob had
`"src/**/*.svelte"`, which makes tsc try to parse .svelte files
as TS source. It can't. Removed .svelte from include; added
`"exclude": ["src/web/**"]` — the web consumer layer is checked by
svelte-check in the apps that import it, not here.
- `@mana/local-stt` + `@mana/local-llm`: ship `svelte.svelte.ts`
files that use Svelte 5 runes (`$state` etc.). Plain tsc has no
rune support — `$state` is not a name it knows about. Both
packages' `type-check` scripts now explicitly skip with a message
pointing at svelte-check as the right tool. The rune code is still
type-checked by svelte-check when a consumer app runs `pnpm check`.
- `@manavoxel/shared`: was missing its `tsconfig.json` entirely,
so the `type-check` script ran tsc with no config, which dumped
the CLI help and exited non-zero. Added a minimal bundler-mode
tsconfig matching the pattern used by sibling packages.
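For reference, roughly what the shared coverage block looks like with
the flag gone (provider, reporters and globs here are illustrative,
not the real config):

    // vitest.config.base.ts (sketch only; values are placeholders)
    import { defineConfig } from 'vitest/config';

    export default defineConfig({
      test: {
        coverage: {
          provider: 'v8',
          reporter: ['text', 'lcov'],
          include: ['src/**/*.ts'],
          // `all: true` used to live here; Vitest 4 removed the flag and
          // now includes uncovered files by default.
        },
      },
    });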
`pnpm run type-check` now goes further than it has in months —
next failure is a real pre-existing Hono type mismatch in
`services/mana-media/apps/api/src/routes/delivery.ts` (Buffer vs
c.body signature), which is out of scope here and needs a proper
code fix, not a config fix.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
In commit c9e16243c (the gemma3:4b → gemma4:e4b switch) I sloppily
wrote in the ManaServerBackend docstring that mana-llm "routes them
to the local Ollama instance on the Mac Mini (running on the M4's
Metal GPU)". That is wrong AND it's the exact misconception I had
to debug my way out of earlier the same day.
The actual topology is already documented correctly in
docs/MAC_MINI_SERVER.md and docs/WINDOWS_GPU_SERVER_SETUP.md — I
just didn't read them before writing the docstring:
mana-llm container's OLLAMA_URL points at host.docker.internal:13434
→ ~/gpu-proxy.py (Python TCP forwarder, LaunchAgent on Mac Mini)
→ 192.168.178.11:11434 (LAN)
→ Ollama on the Windows GPU server (RTX 3090, 24 GB VRAM)
→ Inference
The Mac Mini's brew-installed Ollama binary is NOT on the inference
path. It's just a CLI for inspecting the proxied daemon. Today's
"why does the Mac Mini still have Ollama 0.15.4" puzzle has the
answer "because nothing on the Mac Mini actually runs inference, the
binary version was never load-bearing".
Two doc fixes:
1. packages/shared-llm/src/backends/mana-server.ts
Replace the lying docstring with the real topology, including a
pointer to the two MAC_MINI_SERVER.md / WINDOWS_GPU_SERVER_SETUP.md
sections that document it. Also note that gemma4:e4b is a
reasoning model that emits message.reasoning when given enough
tokens (cross-reference to remote.ts's fallback parser).
2. packages/local-llm/CLAUDE.md
Add a paragraph at the top explaining the difference between
"@mana/local-llm" (browser tier, on-device) and the @mana/shared-llm
"mana-server" / "cloud" tiers (services/mana-llm proxy → gpu-proxy.py
→ RTX 3090). This was implicit before — "not related to
services/mana-llm" — but didn't say where mana-server actually
goes. Future me reading the doc would still have to dig through
the docker-compose env to find out.
No code changes — only docstring + markdown.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The browser tier of @mana/local-llm was running entirely in the main
JS thread. With Gemma 4 E2B that meant ~50-200 ms of synchronous
tensor work per forward pass × ~150 forward passes per generation =
the UI froze for 10-30 seconds during a single chat reply. Scrolling,
clicks, animations all stopped.
Move the actual inference into a Dedicated Web Worker. The main
thread keeps a thin LocalLLMEngine proxy with the same public API
(load / unload / generate / prompt / extractJson / classify /
onStatusChange / isSupported), so existing callers — the /llm-test
page, the playground module, @mana/shared-llm's BrowserBackend, the
Svelte 5 reactive bindings — need NO changes.
File layout after the split:
src/engine.ts — main-thread proxy (lazy worker init,
postMessage protocol, pending request map,
status broadcast handling, convenience
wrappers for prompt/extractJson/classify)
src/worker.ts — Web Worker entry point (typed message
protocol, single LocalLLMEngineImpl instance,
forwards status changes back to main thread)
src/engine-impl.ts — the actual transformers.js engine (renamed
from the previous engine.ts contents). NOT
exported from index.ts — only the worker
imports it. Same two-step tokenization,
aggregated progress reporting, streaming
token handling as before; just running in
a different thread now.
Worker construction uses Vite's documented `new Worker(new URL(
'./worker.ts', import.meta.url), { type: 'module' })` pattern, which
makes Vite split worker.ts (and its transformers.js dep) into its
own bundle chunk at build time. The proxy is lazy-init: the Worker
constructor is never touched at module-import time, so SSR stays
clean (Worker doesn't exist on Node).
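In sketch form (the accessor name is illustrative):

    // engine.ts (proxy): the Worker is only constructed on first use,
    // never at module-import time, so SSR never touches it.
    let worker: Worker | null = null;

    function getWorker(): Worker {
      if (!worker) {
        // Vite's documented pattern: the statically analyzable URL makes
        // Vite emit worker.ts (and its transformers.js dependency) as its
        // own chunk at build time.
        worker = new Worker(new URL('./worker.ts', import.meta.url), {
          type: 'module',
        });
      }
      return worker;
    }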
Message protocol (typed end-to-end):
Main → Worker:
{ id, type: 'load', modelKey: ModelKey }
{ id, type: 'unload' }
{ id, type: 'generate', opts: SerializableGenerateOptions }
{ id, type: 'isReady' }
Worker → Main:
{ id, type: 'result', data?: unknown }
{ id, type: 'error', message: string }
{ id, type: 'token', token: string } — streaming chunk
{ type: 'status', status: LoadingStatus } — broadcast
The proxy assigns a unique id per request, stores the resolve/reject
+ optional onToken callback in a Map<id, PendingRequest>, and routes
incoming responses by id. Status messages have no id and fire every
registered status listener — same UX as before, just one extra hop.
Streaming: the worker re-attaches the streaming callback on its
side. Each emitted token gets posted back as `{ id, type: 'token',
token }` and the proxy invokes the original `onToken` callback. The
final `result` arrives as a normal response and resolves the
Promise. From the caller's perspective generate() still feels
identical — same async iterable feel via onToken, same return value.
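Roughly how the proxy side routes this (the PendingRequest shape and
helper names are assumptions based on the protocol above, not the
exact engine.ts):

    // Proxy-side bookkeeping: one entry per in-flight request.
    interface PendingRequest {
      resolve: (data: unknown) => void;
      reject: (err: Error) => void;
      onToken?: (token: string) => void;
    }

    const pending = new Map<number, PendingRequest>();
    let nextId = 0;

    function postRequest(
      worker: Worker,
      req: Record<string, unknown>,
      onToken?: (token: string) => void,
    ): Promise<unknown> {
      const id = nextId++;
      return new Promise((resolve, reject) => {
        pending.set(id, { resolve, reject, onToken });
        worker.postMessage({ id, ...req });
      });
    }

    // Attached via worker.addEventListener('message', ...) at worker creation.
    function handleMessage(e: MessageEvent): void {
      const msg = e.data;
      if (msg.type === 'status') return; // broadcast: fire every status listener
      const entry = pending.get(msg.id);
      if (!entry) return;
      if (msg.type === 'token') {
        entry.onToken?.(msg.token); // streaming chunk, request stays pending
        return;
      }
      pending.delete(msg.id);
      if (msg.type === 'error') entry.reject(new Error(msg.message));
      else entry.resolve(msg.data); // final 'result' resolves the Promise
    }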
Worker termination on unload: transformers.js doesn't expose a
dispose API, so we terminate the worker after unload and create a
fresh one on the next load. This is the only reliable way to
release VRAM between model swaps.
CSP: no header changes needed. The worker is loaded from a
same-origin URL (Vite emits it as
/_app/immutable/workers/worker.[hash].js), so 'self' in script-src
already covers it. The blob: + cdn.jsdelivr.net + wasm-unsafe-eval
allowlists we added during the original WebLLM/transformers.js
bring-up still apply because the worker still runs the same ONNX
runtime that needed them.
DistributiveOmit type helper: TS's plain `Omit<Union, K>` is not
distributive: it operates on the union as a whole, so only the keys
common to every member survive and the discriminant is lost. That
broke the type narrowing at the postRequest call sites for each
request variant. Adding a tiny `DistributiveOmit<T, K>` helper
(sketched below) fixes the type-check without restructuring the
protocol.
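The helper in its usual form (a standard TypeScript idiom; the
in-repo spelling may differ slightly):

    // Conditional types distribute over union members, so each protocol
    // variant is Omit-ed individually and keeps its own discriminant.
    type DistributiveOmit<T, K extends keyof any> = T extends unknown
      ? Omit<T, K>
      : never;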
What this commit deliberately does NOT do:
- Change the public API surface. The whole point is that callers
remain untouched.
- Add multi-tab worker coordination via SharedWorker or
BroadcastChannel. Each tab still spawns its own dedicated worker
with its own copy of the model in VRAM. Multi-tab dedup is
Phase 2.5/Phase 4 work — see the design doc summary in the
previous Phase 1 commit message.
- Add a persistent task queue. Fire-and-forget background tasks
are Phase 4.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add packages/local-llm/CLAUDE.md as the package-level reference for
browser-local LLM inference. The package went through a non-trivial
engine swap from WebLLM/Qwen to transformers.js/Gemma 4 E2B on
2026-04-08, and the bring-up surfaced enough sharp edges that the
next person (or AI agent) touching this code will save real time
having them written down in one place rather than re-discovering
them error by error.
Captured topics:
- What the package is, what library/model is currently used, and
the deliberate engine-agnostic API surface that lets future swaps
stay contained to this package.
- Why we chose transformers.js + Gemma 4 over staying on WebLLM
(MLC compilation lag for new model architectures) and what the
return path looks like once MLC ships Gemma 4 builds.
- The seven CSP allowlist entries that browser-local inference needs
and WHY each one is required:
* script-src: 'wasm-unsafe-eval', cdn.jsdelivr.net, blob:
* connect-src: huggingface.co + *.huggingface.co + cdn-lfs-*,
*.hf.co + cas-bridge.xethub.hf.co (XET CDN),
cdn.jsdelivr.net (for the WASM preload fetch)
Including the subtle "jsDelivr is needed in BOTH script-src and
connect-src" trap that produces identical-looking error messages
for two distinct underlying causes.
- The Vite SSR module-cache gotcha: CSP additions made in
packages/shared-utils/security-headers.ts do NOT hot-reload across
the workspace package boundary, while additions made directly in
apps/mana/apps/web/src/hooks.server.ts do. Includes the diagnostic
pattern (compare which additions show up in the next CSP error
vs which don't) and the workaround (move them into hooks.server.ts
via setSecurityHeaders options; see the sketch after this list).
- The two-step tokenization pattern that's mandatory for
Gemma4Processor: apply_chat_template(tokenize:false) → string, then
processor.tokenizer(text, return_tensors:'pt'). The collapsed
apply_chat_template(return_dict:true) path looks shorter but
produces a malformed input shape and crashes model.generate() deep
inside the forward pass with "Cannot read properties of null
(reading 'dims')" — opaque from the call site.
- The transformers.js v4 quirk that model.generate() returns null
(not a tensor) when a TextStreamer is attached. The streamer is
the only stable text channel; the engine always attaches one and
uses the streamer's collected text as the canonical output, with
a chars/4 fallback for token counts.
- API surface (Svelte 5 example), how to add a new model to the
registry, deploy notes (no base image rebuild needed for local-llm
changes alone, but IS needed if shared-utils CSP defaults change),
browser cache semantics, and hard browser support requirements
(WebGPU, ~1.5–2 GB VRAM for E2B q4f16, no CPU/WASM fallback).
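A sketch of that workaround, assuming a hooks.server.ts of the usual
SvelteKit shape (the import path and the option names passed to
setSecurityHeaders are hypothetical; check its real signature):

    // hooks.server.ts (illustrative only)
    import type { Handle } from '@sveltejs/kit';
    import { setSecurityHeaders } from '@mana/shared-utils'; // path assumed

    export const handle: Handle = async ({ event, resolve }) => {
      const response = await resolve(event);
      setSecurityHeaders(response.headers, {
        // hypothetical option names for the extra CSP sources
        extraScriptSrc: ['https://cdn.jsdelivr.net', 'blob:'],
        extraConnectSrc: ['https://huggingface.co', 'https://*.hf.co'],
      });
      return response;
    };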
Also link to the new doc from the root CLAUDE.md Shared Packages
table so people land on it from the standard discovery path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous attempt to fix the "Cannot read properties of null
(reading 'dims')" chat error was incomplete: I only stopped passing
the bogus return_tensor:'pt' option to apply_chat_template. The
underlying issue was that apply_chat_template's all-in-one mode
(return_dict:true) does not produce a proper Tensor-backed
{ input_ids, attention_mask } pair for multimodal-capable processors
like Gemma4Processor — it returns a shape that has no .dims on
input_ids, so model.generate() crashes deep inside the forward pass
the moment it tries to read the sequence length.
Switch to the documented two-step pattern from the Gemma 4 model
card: call apply_chat_template with tokenize:false to get the
formatted prompt as a plain string, then run that string through
processor.tokenizer with return_tensors:'pt' to get a proper Tensor
pair. The tokenizer's return_tensors option is the *Python*
convention and IS supported by transformers.js's Tokenizer class
(the API name collision between apply_chat_template's return_tensor
boolean and Tokenizer's return_tensors string is one of those nasty
spots where the JS port intentionally diverges from Python).
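In sketch form (tokenize / return_tensors follow the commit text; the
rest, including the message content, add_generation_prompt, and whether
apply_chat_template hangs off the processor or its tokenizer, is
illustrative):

    import { AutoProcessor } from '@huggingface/transformers';

    const processor = await AutoProcessor.from_pretrained(
      'onnx-community/gemma-4-E2B-it-ONNX',
    );
    const messages = [{ role: 'user', content: 'Hello!' }];

    // Step 1: template only. Returns the formatted prompt as a plain string.
    const text = processor.apply_chat_template(messages, {
      tokenize: false,
      add_generation_prompt: true,
    });

    // Step 2: tokenize that string into a proper Tensor-backed pair.
    const inputs = processor.tokenizer(text, { return_tensors: 'pt' });
    // inputs.input_ids / inputs.attention_mask now carry .dims, so
    // model.generate(inputs) no longer crashes reading the sequence length.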
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First end-to-end Gemma 4 inference attempt threw "Cannot read
properties of null (reading 'dims')" the moment a chat message was
sent. Two bugs piled on top of each other:
1. apply_chat_template() was being called with `return_tensor: 'pt'`,
which is the Python `transformers` convention. transformers.js's
equivalent option is just a boolean (the default), and the string
'pt' is unrecognized — older versions silently ignored it, but the
v4 code path now produces a less predictable input shape when it
sees the unknown value. Drop it.
2. model.generate() in transformers.js v4 returns null (not a tensor)
when a streamer is attached. The previous engine code only attached
a streamer if the caller passed an `onToken` callback, then
unconditionally tried to slice the tensor return for token counting
— which crashed because the chat tab DOES pass onToken for live
streaming. The streamer collected the text fine, but generate()
returned null and our tensor read blew up.
Restructure so the streamer is always attached and is the canonical
text channel. The tensor return is now only used for token counting
when present, and falls back to a chars/4 estimate when it isn't, so
the /llm-test UI still shows roughly meaningful prompt/completion
counts on either v3 (returns tensor) or v4 (returns null with
streamer). The user-facing GenerateResult.content now always comes
from the streamer's accumulated string instead of decoding the
tensor's sliced suffix, which is more robust across versions.
Also wrap the model.generate() call in try/catch so that versions
of transformers.js that throw at end-of-streaming (after the
streamer has already delivered all tokens) don't lose the answer.
We only re-throw if the streamer collected nothing.
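Roughly the resulting shape (a sketch; the option and helper names are
illustrative, while TextStreamer with skip_prompt / callback_function
is transformers.js API):

    import { TextStreamer } from '@huggingface/transformers';

    interface GenOpts {
      maxTokens: number;
      onToken?: (token: string) => void;
    }

    async function runGenerate(
      model: any,
      tokenizer: any,
      inputs: any,
      opts: GenOpts,
    ) {
      let collected = '';
      // Always attach a streamer: it is the canonical text channel.
      const streamer = new TextStreamer(tokenizer, {
        skip_prompt: true,
        callback_function: (text: string) => {
          collected += text;
          opts.onToken?.(text); // live streaming when the caller wants it
        },
      });

      let output: any = null;
      try {
        output = await model.generate({
          ...inputs,
          max_new_tokens: opts.maxTokens,
          streamer,
        });
      } catch (err) {
        // Some versions throw after the streamer has already delivered all
        // tokens; only re-throw if we got nothing back.
        if (!collected) throw err;
      }

      // v3 returns a tensor we can slice for exact counts; v4 returns null
      // when a streamer is attached, so fall back to a chars/4 estimate.
      const completionTokens = output?.dims
        ? output.dims[1] - inputs.input_ids.dims[1]
        : Math.ceil(collected.length / 4);

      return { content: collected, completionTokens };
    }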
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two issues hit while loading Gemma 4 E2B in /llm-test for the first
time on a local dev server.
1. CSP script-src blocked cdn.jsdelivr.net.
@huggingface/transformers v4 lazy-loads the onnxruntime-web WASM
loader shim via a runtime dynamic `import()` from
cdn.jsdelivr.net/npm/onnxruntime-web@... at backend selection time
(the package itself is bundled, but the WASM-loader is fetched on
demand so the static bundle stays small). With the previous CSP the
import was blocked and "no available backend found" was the only
downstream error. Allowlist cdn.jsdelivr.net in the shared CSP
script-src so every Mana web app picks this up automatically.
2. Loading bar oscillated wildly during the model download.
transformers.js downloads many shards in parallel (config.json,
tokenizer.json, generation_config.json, model.onnx, model_data.bin,
…) and fires the progress callback per file. The previous engine
code reported the latest event verbatim, so the bar bounced
between whichever file happened to be progressing fastest.
Replace per-file reporting with a Map<file, {loaded, total}>
accumulator and emit an aggregated total on every event. The
denominator can grow as new files are discovered (causing brief
small dips), but both numerator and denominator are individually
monotonic, so the aggregate is much smoother. Also include a
human-readable byte count and file count in the status text:
Downloading model (47%, 240 MB / 510 MB, 8 files)
Pin completed files to 100% on the 'done' event so the final
aggregate visibly hits 100% before the loading→ready transition.
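A sketch of that accumulator (the event fields follow transformers.js's
progress_callback shape; the reporting plumbing is illustrative):

    const files = new Map<string, { loaded: number; total: number }>();
    const mb = (n: number) => `${Math.round(n / 1e6)} MB`;

    // Called for every progress_callback event during from_pretrained().
    function onProgress(
      p: { file: string; status: string; loaded?: number; total?: number },
      report: (pct: number, text: string) => void,
    ): void {
      const entry = files.get(p.file) ?? { loaded: 0, total: 0 };
      if (p.status === 'progress') {
        entry.loaded = p.loaded ?? entry.loaded;
        entry.total = p.total ?? entry.total;
      } else if (p.status === 'done') {
        // Pin completed files to 100% so the aggregate can visibly reach it.
        entry.loaded = entry.total = Math.max(entry.total, entry.loaded);
      }
      files.set(p.file, entry);

      // Aggregate across every file seen so far; both sums only ever grow.
      let loaded = 0;
      let total = 0;
      for (const f of files.values()) {
        loaded += f.loaded;
        total += f.total;
      }
      const pct = total ? Math.round((loaded / total) * 100) : 0;
      report(
        pct,
        `Downloading model (${pct}%, ${mb(loaded)} / ${mb(total)}, ${files.size} files)`,
      );
    }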
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the entire @mana/local-llm engine with a transformers.js-based
implementation backed by Google's Gemma 4 E2B (released 2026-04-02).
The external API of LocalLLMEngine — load(), generate(), prompt(),
extractJson(), classify(), onStatusChange(), isSupported() — is
preserved 1:1, so the /llm-test page, the playground module, and the
Svelte 5 reactive bindings in svelte.svelte.ts need no changes
beyond updating the default model key.
Why the engine swap: MLC had not (and as of today still hasn't)
published Gemma 4 builds for WebLLM. The webml-community team and
HuggingFace's onnx-community already have Gemma 4 E2B running in
the browser via transformers.js + WebGPU, with a documented
Gemma4ForConditionalGeneration class shipped in @huggingface/transformers
v4.0.0. Going through the ONNX route gets us the latest Google model
six days after release instead of waiting on MLC compilation.
Trade-offs accepted (discussed before this commit):
- transformers.js is a more generic ONNX runtime, so per-token
throughput will be ~20-40% lower than WebLLM would deliver for the
same model size. For a 2B model on a modern WebGPU device that's
still well above interactive latency.
- The JS bundle gains ~2-3 MB (the ONNX runtime). Negligible compared
to the 500 MB model download.
- transformers.js v4 is brand new (released alongside Gemma 4) so the
Gemma4ForConditionalGeneration code path has very little battle
testing yet. The risk is partially offset by webml-community's
reference implementation.
What changed file by file:
- packages/local-llm/package.json: drop @mlc-ai/web-llm, add
@huggingface/transformers ^4.0.0; bump version 0.1.0 → 0.2.0; rewrite
description.
- packages/local-llm/src/types.ts: add `dtype` field to ModelConfig
('fp32' | 'fp16' | 'q8' | 'q4' | 'q4f16') so each model can request
the quantization that matches its uploaded ONNX shards; see the
sketch after this list.
- packages/local-llm/src/models.ts: replace the old Qwen 2.5 + Gemma 2
registry with a single `gemma-4-e2b` entry pointing at
onnx-community/gemma-4-E2B-it-ONNX with q4f16 quantization. Future
models can be added by appending entries — the /llm-test picker
reads MODELS dynamically and picks them up automatically.
- packages/local-llm/src/cache.ts: replace the WebLLM-specific
hasModelInCache helper with a generic Cache API probe that looks for
`https://huggingface.co/{model_id}/resolve/main/tokenizer.json` in
any open cache. tokenizer.json is small, downloaded first, and
always present, so its presence is a reliable proxy for "model has
been loaded before".
- packages/local-llm/src/engine.ts: full rewrite. Internally we now
hold a transformers.js model + processor pair (created via
AutoProcessor.from_pretrained + Gemma4ForConditionalGeneration.from_pretrained
with `device: 'webgpu'`), and translate our LoadingStatus union from
the library's `progress_callback` shape. generate() applies Gemma's
chat template via the processor, runs model.generate() with optional
TextStreamer for streaming, then slices the prompt tokens off the
output tensor to compute per-call usage. The convenience methods
(prompt, extractJson, classify) are unchanged because they only call
generate() under the hood.
- packages/local-llm/src/generate.ts and status.svelte.ts: deleted.
These were orphaned from a much earlier engine API (referenced
`getEngine()` / `subscribe()` / `LlmState` symbols that haven't
existed for a while) and were never re-exported from index.ts —
they only showed up because `tsc --noEmit` was crawling the src
tree. Their functionality lives in engine.ts + svelte.svelte.ts now.
- apps/mana/apps/web/package.json: swap the direct dep from
@mlc-ai/web-llm to @huggingface/transformers. This is the same
trick we used for the previous adapter-node externals warning —
having it as a direct dep makes adapter-node's Rollup pass treat
it as external automatically.
- apps/mana/apps/web/vite.config.ts: swap ssr.external entry from
@mlc-ai/web-llm to @huggingface/transformers. Add a comment
explaining the why so the next person doesn't wonder.
- apps/mana/apps/web/src/routes/(app)/llm-test/+page.svelte: change
the default selectedModel from 'qwen-2.5-1.5b' to 'gemma-4-e2b'.
All other model display strings come from the MODELS registry, so
this is the single hard-coded reference that needed updating.
- pnpm-lock.yaml: regenerated. Confirmed @mlc-ai/web-llm is gone (0
references) and @huggingface/transformers is in (4 references).
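For orientation, a sketch of the new dtype field and the registry
entry (field names other than dtype are illustrative, not the real
types.ts / models.ts):

    // types.ts: the new quantization field.
    type Dtype = 'fp32' | 'fp16' | 'q8' | 'q4' | 'q4f16';

    interface ModelConfig {
      id: string;          // HF repo id, e.g. 'onnx-community/gemma-4-E2B-it-ONNX'
      displayName: string; // illustrative; the real shape may differ
      dtype: Dtype;        // which set of uploaded ONNX shards to fetch
    }

    // models.ts: single-entry registry, read dynamically by /llm-test.
    export const MODELS: Record<string, ModelConfig> = {
      'gemma-4-e2b': {
        id: 'onnx-community/gemma-4-E2B-it-ONNX',
        displayName: 'Gemma 4 E2B (q4f16)',
        dtype: 'q4f16',
      },
    };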
CSP: no header changes needed. We already opened connect-src for
huggingface.co + cdn-lfs.huggingface.co + raw.githubusercontent.com
when fixing the WebLLM blockers earlier today, and 'wasm-unsafe-eval'
is already in script-src — both transformers.js (ONNX runtime) and
WebLLM (MLC runtime) need that. If transformers.js spawns its
inference into a Web Worker via a blob URL we may need to add
`worker-src 'self' blob:` once we hit the first runtime test, but
the existing CSP should be enough for the synchronous path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
WebLLM was blocked by connect-src — model config and weight shards live
on huggingface.co (+ cdn-lfs.* for LFS), and the WebGPU model_lib WASM
comes from raw.githubusercontent.com (binary-mlc-llm-libs). Also wires
Gemma 2 2B/9B into the model registry so /llm-test picks them up.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move hasModelInCache to local-llm package with dynamic import wrapper
so the browser-only dependency doesn't break server-side builds.
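A sketch of that wrapper (the exported name and the SSR guard are
assumptions; hasModelInCache itself is @mlc-ai/web-llm's API):

    // cache.ts: only touch the browser-only dependency at call time,
    // and only in a browser, so server-side builds never import it.
    export async function isModelCached(modelId: string): Promise<boolean> {
      if (typeof window === 'undefined') return false; // SSR / Node build
      const { hasModelInCache } = await import('@mlc-ai/web-llm');
      return hasModelInCache(modelId);
    }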
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add generate.ts with streaming chat completions, JSON extraction, and
text classification helpers. Add status.svelte.ts with Svelte 5 runes
reactive wrapper for LLM engine state.
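A minimal sketch of what such a runes wrapper can look like (the
LoadingStatus shape and method names are assumptions, not the real
status.svelte.ts):

    // status.svelte.ts: reactive status for any engine that exposes
    // an onStatusChange() subscription.
    type LoadingStatus =
      | { state: 'idle' }
      | { state: 'loading'; progress: number; text: string }
      | { state: 'ready' }
      | { state: 'error'; message: string };

    export class LlmStatus {
      current = $state<LoadingStatus>({ state: 'idle' });

      track(engine: { onStatusChange(cb: (s: LoadingStatus) => void): void }) {
        engine.onStatusChange((s) => (this.current = s));
      }
    }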
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New shared package for browser-based LLM inference using Qwen 2.5 1.5B
via WebLLM. Includes Svelte 5 reactive stores, engine management, and
type definitions for local AI features without server roundtrips.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>