Two distinct bugs surfaced by the first browser-side end-to-end test
of the News module against the locally-managed cloudflared tunnel.
═══ 1. Onboarding loop on reload ═══
The news tables were originally added to db.version(1).stores(),
which violates Dexie's "never edit a published version" contract.
Existing browsers stuck at db.version(3) (after the body + who
upgrades) never trigger an upgrade for v1 changes, so the news tables
silently never get created on those IndexedDB instances. Writes to
preferencesTable.add() / .update() failed at the storage layer, the
preferences row was never persisted, and on reload usePreferences()
returned the DEFAULT_PREFERENCES fallback (onboardingCompleted: false)
which re-rendered the onboarding wizard.
Fix: move the five news tables out of db.version(1) into a fresh
db.version(4).stores({…}) block. Dexie sees the bumped version number
and runs the additive upgrade transaction on existing v3 IndexedDBs,
creating the missing tables. Brand-new IndexedDBs go straight to v4
and pick up the union of all four version blocks. Both paths now
have the news tables present.
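The version-block mechanics can be sketched with a toy model. This is not the Dexie API, just a stand-in for how Dexie unions version blocks up to the target version; the table names are invented:

```typescript
// NOT the Dexie API — a minimal stand-in mimicking how Dexie materializes
// the union of all version blocks <= the target version. Table names
// (preferences, newsFeedCache) are illustrative, not the real schema.
type Stores = Record<string, string>;

class VersionedDb {
  private blocks: { version: number; stores: Stores }[] = [];

  version(n: number) {
    const block = { version: n, stores: {} as Stores };
    this.blocks.push(block);
    return { stores: (s: Stores) => void (block.stores = s) };
  }

  // The schema a browser ends up with once upgraded to `target`.
  schemaAt(target: number): Stores {
    return this.blocks
      .filter((b) => b.version <= target)
      .sort((a, b) => a.version - b.version)
      .reduce<Stores>((acc, b) => ({ ...acc, ...b.stores }), {});
  }
}

const db = new VersionedDb();
db.version(1).stores({ preferences: "id" }); // published: never edit this
db.version(4).stores({ newsFeedCache: "id, fetchedAt" }); // additive fix

// A browser already at v3 never re-runs v1, so tables edited into v1 stay
// invisible there; the fresh v4 block makes the upgrade actually fire.
const schemaV3 = db.schemaAt(3);
const schemaV4 = db.schemaAt(4);
```

The real fix is only the `db.version(4).stores({…})` block; the stub exists to show why a browser stuck at v3 never sees tables edited into v1.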
═══ 2. /api/v1/news/feed → 401 Missing authorization header ═══
The news api.ts client was passing `credentials: 'include'` thinking
the cookie alone would carry auth through to mana-api. It does not —
apps/api's authMiddleware() reads the Authorization header and
ignores cookies. Every browser-side fetch returned 401, the feed
cache stayed empty, and the wizard's "Fertig" ("Done") → ranked
feed flow silently failed.
Fix: add a small `authHeader()` helper that pulls the JWT from
authStore.getAccessToken() and attaches it as
`Authorization: Bearer …`, mirroring the pattern in
modules/planta/api.ts. Both `fetchFeed()` and `extractFromUrl()` now
go through it. Drops the cookie credential entirely since it was a
no-op anyway.
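A sketch of the helper and its use, assuming an authStore exposing getAccessToken() as described above (the endpoint path is taken from this commit; exact signatures may differ):

```typescript
// Sketch: pull the JWT from the auth store and attach it as a Bearer
// header. No cookie fallback — mana-api's authMiddleware only reads the
// Authorization header.
type TokenSource = { getAccessToken(): string | null };

function authHeader(store: TokenSource): Record<string, string> {
  const token = store.getAccessToken();
  return token ? { Authorization: `Bearer ${token}` } : {};
}

// Both feed calls route through it, mirroring modules/planta/api.ts.
async function fetchFeed(store: TokenSource, baseUrl: string) {
  const res = await fetch(`${baseUrl}/api/v1/news/feed`, {
    headers: { ...authHeader(store) },
  });
  if (!res.ok) throw new Error(`feed fetch failed: ${res.status}`);
  return res.json();
}
```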
Also tidies a Svelte 5 `$props()` warning in modules/news/ListView.svelte
(empty destructure instead of binding to a `_props` const).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two unrelated bugs in the @mana/help package surfaced; together
they accounted for ~40 type errors:
Broken component imports
Ten components inside packages/help/src/components/ were importing
from `'../types.js'` and `'./content'` — neither path resolves.
The actual files are at `../ui-types` (where FAQSectionProps,
FeaturesOverviewProps etc. live) and `../content` (where FAQItem,
FeatureItem, FAQCategory live). Fix the imports to point at the
real files. The `.js` suffixes aren't required while tsc handles
module resolution, and the existing index.ts already re-exports
everything under the correct paths.
Net: -19 type errors across:
ChangelogEntry, ChangelogSection, ContactSection, FAQItem,
FAQSection, FeatureCard, FeaturesOverview, GettingStartedGuide,
HelpSearch, KeyboardShortcuts
content/help/index.ts SupportedLanguage cast
`getManaHelpContent()` was passing `currentLocale` (typed `string`)
into FAQ rows that expect a `SupportedLanguage` union — nine
errors, one per FAQ row. Add a small `asSupportedLanguage()` guard that
validates the locale string against the union and falls back to
'de' for unknown values. Single source of truth lives next to
the function that needed it.
Net: -9 type errors.
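A minimal sketch of the guard, assuming the five locales named elsewhere in this log (de/en/it/fr/es); the actual union in @mana/help may differ:

```typescript
// Sketch: validate an arbitrary locale string against the supported union,
// falling back to 'de' for unknown values. Locale list is an assumption.
const SUPPORTED = ["de", "en", "it", "fr", "es"] as const;
type SupportedLanguage = (typeof SUPPORTED)[number];

function asSupportedLanguage(locale: string): SupportedLanguage {
  return (SUPPORTED as readonly string[]).includes(locale)
    ? (locale as SupportedLanguage)
    : "de"; // unknown locales fall back to German
}
```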
Combined with the spiral-db dist rebuild (local-only, gitignored)
and the previous Observable migration commit, the total error count
drops from 418 → 115.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
SvelteKit types `$page.params.X` as `string | undefined` because the
runtime cannot prove a route param exists at the type level — even
if the route file lives at e.g. `[id]/+page.svelte` and TS knows the
folder name. Thirteen route files were passing the raw param into
functions that take `string`, producing 25 type errors of the shape:
Argument of type 'string | undefined' is not assignable to
parameter of type 'string'.
Fix: hoist the param into a local with `?? ''` at the top of the
script, then use the local everywhere downstream. Empty string is
a safe fallback because the consuming code (`useDeck('')`,
`getCollectionById([], '')`, etc.) all return null/undefined for
unknown ids — exactly what they'd do if the param were truly
missing at runtime, which can't happen given the matching route
folder.
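The hoist in isolation, with a stand-in consumer (getDeckById here plays the role of hooks like useDeck; the data is invented):

```typescript
// Stand-in consumer: returns null for unknown ids, like the hooks above.
type Deck = { id: string; title: string };
const decks: Deck[] = [{ id: "d1", title: "Anatomy" }];

const getDeckById = (id: string): Deck | null =>
  decks.find((d) => d.id === id) ?? null;

// $page.params.id is typed string | undefined; hoist once at the top of
// the script, then use the plain-string local everywhere downstream.
const params: Record<string, string | undefined> = { id: undefined };
const deckId = params.id ?? "";

// '' behaves exactly like a truly missing param would at runtime:
const deck = getDeckById(deckId); // null
```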
Files touched (one param hoist each):
calendar/event/[id] eventId
cards/decks/[id] deckId
citycorners/.../locations/[id] citySlug + locId
citycorners/.../locations/[id]/edit citySlug + locId
gifts/redeem/[code] code
inventory/collections/[id] collectionId
inventory/collections/[id]/edit collectionId
inventory/items/[id] itemId
photos/albums/[id] albumId
picture/board/[id] boardId
storage/files/[folderId] folderId
zitare/lists/[id] listId (new local, replaces inline use)
g/[code] code
Net: -24 type errors. The lone remaining "string | undefined" error
is a different bug in inventory FieldDefinition typing — unrelated.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Seven module query files were calling raw `liveQuery(async () => ...)`
from dexie and returning the resulting Observable<T>. Consumer code in
the route .svelte files then read `.value` (or `.current`) on those
observables, which doesn't exist on the Dexie type — TypeScript flagged
38 errors and the call sites were silently relying on a runtime
property that only happens to work because the Svelte reactivity layer
re-evaluates the access.
Migration: switch each `useXxx()` hook to wrap with the existing
`useLiveQueryWithDefault` from `@mana/local-store/svelte`. The wrapper
returns `{ value, loading, error }` (with `value` synced to a `$state`
under the hood), so call sites can read `.value` reactively without
casts. Each hook now provides a typed default array so the wrapper
infers the right shape on first render.
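The returned shape, sketched without the Svelte/Dexie internals (the real wrapper subscribes to the liveQuery observable and syncs value into a $state rune; this stand-in uses a bare promise):

```typescript
// Shape of what useLiveQueryWithDefault hands back; `value` holds the
// typed default until the first emission arrives.
interface LiveQueryResult<T> {
  value: T;
  loading: boolean;
  error: unknown;
}

function liveQueryWithDefaultSketch<T>(
  run: () => Promise<T>,
  defaultValue: T,
): LiveQueryResult<T> {
  const result: LiveQueryResult<T> = {
    value: defaultValue,
    loading: true,
    error: null,
  };
  run()
    .then((v) => { result.value = v; result.loading = false; })
    .catch((e) => { result.error = e; result.loading = false; });
  return result;
}

// Call sites read .value immediately — no casts, no `.current ?? []`.
const memos = liveQueryWithDefaultSketch(async () => [{ id: "m1" }], []);
```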
Modules migrated:
- chat — useAllConversations, useArchivedConversations,
useAllTemplates, useConversationMessages
- citycorners — useAllCities, useAllLocations, useAllFavorites
- memoro — useAllMemos, useArchivedMemos, useMemoriesByMemo,
useAllMemoTags, useAllSpaces
- nutriphi — useAllMeals, useAllGoals, useAllFavorites
- presi — useAllDecks, useDeckSlides, useDeck
- questions — useAllCollections, useAllQuestions,
useAnswersByQuestion
- skilltree — useAllSkills, useAllActivities, useAllAchievements
Call sites cleaned up:
- chat/[id], memoro/[id]: removed inline `as { value: T[] }` casts
that were the workaround for the broken type
- nutriphi/{,add,goals,history}/+page.svelte: `.current ?? []` →
`.value` (the wrapper guarantees the default array, so the
nullish coalesce was always dead)
- questions/{,[id],new,collections}/+page.svelte: same `.current` →
`.value` migration
Net: -38 type errors, no behavior change. The wrappers continue to
subscribe to the same Dexie liveQuery under the hood; only the
ergonomic surface changed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The DEFAULT_DAILY_VALUES constants are declared `as const` so each
field's type is a literal (e.g. `2000`, `50`). When the goals page
seeded its $state with these constants, TypeScript inferred the state
type as the literal — and any user-input number assignment then failed
type-check with "Type 'number' is not assignable to type '2000'".
The error was hidden until earlier today: the goals page had the
same pre-existing `.current` pattern as the rest of the nutriphi
routes, and tsc was short-circuiting on the `.current` error before
ever reaching the literal-type assignment. Now that queries.ts has
moved to useLiveQueryWithDefault, `.current` is gone and the
literal-typing error surfaces.
Fix: explicitly type each $state as `<number>` so the literal widens
to a regular numeric state slot.
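The widening behavior in plain TypeScript ($state is Svelte-specific, so an ordinary annotated let stands in for `$state<number>(…)`; the field names are illustrative):

```typescript
// `as const` makes each field a literal type, and a let initialized from a
// non-fresh literal type keeps the literal instead of widening to number.
const DEFAULT_DAILY_VALUES = { calories: 2000, protein: 50 } as const;

// let calories = DEFAULT_DAILY_VALUES.calories; // inferred type: 2000
// calories = 1800; // error: Type 'number' is not assignable to type '2000'

// The fix, written here as a plain annotation (in the route it's
// $state<number>(DEFAULT_DAILY_VALUES.calories)):
let calories: number = DEFAULT_DAILY_VALUES.calories;
calories = 1800; // fine now
```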
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Ollama provider was completely ignoring `response_format` from the
incoming OpenAI-compatible request. Two consequences:
1. Clients that asked for `{"type":"json_object"}` or
`{"type":"json_schema",...}` got back JSON wrapped in
```json ... ``` markdown fences, because Ollama defaults to
conversational output.
2. Strict downstream parsers (Vercel AI SDK `generateObject`,
manual `JSON.parse`) failed to decode the response and threw,
even though the underlying JSON was valid inside the fences.
Fix: when response_format is set, translate it to Ollama's native
`format` field:
- `{"type":"json_object"}` → `format: "json"`
- `{"type":"json_schema","json_schema":{"schema":{...}}}`
→ `format: <the schema dict>` (Ollama 0.5+ supports full JSON
schemas in the format field)
Defensive belt-and-suspenders: a small `_strip_json_fences` helper
runs after the Ollama response is decoded and removes any leftover
```json ... ``` wrapping. Some older vision models still wrap
output in fences even when `format` is set; this catches them.
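Both pieces as a TypeScript sketch; the real implementation is Python inside mana-llm's Ollama provider, so names and exact shapes here are illustrative:

```typescript
// OpenAI-style response_format → Ollama's native `format` field.
type ResponseFormat =
  | { type: "json_object" }
  | { type: "json_schema"; json_schema: { schema: Record<string, unknown> } };

function toOllamaFormat(rf: ResponseFormat): string | Record<string, unknown> {
  // json_object → the string "json"; json_schema → the schema dict itself
  return rf.type === "json_object" ? "json" : rf.json_schema.schema;
}

// Belt-and-suspenders: unwrap ```json fences that some older vision models
// still emit even when `format` is set.
function stripJsonFences(text: string): string {
  const m = text.trim().match(/^```(?:json)?\s*\n?([\s\S]*?)\n?```$/);
  return m ? m[1].trim() : text.trim();
}
```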
Streaming path is unchanged because the nutriphi/planta refactor uses
non-streaming `generateObject`. Streaming structured output with
Ollama deserves its own pass when someone actually needs it.
Discovered during the AI SDK + Zod refactor smoke test — neither the
old nor the new vision routes ever returned validated JSON locally
because of this bug. Production uses Google Gemini directly via
fallback so the issue was masked there.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Found while smoke-testing the AI SDK refactor: both nutriphi and planta
were calling `${MANA_LLM_URL}/api/v1/chat/completions` and passing
`gemini-2.0-flash` as the model name. Both wrong:
1. mana-llm exposes routes under /v1/, not /api/v1/. The original
pre-refactor code had the same bug — it predates this commit and
was apparently never noticed because the photo workflow was never
wired into the unified app's UI until last week. /api/v1 returned
404 against the live mana-llm container; now we hit /v1.
2. mana-llm's router parses model strings as `provider/model`
(services/mana-llm/src/providers/router.py:_parse_model). Without
a prefix, `gemini-2.0-flash` was being routed as
`ollama/gemini-2.0-flash` and only worked via the auto-fallback
to Google when ollama failed. Be explicit: `google/gemini-2.0-flash`
hits the Google provider directly and skips the failed-ollama
round-trip.
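The prefix parsing, sketched in TypeScript (the real _parse_model is Python in router.py; the 'ollama' default is taken from the behavior described above):

```typescript
// Split "provider/model"; an unprefixed name falls through to ollama,
// which is exactly why `gemini-2.0-flash` took the failed-ollama detour.
function parseModel(model: string): { provider: string; model: string } {
  const i = model.indexOf("/");
  return i === -1
    ? { provider: "ollama", model } // no prefix → ollama default (assumed)
    : { provider: model.slice(0, i), model: model.slice(i + 1) };
}
```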
VISION_MODEL env var still wins over the default, so prod overrides
remain possible.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds scripts/validate-cloudflared-config.mjs — a node-only validator
that lint-staged runs whenever cloudflared-config.yml is staged. The
goal is to catch the same failure modes that
`cloudflared tunnel ingress validate` would catch on the server, but
without requiring cloudflared to be installed on every dev box.
Checks:
- YAML parses
- tunnel: is a uuid
- credentials-file: ends with .json and contains the tunnel id
(warning when it doesn't — likely an out-of-sync remnant from a
previous rebuild, exactly the failure mode that bit us in the
first locally-managed switch)
- ingress: is a non-empty array
- every rule except the last has both hostname AND service
- the LAST rule is the catch-all `service: http_status:NNN`
- no duplicate hostnames (the most common copy-paste mistake)
- service URLs look like http(s):// / ssh:// / http_status:NNN
/ unix:/ / hello_world
- hostnames are lowercase dot-separated DNS labels (no spaces, no
weird characters)
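Two of the listed checks as a sketch (the real validator is a plain node .mjs script; field names follow cloudflared's ingress config):

```typescript
type IngressRule = { hostname?: string; service: string };

// The most common copy-paste mistake: the same hostname in two rules.
function findDuplicateHostnames(rules: IngressRule[]): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const r of rules) {
    if (!r.hostname) continue;
    if (seen.has(r.hostname)) dupes.add(r.hostname);
    seen.add(r.hostname);
  }
  return [...dupes];
}

// The last rule must be the hostname-less http_status:NNN catch-all.
function lastRuleIsCatchAll(rules: IngressRule[]): boolean {
  const last = rules[rules.length - 1];
  return !!last && !last.hostname && /^http_status:\d{3}$/.test(last.service);
}
```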
Wired into lint-staged.config.js with a single glob entry; the
existing eslint + prettier flow is unchanged.
Tested against the live cloudflared-config.yml (passes, 51 hostnames)
and a synthetic broken file (catches all 6 categories of error +
the credentials-file/tunnel id drift warning).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two improvements to scripts/mac-mini/rebuild-tunnel.sh based on what
the first prod run actually surfaced.
═══ 1. Apex domain auto-fix via Cloudflare API ═══
`cloudflared tunnel route dns` cannot route the apex of a zone
(error code 1003: "An A, AAAA, or CNAME record with that host already
exists"). The CLI has no command to delete those records. The first
rebuild left mana.how returning 530 because the script silently
failed to route it and we had to fix the apex manually in the
dashboard.
The new `apex_route_via_api()` helper:
- Detects apex hostnames by dot count (one dot → two-label name)
- Uses $CLOUDFLARE_API_TOKEN if available
- Resolves the zone id by name
- Deletes any existing A / AAAA / CNAME records on the apex
- Creates a fresh proxied CNAME pointing at <tunnel>.cfargotunnel.com
- Cloudflare's CNAME flattening at the apex makes this work
transparently
If $CLOUDFLARE_API_TOKEN is not set, the script logs a warning at the
top of step 6 and falls back to the old behavior (route fails, user
fixes the apex manually). The token needs Zone:DNS:Edit on the
target zone.
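The helper's flow as a TypeScript sketch; the real helper is bash + curl. The endpoints are Cloudflare's documented v4 API, but error handling and pagination are omitted:

```typescript
// One dot → two-label name → apex of a zone.
const isApex = (hostname: string) => hostname.split(".").length === 2;

// Sketch of apex_route_via_api: resolve zone, clear blocking records,
// create the proxied CNAME. Never invoked here; illustration only.
async function apexRouteViaApi(hostname: string, tunnelId: string, token: string) {
  const cf = async (path: string, init?: { method?: string; body?: string }) => {
    const res = await fetch(`https://api.cloudflare.com/client/v4${path}`, {
      method: init?.method ?? "GET",
      body: init?.body,
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    });
    return ((await res.json()) as { result: any }).result;
  };

  const [zone] = await cf(`/zones?name=${hostname}`);
  // The CLI can't delete the blocking records; the API can.
  const records = await cf(`/zones/${zone.id}/dns_records?name=${hostname}`);
  for (const r of records) {
    if (["A", "AAAA", "CNAME"].includes(r.type)) {
      await cf(`/zones/${zone.id}/dns_records/${r.id}`, { method: "DELETE" });
    }
  }
  // Proxied CNAME at the apex; Cloudflare flattens it transparently.
  await cf(`/zones/${zone.id}/dns_records`, {
    method: "POST",
    body: JSON.stringify({
      type: "CNAME",
      name: hostname,
      content: `${tunnelId}.cfargotunnel.com`,
      proxied: true,
    }),
  });
}
```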
═══ 2. Smarter HTTP verification ═══
The first run reported "5 hosts down (404/000)" but those were all
backend services without a root handler — credits/media/llm/mana-api
all return 404 at `/` and 200 at `/health`. The verify pass was
flagging healthy services as down and made the rebuild look more
broken than it was.
New `probe_host()` tries `/health` first, falls back to `/` only if
/health returned 4xx, and prefers a 2xx/3xx root response over a 4xx
/health. `probe_is_down()` only counts 5xx and 000 (libcurl error)
as failures — anything in 1xx-4xx means the request reached the
origin and the tunnel routing is correct, which is the actual thing
the verify pass cares about. `probe_label()` adds a one-word health
summary so the verify log reads "200 ok" / "401 auth required" /
"404 routed (no handler)" / "530 tunnel error" instead of just bare
status codes.
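The classification logic as a sketch (the real helpers are bash functions; label strings follow the examples above):

```typescript
// Only 5xx and 000 count as "down": anything 1xx–4xx reached the origin,
// which is all the verify pass needs to know about tunnel routing.
function probeIsDown(status: number): boolean {
  return status === 0 || status >= 500; // 000 = libcurl-level failure
}

// One-word health summary for the verify log.
function probeLabel(status: number): string {
  if (status === 0) return "000 unreachable";
  if (status === 530) return "530 tunnel error";
  if (status === 404) return "404 routed (no handler)";
  if (status === 401) return "401 auth required";
  return status >= 200 && status < 300 ? `${status} ok` : `${status}`;
}
```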
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After running scripts/mac-mini/rebuild-tunnel.sh, the old remotely-
managed tunnel bb0ea86d-... was deleted and a new locally-managed
tunnel 1435166a-... took its place. The script's in-place sed of
the repo file didn't actually persist (the server-side ~/.cloudflared/
config.yml was patched, but the repo file ended up identical to HEAD
because the dev box had a stale checkout that got pulled over).
This commit catches the repo file up to the new tunnel id so a fresh
clone + setup-cloudflared-service.sh run wires the right credentials
file from the start. cloudflared has been running fine on the new
tunnel id since the rebuild — it auto-resolved the credentials from
~/.cloudflared/cert.pem when the in-config tunnel id pointed at a
deleted tunnel — but the file should match reality regardless.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After the planta + nutriphi modules in apps/api started importing
shared Zod schemas from @mana/shared-types, the runtime crashed in
a restart loop with:
error: ENOENT reading "/app/apps/api/node_modules/@mana/shared-types"
Same root cause as the @mana/media-client gotcha already in this
Dockerfile: the build context only includes the workspace packages
that are explicitly COPYed, and shared-types was missed when it
became a transitive dependency.
Add the COPY line and rebuild. Also extend the comment block to
make the rule explicit ("when adding a new @mana/* import to any
apps/api module, add the package here too").
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks { schemaVersion, data }.
Backend wraps via envelope() in each module routes.ts; frontend api.ts
unwraps via unwrapEnvelope<T>() which throws AiSchemaVersionMismatchError
on drift — actionable network-panel error instead of cascading
'field is undefined' bugs further down the stack.
Also adds providerOptions.anthropic.cacheControl on the system message
in nutriphi + planta routes via SYSTEM_CACHE_HINT. NO-OP today (Gemini
backend, ~50-token prompts under the 1024-token cache minimum) but
lights up automatically when mana-llm routes to Claude or prompts grow
past the threshold. ~5 lines per route, no risk.
System messages migrated from system: shorthand to a full messages[]
entry — the only way to attach providerOptions per-message in the AI SDK.
13 new tests in nutriphi/ai-schemas.test.ts cover the version constant,
the mismatch error shape, and Zod accept/reject for both schemas. Total
nutriphi + planta suite: 62/62.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The user asked "bist du kopernikus?" ("are you Copernicus?") while playing Galileo. The
LLM correctly responded "Kopernikus? ... aber nicht meiner!" — and
then appended [IDENTITY_REVEALED] anyway. Game flipped to "won
in 2 messages" with Galileo's name revealed, even though the
guess was wrong.
This is gemma3:4b being lazy about the sentinel rule: any time the
user says "bist du <name>?", the model is biased toward emitting
the sentinel because the prompt mentions "errät den Namen"
("guesses the name"). Weaker LLMs generally struggle to follow
strict negative instructions when the trigger word is right there
in the input.
Fix in three layers:
1. Server-side validation (the real safety net). When the LLM
emits [IDENTITY_REVEALED], independently verify that the user's
CURRENT message contains the canonical character name (or one
of its significant parts) using the same matchesName helper
the explicit /guess endpoint uses. If the LLM emitted but the
user didn't actually name this character, strip the sentinel,
log a who.sentinel_false_positive, and treat the reply as a
normal turn. The legit cases — user actually said the right
name — still flow through cleanly.
2. matchesName improvements. The previous logic only matched a
single-word guess against name parts; "bist du leonardo?" would
fall through and miss a real win. Rewritten to:
a) exact normalized match
b) guess contains the full name as substring
c) guess contains any significant name part as a WHOLE WORD
Plus a Set for the guessWords lookup so it's O(1) per part.
3. Tighter system prompt. Added explicit "Sentinel-Regel" section
with two FALSCH examples ("bist du Tesla?" while playing Edison,
"bist du ein Erfinder?") and two KORREKT examples. Doesn't fix
the false-positive rate at the model level but reduces it.
Layer 1 is the load-bearing one — even if the LLM emits the
sentinel for the wrong reason, the server gates the reveal on
ground truth.
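The matchesName rules (a/b/c) as a sketch; the normalization and the "significant part" threshold are assumptions, not the exact implementation:

```typescript
// Lowercase, strip diacritics, trim — assumed normalization.
const normalize = (s: string) =>
  s.toLowerCase().normalize("NFD").replace(/[\u0300-\u036f]/g, "").trim();

function matchesName(guess: string, canonicalName: string): boolean {
  const g = normalize(guess);
  const name = normalize(canonicalName);
  if (g === name) return true;                 // a) exact normalized match
  if (g.includes(name)) return true;           // b) full name as substring
  const guessWords = new Set(g.split(/\W+/));  // Set → O(1) per part
  return name
    .split(/\s+/)
    .filter((part) => part.length > 2)         // "significant" parts (assumed)
    .some((part) => guessWords.has(part));     // c) whole-word part match
}
```

With this, "bist du leonardo?" matches Leonardo da Vinci via rule c, and the server-side gate in layer 1 can reuse the same helper to verify the sentinel against the user's current message.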
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The runbook for diagnosing why pending changes aren't flushing on
mana.how. Was sitting untracked in the repo root for the last week
of debugging; committing now that the debug surface it depends on
(window.__unifiedSync + getDebugInfo() + the surfaced silent
failures) actually exists.
Three steps in dependency order:
Schritt A — read _pendingChanges directly from IndexedDB to find
out which appIds and collections are stuck. Output drives the
appId choice for Schritt B.
Schritt B — manual POST against /sync/{appId} with the JWT from
localStorage. Status code mapping table tells you whether the
bug is server-side (4xx/5xx) or client-side (200 → sync engine
isn't running).
Schritt C — read window.__unifiedSync.getDebugInfo() (newly
exposed in this commit batch) to see channel state. Compare
knownAppIds against the Schritt A output: any appId with pending
rows but no channel will accumulate forever, and the new
console.warn from sync.ts will already be naming it explicitly.
Schritt B is the diagnostic key — everything else follows from
its status code.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The SYNC_DEBUG.md runbook tries to inspect window.__unifiedSync from
DevTools to figure out why pending changes aren't flushing on
mana.how. The script can't work because (a) the unified sync
instance is never exposed on window and (b) the two most likely
failure modes — push for an unknown appId, getToken() returning
null — both `return` silently with no error, no telemetry, no
state change. The pending count climbs and there's nothing in
the console to point at the cause.
This commit makes those failures visible:
push() unknown appId
When a pending change lands for an appId that isn't in the
registered channels map (almost always a registry/migration
drift like renaming an appId without migrating the existing
pending rows) we now log a warning that names the offending
appId, lists the known ones for comparison, and emits a
push:error telemetry event with errorCategory='unknown-appid'.
The pending rows for that appId would otherwise accumulate
forever — same symptom as the SYNC_DEBUG report.
push() no token
getValidToken() can return null if the local exp check failed
and the refresh-on-online retry didn't yield a new token. This
was the silent path that was hardest to diagnose: the existing
health-check telemetry only fires after a successful fetch, so
there was no signal at all. We now log a warning, set
channel.lastError = 'no-token', flip status to 'error' and emit
push:error with errorCategory='no-token'.
sync-telemetry.ts
Widens the errorCategory union to include 'no-token' and
'unknown-appid' so the new emits type-check.
getDebugInfo()
New method on the createUnifiedSync return value. Returns a
flat, JSON-serializable snapshot of every channel's state
(status, online, clientId, serverUrl, channels[appId] with
lastError + timer flags, plus knownAppIds at top level) so the
SYNC_DEBUG runbook (Schritt C) can compare what the server
is being asked to sync vs. what's actually sitting in
_pendingChanges.
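A sketch of the snapshot shape and the Schritt C comparison (field names come from the description above; the exact structure is an assumption):

```typescript
// Assumed shape of the getDebugInfo() snapshot.
interface ChannelDebug {
  status: "idle" | "syncing" | "error";
  lastError: string | null;
  timerActive: boolean;
}

interface SyncDebugInfo {
  online: boolean;
  clientId: string;
  serverUrl: string;
  knownAppIds: string[];
  channels: Record<string, ChannelDebug>;
}

// Schritt C in one line: appIds with pending rows but no registered
// channel will accumulate forever.
function orphanedAppIds(info: SyncDebugInfo, pendingAppIds: string[]): string[] {
  return pendingAppIds.filter((id) => !info.knownAppIds.includes(id));
}
```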
(app)/+layout.svelte
Exposes the live unified-sync instance on window.__unifiedSync
in the browser. Not a security concern: every method on the
returned object is also reachable via Dexie + a fresh fetch
from the same DevTools console, and a malicious user can't
escalate anything by poking at it. This is the global the
SYNC_DEBUG Schritt C script needs to exist.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two complementary improvements that take the body module from "works
in DE/EN" to "works for every Mana user" and surface the highest-
value cross-module integration the merged module unlocks.
i18n — finish the rollout
it/fr/es JSON files were already present from the initial body
drop but only had the original copy. Add the new keys introduced
by the quick-win commits last week:
- phase.{start,end,startNew}
- progression
- routines.{title,start,empty}
- exercisePicker.{title,pick,search,empty,create}
- muscle.* (13 muscle group labels)
- calorieWeight (used by the new chart below)
de.json + en.json get the calorieWeight key for the new section.
Translations are real (not machine-default fallbacks) so the
Body module is now first-class in all five supported locales.
CalorieWeightChart — Body × Nutriphi correlation
The whole point of having both modules in the same app is being
able to ask "did the cut work?" without exporting CSVs. This
component overlays daily calorie intake (summed across nutriphi
meals) against bodyweight readings over the last 8 weeks, with
an optional dashed target-weight line driven by the active phase.
Key design choices:
- Two y-axes auto-scaled independently (calories left, weight
right) so a 2000kcal swing and a 1kg swing both stay visible.
- Days without data are omitted from the path; after each gap the
line restarts with a new "M" subpath so a missed weigh-in doesn't
show as a hard drop to zero.
- Target-weight overlay only renders when it falls inside the
visible weight range — clamping it to the edge would create
a meaningless boundary stripe.
- Cut-friendly delta colors: weight DOWN is green (you're on
track), weight UP is red. Calorie deltas use the same scheme
(down = restriction working).
- Pure SVG, no chart-lib dependency, same auto-scale primitive
we already use for WeightChart and ExerciseProgressionChart.
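The gap handling is the interesting primitive; a sketch of a path builder that restarts a subpath after missing days (not the component's actual code, and coordinates here are pre-scaled):

```typescript
// null y = no reading that day. Instead of connecting through zero, lift
// the pen and start a fresh "M" subpath after each gap.
type Point = { x: number; y: number | null };

function pathWithGaps(points: Point[]): string {
  let d = "";
  let penDown = false;
  for (const p of points) {
    if (p.y === null) {
      penDown = false; // gap: next real point starts a new subpath
      continue;
    }
    d += `${penDown ? "L" : "M"}${p.x},${p.y} `;
    penDown = true;
  }
  return d.trim();
}
```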
Cross-module read: new `useNutriphiMealsSince(date)` helper in
body/queries.ts — lives in body (not nutriphi) because the body
module owns the integration boundary, and putting the cross-table
read in one place keeps the import graph from getting circular if
nutriphi ever wants to reach back.
The hook decrypts the nutriphi `meals` table (already encrypted at
rest by the meals registry entry) and projects to a thin
MealWithNutrition shape for the chart. Decrypt cost on a few
hundred meal rows is negligible vs. the value of the chart.
Wired into the body layout as a 7th context (`bodyNutriphiMeals`)
with `dateNDaysAgo(56)` — 8 weeks covers a typical cut/bulk
cycle. ListView renders a new "Kalorien × Gewicht" ("Calories ×
Weight") card between the Weight section and the Daily Check.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two related AI-infrastructure hardenings landing together because both
touch the same nutriphi/planta route definitions:
═══ 1. Wire-format schema versioning ═══
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks a single envelope dialect:
{ schemaVersion: '1', data: <validated object> }
Backend wraps via a small `envelope()` helper in each module's routes.ts;
frontend api.ts unwraps via `unwrapEnvelope<T>()` which throws an
AiSchemaVersionMismatchError if the server returns a version this
client wasn't compiled against.
Why this matters before launch:
- Catches stale-cache scenarios immediately ("client v1 talking to
server v2") with an actionable error in the network panel, not a
cascade of "field is undefined" bugs further down the stack
- Forces explicit version bumps when we make non-additive schema
changes — the bump rules are documented inline next to the constant
- Cheap to remove if it ever feels overkill: drop the envelope() call
on the backend and the unwrapEnvelope on the frontend, ~10 lines
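The whole dialect fits in a few lines; a sketch using the names above (exact shapes may differ from the real @mana/shared-types code):

```typescript
const AI_SCHEMA_VERSION = "1" as const;

interface AiResponseEnvelope<T> {
  schemaVersion: string;
  data: T;
}

class AiSchemaVersionMismatchError extends Error {
  constructor(got: string) {
    super(`AI schema version mismatch: server=${got}, client=${AI_SCHEMA_VERSION}`);
  }
}

// Backend side: wrap the validated object.
const envelope = <T>(data: T): AiResponseEnvelope<T> => ({
  schemaVersion: AI_SCHEMA_VERSION,
  data,
});

// Frontend side: unwrap, throwing an actionable error on drift.
function unwrapEnvelope<T>(e: AiResponseEnvelope<T>): T {
  if (e.schemaVersion !== AI_SCHEMA_VERSION) {
    throw new AiSchemaVersionMismatchError(e.schemaVersion);
  }
  return e.data;
}
```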
═══ 2. Anthropic prompt-caching directive (forward-compat) ═══
Adds `providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } }`
on the system message in nutriphi + planta routes via a SYSTEM_CACHE_HINT
constant. This is a NO-OP today because:
- mana-llm currently routes to Gemini, not Claude
- Our system prompts are ~50 tokens, well under Anthropic's 1024-token
cache minimum
Kept anyway because it's ~5 lines per route and lights up automatically
when either condition flips (e.g. when we add per-user dietary preferences
as system context, pushing prompts past the threshold). The day we point
mana-llm at Claude Sonnet, every existing call site already has caching
enabled — no scavenger hunt through the routes.
System messages had to migrate from the `system:` shorthand to a full
messages[] entry to attach providerOptions, which is a tiny readability
loss but the only way to get per-message metadata into the AI SDK.
═══ Tests ═══
13 new cases in apps/mana/apps/web/.../nutriphi/ai-schemas.test.ts cover:
- AI_SCHEMA_VERSION presence + AiSchemaVersionMismatchError shape
- MealAnalysisSchema acceptance/rejection (confidence bounds, missing
nutrients, optional food fields, default empty arrays)
- PlantIdentificationSchema (every-field-optional design, defaults,
confidence range)
(Test file lives in the web app rather than packages/shared-types
because the latter has no test runner configured — adding vitest there
just for these would be overkill.)
Total nutriphi + planta suite: 62/62 passing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Onboarding's "Fertig" button was failing with two distinct errors:
1. Feed fetch hit `http://mana-api:3060/api/v1/news/feed` (the SSR-only
internal Docker hostname) and was blocked by CSP. The news client was
reading `$env/dynamic/public.PUBLIC_MANA_API_URL`, which on the client
resolves to whatever the SSR process had — i.e. the internal hostname.
Switched to the existing `getManaApiUrl()` helper, which on the client
reads `window.__PUBLIC_MANA_API_URL__` (set from
`PUBLIC_MANA_API_URL_CLIENT` = `https://mana-api.mana.how`).
2. `completeOnboarding` passed Svelte 5 `$state` proxy arrays directly
into the preferences store, which then handed them to Dexie's update
hook → `_pendingChanges.add` → `DataCloneError`. The picked arrays
are now snapshotted with `$state.snapshot()` at the call site, and
the store-side setters defensively spread their inputs so any future
caller is safe by default.
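Why the DataCloneError happens, demonstrated with a plain Proxy standing in for the $state proxy (IndexedDB structured-clones its inputs, and structured clone rejects proxies):

```typescript
// Stand-in for a Svelte 5 $state array — $state values are proxies.
const reactive = new Proxy([1, 2, 3], {});

let cloneFailed = false;
try {
  structuredClone(reactive); // proxies are rejected → DataCloneError
} catch {
  cloneFailed = true;
}

// $state.snapshot() at the call site and the defensive spread in the
// store-side setters both produce a plain, clonable value:
const plain = [...reactive];
const cloned = structuredClone(plain); // fine
```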
When the access token had aged out mid-game and the silent refresh
failed (auth.mana.how/api/v1/auth/refresh → 401), the who store
threw a raw "not authenticated" error and the PlayView showed a
gibberish red banner. Confusing because the navbar still shows the
user as logged in — the session cookie is intact, only the JWT is
gone — so the user has no clue what to do.
Match the base-client.ts pattern: when getAccessToken() returns
null OR the upstream returns 401, fire guestPrompt.requireAccount()
to surface the standard "Sitzung abgelaufen, neu anmelden" prompt
in the bottom-bar slot, then throw a German error string so the
inline error banner reads as "Sitzung abgelaufen — bitte neu
anmelden" ("session expired — please sign in again") instead of
"not authenticated".
Hit by the developer mid-test on the first end-to-end live game on
production: the chat had been working for ~5 messages, then the
JWT expired and the game appeared to "die" with a cryptic message.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Drops the hand-written MealAnalysisResult / AnalyzedFood / NutritionData
interfaces in nutriphi/{api,types}.ts and the IdentifyResult interface
in planta/api.ts. They are now type aliases that re-export the inferred
types from @mana/shared-types — same types the backend validates against
at the boundary, so frontend and backend can no longer drift.
Net result is end-to-end type safety: a field rename in the shared
schema lights up red in both apps/api routes and apps/mana/apps/web
consumers in the same tsc pass. No more interface duplication, no more
manual sync.
Storage shapes (LocalMeal, LocalGoal, LocalFavorite) stay module-local
because they compose the shared NutritionData / AnalyzedFood with
storage-specific BaseRecord fields (id, userId, _fieldTimestamps,
deletedAt, etc.) that have no place in the wire format.
Tests: 29/29 nutriphi + 20/20 planta still green — the shapes are
identical, only the source of the type aliases changed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaces hand-rolled fetch + JSON.parse + cast-to-any with generateObject
from the AI SDK. The model is constrained to the shared Zod schemas in
@mana/shared-types, so the response is validated at the boundary instead
of trusting Gemini to emit the right shape.
Routes refactored:
- nutriphi/analysis/photo (image_url → multimodal `image:` content)
- nutriphi/analysis/text (free-text meal description)
- planta/analysis/identify (plant photo identification)
Why this is materially better than the old code:
- Runtime validation: if Gemini drifts, the AI SDK throws before the
response leaves the route. Frontend never sees malformed payloads.
- Provider-portable: createOpenAICompatible({ baseURL: MANA_LLM_URL })
keeps mana-llm as the central routing/auth/observability point. The
AI SDK speaks the OpenAI dialect to mana-llm. If we ever swap the
backend (e.g. claude-sonnet-4-6 for plant ID), it's a one-line model
name change.
- System prompts moved from a multi-line example-laden string to a
short instruction. The schema itself (with .describe() field hints)
now carries the structural contract that the JSON-by-example
paragraph used to encode. Token cost goes down, accuracy goes up.
- Drops manual fetch error handling (status checks, JSON.parse, cast)
in favour of try/catch around generateObject. Errors are typed.
mana-llm itself is unchanged — it's still the OpenAI-compatible proxy
in front of Gemini Vision. The AI SDK just gives us a typed client and
a schema-aware decoder on top of it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces packages/shared-types/src/ai-schemas.ts as the single source
of truth for the wire format between mana-api and the unified Mana app.
Two schemas:
- MealAnalysisSchema (foods, totalNutrition, description, confidence,
warnings, suggestions) — consumed by nutriphi /analysis/photo and
/analysis/text routes
- PlantIdentificationSchema (scientificName, commonNames, confidence,
health/watering/light advice, generalTips) — consumed by planta
/analysis/identify
Both schemas include .describe() annotations on every field. The Vercel
AI SDK passes these through to the model as part of the structured-output
prompt, which materially improves accuracy on Gemini Vision (the model
sees both the field name AND the German-language hint about what to put
there).
Schemas use plain .optional() rather than .nullable() because
generateObject() guides the model with strict schema adherence — it
won't emit JSON null for missing fields, just omit them.
Deps wired up:
- apps/api: + ai@6, + @ai-sdk/openai-compatible@2, + @mana/shared-types
- apps/mana/apps/web: + zod (for z.infer of the shared schemas)
- packages/shared-types: + zod (for the schema definitions themselves)
All three on zod ^3.23 to stay in lockstep with the existing
apps/api zod usage.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The NPC reply rendered as a fully-white bubble with invisible
white-on-white text. Three bugs in the message-bubble markup,
all from copy-pasting Tailwind v3 patterns into a v4 codebase:
1. text-white-90 is not a valid class name in any Tailwind
version. The opacity goes after a slash: text-white/90.
2. bg-white + bg-opacity-5 is the v3 pattern. v4 dropped
bg-opacity-* and folded opacity into the color via
bg-white/5. Without it the bubble was solid white.
3. Combining 1 and 2: solid white background + invalid text
color → text inherited the parent's white → invisible.
Plus a Svelte-specific gotcha: class:bg-emerald-500/10={cond}
doesn't parse because Svelte's class: directive treats `/` as a
token. Use a class={...} string interpolation instead, which is
how the result banner now picks between the won and surrendered
backgrounds.
Also: rewrote the message bubble loop with an explicit
{#if msg.sender === 'user'}/{:else} branch instead of stacking
class:* directives. Less clever, more legible, and dodges the
class: + slash issue at the source.
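The class={...} interpolation workaround can be isolated into a tiny helper — a hypothetical sketch (the actual banner inlines the ternary, and the rose color for the surrendered state is an assumption; the doc only names the won/surrendered split):

```typescript
// Hypothetical helper mirroring the result-banner logic: because Svelte's
// class: directive chokes on the `/` in v4 opacity classes, the class
// string is built in plain TS and bound via class={...} instead.
function resultBannerClass(won: boolean): string {
  const base = "rounded-lg p-4 text-white/90";
  // bg-emerald-500/10 and bg-rose-500/10 are the v4 slash-opacity forms
  // that class:bg-emerald-500/10={cond} cannot express.
  return `${base} ${won ? "bg-emerald-500/10" : "bg-rose-500/10"}`;
}
```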
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
11 vitest cases covering the load-bearing parts of bodyStore that
would otherwise rot silently because they only fire on edge paths
(re-tap, phase switch, double-start). Same harness as
nutriphi/mutations.test.ts: fake-indexeddb + a MemoryKeyProvider
seeded with a fresh master key, plus mocks for the browser-only
globals the Dexie hooks reach for (funnel-tracking, triggers,
inline-suggest).
Coverage:
Encryption (registry round-trip)
- Exercise: name + notes wrapped, muscleGroup + equipment +
isPreset stay plaintext for the index/picker layer
- Set: weight + reps wrapped (numeric values get JSON-stringified
before encryption), workoutId + exerciseId + order + isWarmup
stay plaintext
upsertCheck idempotency
- Re-tapping the same date updates the existing row instead of
creating a second one (the bug this guards against would have
filled bodyChecks with one row per dot-tap on a slow day)
- Partial updates preserve prior fields when callers pass
undefined for the others
- Different dates get different rows
startPhase auto-close
- Opening a second phase closes the previous one's endDate
(so the "active phase" view always sees ≤ 1 open row)
- endPhase stamps endDate without soft-deleting the row
startWorkout single-active guard
- Returns the existing open workout instead of starting a
second one (would have silently double-tracked sets)
- After finishWorkout, a fresh start works again
logSet ordering
- Assigns sequential order indices within a workout
deleteWorkout cascade
- Soft-deletes the workout AND all its sets in one go
All 11 pass against the v2 schema (bodyExercises / bodyWorkouts /
bodySets / bodyChecks / bodyPhases) plus the registry encryption
allowlist landed in the previous body commits.
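The upsertCheck contract the tests pin down can be modelled in isolation — a hedged sketch against an in-memory array instead of Dexie (field names match the description above; everything else is assumed):

```typescript
// In-memory model of the upsertCheck contract: one row per date, partial
// patches preserve prior fields. The real store does this against the
// bodyChecks Dexie table; this sketch only captures the behavior.
interface BodyCheck {
  date: string;               // YYYY-MM-DD, the upsert key
  energy?: number;
  sleep?: number;
  soreness?: number;
  mood?: number;
}

function upsertCheck(rows: BodyCheck[], patch: BodyCheck): BodyCheck[] {
  const existing = rows.find((r) => r.date === patch.date);
  if (!existing) return [...rows, { ...patch }];
  // Drop undefined values so a partial patch never clobbers prior fields.
  const defined = Object.fromEntries(
    Object.entries(patch).filter(([, v]) => v !== undefined),
  );
  return rows.map((r) => (r.date === patch.date ? { ...r, ...defined } : r));
}
```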
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Five quick-win UI upgrades that take the body module from "skeleton
ListView" to "actually usable for daily training":
1. ExercisePicker modal (replaces the previous bare <select> in
WorkoutLogger). Search by name, filter chips per muscle group,
inline create-new-exercise. The big win is the per-row "Last:
80kg × 8 · vor 3 Tagen" hint — progressive overload becomes
"look at the number, add 2.5kg" instead of digging through
workout history. Recently-trained exercises bubble to the top
so the picker matches what you actually do most days.
2. RoutineManager. Three seed routines added to BODY_GUEST_SEED
(Full Body Starter, Upper Day, Lower Day) so a fresh user has
a one-tap "start" path. Inline form to save custom routines as
chips of selected exercises. Archive button per routine; edit
is deferred. Routines start a workout via the existing
bodyStore.startWorkout({ routineId, title }) shape.
3. PhaseManager replaces the previously read-only header pill with
a clickable control. Three states: idle (start button), opening
(kind picker + start/target weight inputs), active (color-coded
summary card with end button). The auto-close-on-switch logic
was already in bodyStore.startPhase, so this is pure UI plumbing.
4. ExerciseProgressionChart. Same auto-scaled SVG approach as
WeightChart but plots best estimated 1RM (Epley) per day for
one exercise. Falls back to the most-recently-trained exercise
when no explicit id is pinned, so the chart is never empty on
first open.
5. New query helpers feeding the above: getLastSetByExercise,
getE1rmTimeline (collapses multiple working sets in one session
to the daily best so the chart isn't noisy), and a coarse
relativeDays formatter for the picker's "vor 3 Tagen" hints.
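The math behind getE1rmTimeline is small enough to sketch — Epley estimated 1RM per set, collapsed to the daily best (function names and row shapes below are assumptions matching the description, not the shipped code):

```typescript
// Epley estimated one-rep max: weight × (1 + reps / 30).
function epley1rm(weight: number, reps: number): number {
  return weight * (1 + reps / 30);
}

interface SetSample { date: string; weight: number; reps: number }

// Collapse multiple working sets per day to the daily best e1RM so the
// chart plots one point per session instead of a noisy cluster.
function e1rmTimeline(sets: SetSample[]): { date: string; e1rm: number }[] {
  const bestByDay = new Map<string, number>();
  for (const s of sets) {
    const e = epley1rm(s.weight, s.reps);
    bestByDay.set(s.date, Math.max(bestByDay.get(s.date) ?? 0, e));
  }
  return [...bestByDay.entries()]
    .map(([date, e1rm]) => ({ date, e1rm }))
    .sort((a, b) => a.date.localeCompare(b.date));
}
```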
ListView re-composed: removed the dead phase-pill CSS, added
PhaseManager + RoutineManager + ExerciseProgressionChart sections,
left WorkoutLogger / WeightChart / DailyCheckCard / RecentWorkouts
in place. i18n keys for the new copy added to body/de.json and
body/en.json (it/fr/es fall back to the components' inline default
strings until translated).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two related bugs that caused user messages to disappear into the
ether: optimistic insert succeeds but neither the user message nor
the NPC reply ever shows up in PlayView, and no errors hit the
console because nothing actually throws.
Bug 1 — createdAt was never set
-------------------------------
The Dexie creating-hook in apps/mana/apps/web/src/lib/data/database.ts
auto-stamps userId and __fieldTimestamps but does NOT auto-stamp
createdAt. Module stores have to set it themselves. Chat gets away
with it because its query uses a simple conversationId index and
the type converter falls back to "now" — but I had the who store
omit createdAt entirely.
Bug 2 — composite index hides rows with undefined createdAt
-----------------------------------------------------------
queries.ts used .where('[gameId+createdAt]').between(...) against
the [gameId+createdAt] composite. Dexie does NOT index rows where
any compound key component is undefined, so even though the insert
succeeded and the row was physically in the table, the range query
returned an empty list. The liveQuery effect re-fired but found
nothing → no UI update. Same issue inside sendMessage's
history-fetch step.
Fix:
1. Set createdAt explicitly on insert in whoGamesStore (both
user message and NPC reply, +1ms on the reply so it sorts
strictly after even when both inserts land in the same ms)
2. Switch queries to .where('gameId').equals(id) and sort in JS
— same pattern as chat's useConversationMessages, robust
against missing createdAt
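The fix's two halves — explicit timestamps with a +1ms nudge, and a JS-side sort tolerant of legacy rows — sketched in plain TypeScript (Dexie itself omitted; the row shape is an assumption):

```typescript
interface WhoMessage { gameId: string; sender: "user" | "npc"; createdAt?: number }

// Stamp createdAt explicitly; the reply gets +1ms so it sorts strictly
// after the user message even when both inserts land in the same ms.
function stampPair(user: WhoMessage, reply: WhoMessage, now = Date.now()): void {
  user.createdAt = now;
  reply.createdAt = now + 1;
}

// Sort in JS instead of relying on a [gameId+createdAt] compound index,
// which silently drops rows whose createdAt is undefined. Legacy rows
// without a timestamp sort to the front rather than vanishing.
function sortByCreatedAt(rows: WhoMessage[]): WhoMessage[] {
  return [...rows].sort((a, b) => (a.createdAt ?? 0) - (b.createdAt ?? 0));
}
```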
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
In commit c9e16243c (the gemma3:4b → gemma4:e4b switch) I sloppily
wrote in the ManaServerBackend docstring that mana-llm "routes them
to the local Ollama instance on the Mac Mini (running on the M4's
Metal GPU)". That is wrong, and it's the exact misconception I had
to debug my way out of earlier the same day.
The actual topology — already documented correctly in
docs/MAC_MINI_SERVER.md and docs/WINDOWS_GPU_SERVER_SETUP.md, I
just didn't read those before writing the docstring:
mana-llm container's OLLAMA_URL points at host.docker.internal:13434
→ ~/gpu-proxy.py (Python TCP forwarder, LaunchAgent on Mac Mini)
→ 192.168.178.11:11434 (LAN)
→ Ollama on the Windows GPU server (RTX 3090, 24 GB VRAM)
→ Inference
The Mac Mini's brew-installed Ollama binary is NOT on the inference
path. It's just a CLI for inspecting the proxied daemon. Today's
"why does the Mac Mini still have Ollama 0.15.4" puzzle has the
answer "because nothing on the Mac Mini actually runs inference, the
binary version was never load-bearing".
Two doc fixes:
1. packages/shared-llm/src/backends/mana-server.ts
Replace the lying docstring with the real topology, including a
pointer to the two MAC_MINI_SERVER.md / WINDOWS_GPU_SERVER_SETUP.md
sections that document it. Also note that gemma4:e4b is a
reasoning model that emits message.reasoning when given enough
tokens (cross-reference to remote.ts's fallback parser).
2. packages/local-llm/CLAUDE.md
Add a paragraph at the top explaining the difference between
"@mana/local-llm" (browser tier, on-device) and the @mana/shared-llm
"mana-server" / "cloud" tiers (services/mana-llm proxy → gpu-proxy.py
→ RTX 3090). This was implicit before — "not related to
services/mana-llm" — but didn't say where mana-server actually
goes. Future me reading the doc would still have to dig through
the docker-compose env to find out.
No code changes — only docstring + markdown.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reconciles the in-repo cloudflared-config.yml with the actually-loaded
ingress map on the Mac Mini production tunnel — the previous repo file
was missing 30+ hostnames (per-app subdomains, mana-api, sync, llm,
media, credits, subscriptions, etc.) because it was last updated
before the unified Mana web app rollout. Adds the new mana-api.mana.how
ingress for apps/api on port 3060 so the unified backend has a public
client URL for the SvelteKit web app's PUBLIC_MANA_API_URL_CLIENT.
Drops the dead matrix.mana.how / element.mana.how routes — the matrix
subsystem was removed in 2514831a3 and those services no longer exist.
Adds scripts/mac-mini/sync-tunnel-config.sh — the one-command flow for
shipping a tunnel-config change: pull on the server, validate the
yaml, kickstart cloudflared via launchctl. setup-cloudflared-service.sh
already wires the launchd plist with --config <repo-path> pointing at
this file, so a fresh Mac Mini install + setup script + sync script
gives you a fully reproducible tunnel.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reasoning-style models (Gemma 4 E4B is the first one we use, but
DeepSeek R1, Gemini 2.5 thinking, etc. behave the same way) split
their output into two fields:
- message.content — the final answer
- message.reasoning — the chain-of-thought leading up to it
When the model is given too few max_tokens to finish reasoning AND
emit content, the response comes back with content="" and reasoning
populated with the half-finished thought. Verified empirically with
gemma4:e4b and `max_tokens: 10` on a "Sage Hi auf Deutsch in einem
Wort" prompt — content was "" while reasoning had "Here's a
thinking process to..." (cut off mid-thought).
For the title task this rarely matters because the system prompt is
directive enough to skip the thinking phase (verified: the same
gemma4:e4b returns clean 7-token titles like "Sonnenstrahlen
genießen heute" with the standard system prompt + max_tokens 32).
But it's
a real failure mode for any future task that uses a less-directive
prompt or hits a longer reasoning chain.
Defensive fix: prefer message.content first, fall back to
message.reasoning if content is empty. The fallback is a
string-or-nothing operation, no semantic interpretation — if the reasoning
field happens to contain a usable answer fragment, the caller's
cleanup chain (e.g. generateTitleTask's strip-quotes-and-dots
pipeline) will normalize it. If it's truly half-finished thought,
the caller's runRules fallback still kicks in via the existing
empty-result detection.
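The fallback itself is only a few lines — a hedged sketch of the string-level preference (the real parser lives in remote.ts; the function name and message shape here are assumptions):

```typescript
interface LlmMessage { content?: string; reasoning?: string }

// Prefer the final answer; fall back to the chain-of-thought only when
// content came back empty (the truncated-max_tokens failure mode).
// No semantic interpretation — the caller's cleanup / runRules chain
// handles a half-finished thought like any other unusable result.
function pickReplyText(message: LlmMessage): string {
  const content = message.content?.trim() ?? "";
  if (content.length > 0) return content;
  return message.reasoning?.trim() ?? "";
}
```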
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Builds the user-facing surface on top of the data layer landed in the
previous commit. After this commit the Body module is reachable at
/body and surfaces an at-a-glance tile on the customizable dashboard.
Components (lib/modules/body/components/):
- SetRow — inline editable set with weight/reps/RPE/warmup/delete.
Local $state mirrors the prop and re-syncs via $effect when the
parent re-emits the row through liveQuery.
- WorkoutLogger — active-session console. Groups sets by exercise,
pre-fills the next-set form from the most recent working set on
the same exercise so progressive overload is one tap.
- MeasurementForm — quick-log with type picker; unit auto-follows
(kg for weight/muscle, % for body fat, cm for circumferences).
- WeightChart — pure SVG line chart, no chart-lib dependency.
Auto-scales the y-axis with padding so flat-line periods don't
collapse to a single horizontal line.
- DailyCheckCard — 1-5 dot buttons for energy/sleep/soreness/mood,
upserts to bodyChecks per day so re-tapping overwrites today.
- RecentWorkouts — finished sessions with set count, total volume,
duration.
ListView.svelte composes everything into the main view: active
workout console when running (otherwise a "start" CTA), weight
chart + measurement form, today's daily check card, recent
workouts. Phase pill in the header (Cut/Bulk/Maintenance) with
color-coded background.
Route (routes/(app)/body/):
- +layout.svelte sets seven contexts via the useAllBody*() hooks
so child pages get observable streams without prop drilling.
- +page.svelte renders ListView.
i18n (lib/i18n/locales/body/):
- de/en/it/fr/es JSON files with title, subtitle, workout state,
measurement.* (10 types), check.* (4 fields), phase.* (4 kinds),
log/finish/start strings.
- Registered in lib/i18n/index.ts alongside the other module dicts.
Dashboard widget (lib/modules/body/widgets/BodyStatsWidget.svelte):
- Surfaces latest weight + delta vs the previous reading, plus
either the active workout (with today's set count + volume) or
a "start workout" CTA when idle.
- Reads bodyMeasurements / bodyWorkouts / bodySets directly via
liveQuery + decryptRecords (same pattern as NewsUnreadWidget).
- Wired into widget-registry.ts as 'body-stats', registered in
types/dashboard.ts WIDGET_REGISTRY with 💪 icon and the new
'body' requiredBackend tier.
- Strings added under dashboard.widgets.body_stats.* in all five
locales.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the unified Body module that merges what would otherwise be two
separate apps (fitness + bodylog) into one. The value lives in their
intersection: tracking lifts alongside bodyweight is what enables
real progressive-overload + recomp insights, and shared primitives
(charts, time series, units, photos) avoid duplicating UI surface.
This commit lands only the data layer + module registration so the
follow-up UI / route / dashboard widget can build on a stable
foundation.
Tables (db.version(2), already in place):
bodyExercises — exercise library (Squat, Bench, Deadlift, OHP,
Row, Pull-Up seeded as presets)
bodyRoutines — saved workout templates
bodyWorkouts — one logged training session
bodySets — set rows inside a workout, indexed [workoutId+order]
bodyMeasurements — weight + measurements over time, indexed [type+date]
bodyChecks — daily energy/sleep/soreness/mood self-rating,
upserted per day
bodyPhases — cut/bulk/maintenance/recomp phase markers, with
auto-close on phase change so the "active phase"
view always has at most one open row
Encryption (registry.ts): all 7 tables flipped to enabled. Health
data is GDPR Art. 9 special-category, so user-typed text + the
sensitive numeric fields (weight, reps, value, startWeight,
targetWeight, energy/sleep/soreness/mood) are wrapped. Indexed
columns (ids, FKs, ordering, dates, kind/type/equipment enums)
stay plaintext so the existing query layer keeps working without
decrypt-on-every-row.
Module wiring:
- bodyModuleConfig added to module-registry.ts
- Body app entry registered in shared-branding mana-apps.ts
(red→orange icon to set it apart from the green health-adjacent
modules and the pink cycles icon)
- APP_ICONS.body added (dumbbell + heart-pulse hybrid SVG)
Also captures the broader module-ideas brainstorm in
docs/future/MODULE_IDEAS.md and marks fitness + bodylog as merged
into the new body module.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
JSDOM throws CSS / parser errors from detached parse5 callbacks that
escape every try/catch in the call stack and even bun's
process.on('uncaughtException') handlers — leaving the daemon stuck
crash-looping on the first bad page in source #4 (heise) without
ever making forward progress.
Set FULL_TEXT_THRESHOLD_WORDS = 0 so we never call into Readability.
Sources that ship full RSS bodies (Tagesschau, Spiegel, BBC, …) are
unaffected. Title-only sources (Hacker News) keep the row with an
empty content field; the reader already falls back to "Original
öffnen ↗" in that case.
Re-enabling extraction in a worker thread is left for a follow-up.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The PlayView's send() catch sets a local `error` state which renders
as a small banner near the input — easy to miss when the chat area
is the first thing the eye looks at after pressing send. Add an
explicit console.error so the next time something goes wrong end
to end, the actual exception is one DevTools tab away instead of
"my message disappeared and I have no idea why".
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
JSDOM's CSS parser throws on plenty of real-world pages and the error
escapes every try/catch in the buildRow → ingestSource chain because
it fires from a parse5 callback that runs after JSDOM has returned.
In the prod container this killed the process on the first bad page,
docker restarted it, and it crash-looped on the same first source
forever — no progress past tech.
Two-layer fix: a silent VirtualConsole on every JSDOM instance to
swallow CSS / resource errors at the source, plus process-level
uncaughtException + unhandledRejection handlers that log and continue
so any future async escape can't kill the daemon either.
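The process-level half of the fix is plain Node/Bun API — a sketch (the jsdom VirtualConsole half is omitted here since it's jsdom-specific; the log wording is an assumption):

```typescript
// Last-resort guards: log and continue instead of letting an async
// escape (e.g. a parse5 callback firing after JSDOM has returned) kill
// the daemon and put docker into a restart loop on the same page.
function installCrashGuards(log: (msg: string) => void = console.error): void {
  process.on("uncaughtException", (err) => {
    log(`uncaughtException (daemon continues): ${String(err)}`);
  });
  process.on("unhandledRejection", (reason) => {
    log(`unhandledRejection (daemon continues): ${String(reason)}`);
  });
}
```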
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a contextMenuActions entry to the nutriphi registerApp() block
matching the convention todo / calendar / contacts / habits / notes /
dreams / cycles all use: a Plus-icon "Neue Mahlzeit" action that
dispatches a window CustomEvent('mana:quick-action', { app: 'nutriphi',
action: 'new' }).
Note: there is currently no registered listener for mana:quick-action
in the codebase — every existing module dispatches it but nothing
consumes it yet (presumably waiting for a central handler in the
workbench shell). Adding the entry now keeps nutriphi consistent with
the convention so it will light up automatically once the listener
lands.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The who module landed with whoGames + whoMessages declared inside
db.version(1). That's wrong: existing browsers (every tester
including the developer's own) already had Dexie persisted at v1
with the OLD schema (no who tables). When the new bundle declared
v1 with a different schema, Dexie refused the schema diff and the
optimistic insert in whoGamesStore.sendMessage silently failed —
neither the user's message nor the server reply appeared in the
PlayView, even though the deck picker and game start worked
(those write whoGames which has the same schema-mismatch issue
but the failure is only visible once a chat starts).
The pre-launch cleanup doc says "edit version(1) directly until
launch", but in practice that bricks every developer's local
state on every additive change. The right rule is: bump the
version for additive table additions even pre-launch — Dexie
handles the additive case cleanly with no upgrade function.
This commit:
- Removes whoGames + whoMessages from db.version(1)
- Adds them to a new db.version(3) block (v2 was already taken
by the bodyExercises / bodyRoutines / etc. body module)
- Existing IndexedDB databases at v1 or v2 will run the
additive upgrade automatically on next page load. No data
loss, no upgrade function needed (no rows to migrate yet).
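Why the additive bump is safe can be illustrated with the union rule Dexie applies — a toy model, not Dexie code (the store definitions are abbreviated examples):

```typescript
// Toy model of Dexie's schema resolution: a fresh database opens at the
// highest version and sees the union of every version block's stores;
// an existing database replays only the blocks above its current version.
type Stores = Record<string, string>;

function effectiveSchema(versionBlocks: Stores[]): Stores {
  // Later blocks win on conflict, mirroring "never edit a published version".
  return versionBlocks.reduce((acc, block) => ({ ...acc, ...block }), {});
}

const v1 = { chats: "id, conversationId" };
const v2 = { bodyExercises: "id, muscleGroup" };
const v3 = { whoGames: "id", whoMessages: "id, gameId" };

const schema = effectiveSchema([v1, v2, v3]);
// A browser persisted at v2 replays only [v3] and gains the who tables;
// a fresh browser gets the full union in one go.
```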
Also: add a console.error to PlayView's send() catch so future
sendMessage failures actually show up in DevTools instead of
only being visible as a tiny error banner near the input.
Fixes the "ich tippe eine frage und nichts passiert" ("I type a
question and nothing happens") symptom
the developer hit on the first end-to-end live test of the who
module on production.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds nutriphi to the unified quick-input registry so the global search
bar gains meal-aware behaviour whenever the user is on a /nutriphi route.
Adapter contract (mirrors planta / todo / calendar):
- onSearch: decrypts meals (description is in the encrypted allowlist)
and substring-matches by description, sorted newest-first, capped at 10
- onCreate: parses an optional meal-type prefix from the query
("frühstück: müsli mit beeren", "snack: apfel", english + ASCII
variants accepted) and falls back to suggestMealType() based on
time-of-day when no prefix is given
- onParseCreate: shows a preview line so users see which meal type
will be picked before they hit enter
Persistence goes through mealMutations.create — same code path the
workbench card uses, so encryption + sync work for free.
Tests: 13 cases covering parser branches (German + English prefixes,
case insensitivity, time-of-day fallback for the three meal windows,
edge cases like unknown prefixes, far-away colons, empty descriptions
after a prefix). Parser is exported to keep the test independent of
the adapter's network-touching hooks.
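The parser contract, sketched in isolation (the prefix table, the colon-distance cutoff and the time-of-day window boundaries below are assumptions — the real table lives in the adapter):

```typescript
type MealType = "breakfast" | "lunch" | "dinner" | "snack";

// Assumed prefix table; the real adapter accepts more English + ASCII variants.
const PREFIXES: Record<string, MealType> = {
  "frühstück": "breakfast", fruehstueck: "breakfast", breakfast: "breakfast",
  mittagessen: "lunch", lunch: "lunch",
  abendessen: "dinner", dinner: "dinner",
  snack: "snack",
};

// Assumed time-of-day windows for the no-prefix fallback.
function suggestMealType(hour: number): MealType {
  if (hour < 11) return "breakfast";
  if (hour < 15) return "lunch";
  if (hour < 22) return "dinner";
  return "snack";
}

function parseMealInput(query: string, hour: number) {
  const colon = query.indexOf(":");
  if (colon > 0 && colon <= 12) {       // far-away colons are not prefixes
    const prefix = query.slice(0, colon).trim().toLowerCase();
    const type = PREFIXES[prefix];
    if (type) return { mealType: type, description: query.slice(colon + 1).trim() };
  }
  return { mealType: suggestMealType(hour), description: query.trim() };
}
```

An unknown prefix like "foo: bar" deliberately falls through to the time-of-day fallback with the full query kept as the description.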
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Was copied verbatim from mana-credits' template but not actually
imported anywhere in src/. Removing it lets the Docker build's bun
install resolve from npm only — workspace:* refs need the full
monorepo context which the Dockerfile doesn't copy.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two surprises came out of "why do we still use Gemma 3 instead of 4":
1. The hardcoded default in ManaServerBackend was `gemma3:4b`, which
was even smaller than mana-llm's actual server-side default of
`gemma3:12b`. My initial guess from docs/LOCAL_LLM_MODELS.md was
conservative.
2. The mana-llm OLLAMA_URL points at host.docker.internal:13434,
which is NOT the Mac Mini's local Ollama — it's a Python TCP
forwarder (~/gpu-proxy.py) that proxies to 192.168.178.11:11434
on the Windows GPU server. So title generation has been running
on the RTX 3090 the whole time, not on the M4 Metal GPU. The
Mac Mini's brew-installed ollama 0.15.4 wasn't even being used
for inference — only as a CLI to inspect the proxied Ollama.
To get to Gemma 4, both Ollama instances needed an upgrade:
- Mac Mini brew : 0.15.4 → 0.20.4 (cosmetic, the binary isn't on
the inference path; upgraded for consistency)
- GPU server : 0.18.2 → 0.20.4 via winget. Required restarting
the daemon via the OllamaServe scheduled task
that was already configured.
Then `ollama pull gemma4:e4b` on the GPU server (9.6 GB, ~10 min on
the LAN). Verified end-to-end via the proxy with a real chat
completion request to mana-llm — gemma4:e4b answered with a clean
4-word German title for a sample voice memo prompt:
prompt: "Erstelle einen kurzen 3-Wort Titel für: Es ist ein
schöner Tag heute am 9. April"
→ "Schöner Tag, neuntes April"
Changes in this commit:
packages/shared-llm/src/backends/mana-server.ts
- defaultModel: 'gemma3:4b' → 'gemma4:e4b'
- Updated docstring to explain why E4B is the right Mana-Server
tier default: 9.6 GB on disk, 128K context, "Effective 4B"
arch punches above its weight class for German prompts, and
the family stays consistent with the browser tier (Gemma 4
E2B is the smaller sibling) so the source label and prompt
behavior remain coherent across tiers.
apps/mana/apps/web/src/lib/modules/memoro/views/DetailView.svelte
- TITLE_SOURCE_LABELS map updated:
browser → "Auf deinem Gerät (Gemma 4 E2B)" (was "(Gemma 4)")
mana-server → "Mana-Server (Gemma 4 E4B)" (was "(gemma3:4b)")
- The label now reflects that BOTH the browser and the mana-server
tier are running Gemma 4 variants, which is more honest than
the previous mix.
Did NOT change:
- The Ollama OLLAMA_DEFAULT_MODEL env var in docker-compose.macmini.yml
(still gemma3:12b). That's the fallback for callers who don't
specify a model in their request. Our generate-title task always
sends an explicit model string, so it's unaffected. Bumping the
global default is a separate decision — it would change behavior
for the playground module and any other consumer that relies on
the implicit fallback.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the test that would have caught the inventar↔inventory drift
months earlier (commit 45790ffbb fixed the actual mismatch). Walks
both directions:
1. Every workbench-registered app must have a MANA_APPS entry, OR
be in the WORKBENCH_ONLY allowlist (currently `automations`,
`playground` — internal devtools we don't want in marketing).
2. Every MANA_APPS entry must be registered in the workbench, OR
be in the BRANDING_ONLY allowlist (`mana` itself, standalone
subdomains like `arcade`, "Coming Soon" placeholders like
`wisekeep`/`mail`/`events`, and modules whose workbench
integration is still pending like `guides`/`who`).
Plus a regression guard that fails loudly if anyone reintroduces
`inventar` as an id in either registry.
The point: every future drift between the two registries forces the
contributor to either fix it on the spot or explicitly classify the
new entry in one of the allowlists with a comment. No more silent
fail-open tier-gating.
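The two-direction walk reduces to set differences — a hedged sketch of the core assertion logic (ids and allowlists abbreviated; the real test iterates the actual registries):

```typescript
// Core of the cross-registry guard: every id must exist on both sides
// or be explicitly classified in an allowlist. Ids shown are examples.
function findDrift(
  workbenchIds: Set<string>,
  brandingIds: Set<string>,
  workbenchOnly: Set<string>,
  brandingOnly: Set<string>,
): string[] {
  const drift: string[] = [];
  for (const id of workbenchIds) {
    if (!brandingIds.has(id) && !workbenchOnly.has(id)) {
      drift.push(`workbench app '${id}' missing from MANA_APPS`);
    }
  }
  for (const id of brandingIds) {
    if (!workbenchIds.has(id) && !brandingOnly.has(id)) {
      drift.push(`MANA_APPS entry '${id}' not registered in workbench`);
    }
  }
  return drift;
}
```

An inventar/inventory split with empty allowlists produces two drift entries — exactly the fail-loud behavior the test enforces.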
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench card was read-only — users had to navigate to /nutriphi/add
to log anything. Now the card has a quick-add bar in the toolbar slot:
- Text input → Enter or send button → mealMutations.create() with
suggestMealType() (no AI round-trip; users get instant persistence
and can edit nutrition later from the detail page)
- 📷 button → file picker (capture=environment for mobile camera) →
photoMutations.uploadAndAnalyze → mealMutations.createFromPhoto with
the full Gemini result (foods + thumbnail + confidence)
- Toast on success ("📷 Mahlzeit hinzugefügt · KI 87%") and on error
Item rendering also got a small upgrade:
- Each row is now a link to /nutriphi/[id] (matches the rest of the
nutriphi pages now that the detail route exists)
- Thumbnail shown next to the row when present (uses photoThumbnailUrl
for bandwidth)
- 📷 indicator badge for photo-mode meals
Pre-existing bug fix in passing: the goals query was reading from the
non-existent table 'nutriphiGoals' instead of 'goals' (the actual table
name from module.config.ts), so the calorie target was never visible
in the workbench card. Switched to 'goals'.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Surfaces News in two extra entry points beyond the dedicated /news
route. The workbench ListView is a compact ranked-feed view designed
for the AppPage carousel slot — it boots the same feed-cache poll, runs
the same scoreArticle pipeline, but renders smaller cards and skips the
onboarding wizard (un-onboarded users get a CTA pointing them at /news
instead). The NewsUnreadWidget shows the top three ranked unread
articles on the dashboard, sharing the exact same engine inputs so the
ordering matches the main feed. WidgetType + WIDGET_REGISTRY get the
new 'news-unread' entry, and dashboard.widgets.news_unread is added to
all five locale files.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the seven (app)/news/* routes: layout that boots the feed-cache
poll, main page with the 3-step onboarding wizard and the ranked feed
with reaction buttons, dual-source reader at /news/[id], saved reading
list with category filter strip + inline category editor + 3 tabs
(unread/favorites/archive), /news/add for ad-hoc URL paste,
/news/preferences for topics/languages/weight reset, /news/sources
for per-source block toggles. Five locale JSON files (de/en/es/fr/it,
~60 keys each) for the eventual $_('news.…') refactor.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the local-first News module: 5 Dexie tables (newsArticles,
newsCategories, newsPreferences, newsReactions, newsCachedFeed) with
the cached pool intentionally outside the sync map, four mutation
stores (articles, categories, preferences, reactions, feed-cache),
typed DTOs + queries with decryption-aware liveQueries, the api.ts
client for /api/v1/news/{feed,extract}, and the pure feed-engine that
scores articles by recency × topicWeight × sourceWeight and applies
reaction-driven weight updates client-side.
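The scoring pipeline is pure, so it can be sketched directly — with an assumed exponential recency decay and clamp constants (the shipped feed-engine's curve and bounds may differ):

```typescript
interface ScoreInput {
  publishedAt: number;          // epoch ms
  topicWeight: number;          // learned from reactions, default 1
  sourceWeight: number;         // learned from reactions, default 1
}

// recency × topicWeight × sourceWeight; the 24h exponential half-life
// is an assumption for illustration, not the shipped constant.
function scoreArticle(a: ScoreInput, now = Date.now()): number {
  const ageHours = Math.max(0, (now - a.publishedAt) / 3_600_000);
  const recency = Math.pow(0.5, ageHours / 24);
  return recency * a.topicWeight * a.sourceWeight;
}

// Reaction-driven weight update: nudge toward 👍 / away from 👎,
// clamped so a handful of reactions can't dominate the ranking.
function updateWeight(weight: number, liked: boolean, step = 0.1): number {
  const next = liked ? weight + step : weight - step;
  return Math.min(3, Math.max(0.1, next));
}
```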
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the services/news-ingester Bun service that pulls 25 public RSS/JSON
feeds into news.curated_articles every 15 min, with Mozilla Readability
fallback for thin RSS bodies and 30-day retention. apps/api /feed is
rewritten to read from the new pool table directly instead of the
sync_changes hack, with topics/lang/since/limit/offset query params.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench-registry app id 'inventar' did not match its
@mana/shared-branding MANA_APPS counterpart 'inventory', so the tier-
gating join in apps/web/src/lib/app-registry/registry.ts silently
failed for the inventory module — it fell into the "no MANA_APPS
entry, default visible" fallback and was effectively un-gated. The
codebase had also voted overwhelmingly for 'inventar' (53 files) vs
'inventory' (3 files in shared-branding), so the long-standing
mismatch was just bookkeeping debt waiting to bite.
Pre-release, no live data, so the cleanest fix is to align everything
on the English 'inventory':
- Workbench-registry id, module.config.ts appId, module folder, route
folder and i18n locale folder all renamed via git mv
- Standalone apps/inventar/ workspace package renamed
- All imports, store identifiers (InventarEvents → InventoryEvents,
INVENTAR_GUEST_SEED, inventarModuleConfig), i18n keys and href/goto
paths follow the rename
- The German display label "Inventar" is preserved everywhere it is a
user-visible string (page titles, i18n values, toast labels)
- Dexie table prefixes (invCollections, invItems, …) are unchanged
- Drive-by fix: ListView.svelte was querying non-existent
inventarCollections/inventarItems tables — corrected to the actual
invCollections/invItems names from module.config
- The "inventar ↔ inventory id mismatch" workaround comment in
registry.ts is removed since the mismatch no longer exists
module-registry.ts also picks up the user's parallel newsModuleConfig
addition because both edits land in the same import block — keeping
them split would have left the build in an inconsistent state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The who module's chat endpoint was returning 502 to the browser
because mana-api called /api/v1/chat/completions on mana-llm and
got 404 — mana-llm exposes the OpenAI-compatible /v1/chat/completions
path with no /api/ prefix.
This is the same bug the research module had until commit 63a91e36a fixed its
path. The chat module (apps/api/src/modules/chat/routes.ts) still
has the wrong path — flagged as a follow-up.
Diagnostic from inside the mana-api container:
/v1/chat/completions → 422 (right path, empty body)
/api/v1/chat/completions → 404 (wrong path)
mana-api log line that flagged it:
who.llm_non_200 status:404
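In sketch form, the corrected upstream path construction looks like the
following — the helper name and base URL are illustrative, not the actual
code in the who module's routes:

```typescript
// mana-llm exposes the OpenAI-compatible surface, i.e. /v1/chat/completions
// with no /api/ prefix. This hypothetical helper normalizes a base URL
// (trailing slashes stripped) onto that path.
function chatCompletionsUrl(llmBase: string): string {
  return `${llmBase.replace(/\/+$/, "")}/v1/chat/completions`;
}

// e.g. chatCompletionsUrl("http://mana-llm:8000")
//   -> "http://mana-llm:8000/v1/chat/completions"
```

The pending chat-module follow-up would be the same one-line path change.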
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Five new cases against fake-indexeddb covering the new update mutation:
- patches description and re-encrypts (verified via ENC_PREFIX wire
format check + absence of original AND new plaintext in the blob)
- patches numeric nutrition fields (stays plaintext for aggregation)
- partial update only touches the supplied fields (mealType change
does not zero out nutrition)
- bumps updatedAt
- throws on missing id
Total nutriphi suite is now 16/16 cases, ~50ms.
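The partial-update semantics under test can be sketched with a plain Map
standing in for the Dexie table — fake-indexeddb, encryption, and the real
mealMutations are out of scope here, and the field names are illustrative:

```typescript
type LocalMeal = { id: string; mealType: string; calories: number; protein: number; updatedAt: number };

const meals = new Map<string, LocalMeal>();
meals.set("m1", { id: "m1", mealType: "lunch", calories: 520, protein: 31, updatedAt: 1 });

// Patch only the supplied fields and bump updatedAt, mirroring the
// behavior the suite asserts.
function update(id: string, patch: Partial<Omit<LocalMeal, "id" | "updatedAt">>): LocalMeal {
  const row = meals.get(id);
  if (!row) throw new Error(`meal ${id} not found`); // "throws on missing id" case
  const next: LocalMeal = { ...row, ...patch, updatedAt: Date.now() };
  meals.set(id, next);
  return next;
}

// A mealType change must not zero out nutrition:
const after = update("m1", { mealType: "dinner" });
```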
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New /nutriphi/[id] route — the missing detail page of the photo workflow.
Loads the meal via inline useLiveQueryWithDefault(loadMealById, ...) so
the closure captures page.params.id directly (planta DetailView pattern).
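The closure shape can be sketched as follows — useLiveQueryWithDefault and
loadMealById are stubbed with synchronous stand-ins (the real helper is
reactive and Dexie-backed), so only the param-capturing pattern is shown:

```typescript
type Meal = { id: string; description: string };

const meals = new Map<string, Meal>([["42", { id: "42", description: "Salat" }]]);

function loadMealById(id: string): Meal | undefined {
  return meals.get(id); // real helper queries Dexie asynchronously
}

function useLiveQueryWithDefault<T>(querier: () => T | undefined, fallback: T): T {
  return querier() ?? fallback; // real helper subscribes to table changes
}

const page = { params: { id: "42" } }; // stand-in for the SvelteKit page store

// The route param is captured directly inside the querier closure
// (planta DetailView pattern), so the query always targets the right id:
const meal = useLiveQueryWithDefault(() => loadMealById(page.params.id), { id: "", description: "" });
```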
Detail page features:
- Full-resolution photo, click-to-expand lightbox modal
- All six nutrient cards with the same color tokens as the dashboard
- "Erkannte Bestandteile" — list of AI-identified foods (name +
quantity + kcal) so users can see what Gemini actually parsed
- Inline edit form (mealType + description + 6 nutrient inputs),
  persists via mealMutations.update
- "🔄 Erneut analysieren" for photo meals — calls analyze on the
stored URL and overwrites description + nutrition without
re-uploading the file
- Two-stage delete confirm
Add page (add/+page.svelte):
- Captures upload.thumbnailUrl + analysis.foods after the AI analysis
- Persists both via the extended createFromPhoto signature
- Shows the foods breakdown card under the confidence badge so users
see the parse before saving (closes the trust gap on low-confidence
runs)
List pages (Heute + History):
- Switch to photoThumbnailUrl ?? photoUrl for the row image — saves
bandwidth on the most-rendered surface
- Each meal row is now a link to /nutriphi/[id]
- History row layout split into <a> + sibling delete button so the
delete click doesn't bubble through navigation
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Schema additions on LocalMeal:
- photoThumbnailUrl: pre-generated mana-media thumbnail URL, used in
list views to save bandwidth (full photoUrl stays for the detail
view + lightbox)
- foods: AnalyzedFood[] (name / quantity / calories) — Gemini Vision
already returns this breakdown but the previous flow threw it away
- new AnalyzedFood type exported from the barrel
Encryption registry:
- meals encrypted allowlist now includes 'foods' (food names are
user content; aes.ts JSON-stringifies arrays before wrap, so an
array value works the same as a string)
- registry comment updated to enumerate which photo fields stay
plaintext and why
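The array-value point can be sketched like this — toWire/fromWire are
illustrative names for the stringify step that runs before the actual AES
wrap in aes.ts, which is not reproduced here:

```typescript
type AnalyzedFood = { name: string; quantity: string; calories: number };

// Non-string values are JSON-stringified before wrapping, so an
// AnalyzedFood[] takes the same code path as a plain string field.
function toWire(value: unknown): string {
  return typeof value === "string" ? value : JSON.stringify(value);
}

function fromWire(plaintext: string): unknown {
  try {
    return JSON.parse(plaintext); // was an array/object field
  } catch {
    return plaintext; // was a plain string field
  }
}

const foods: AnalyzedFood[] = [{ name: "Apfel", quantity: "1 Stück", calories: 52 }];
const wire = toWire(foods); // a string, ready for the AES wrap
const back = fromWire(wire); // parses back to the array shape
```

(The real aes.ts also has to disambiguate string fields that happen to look
like JSON; this sketch glosses over that.)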
New mutation: mealMutations.update(id, dto) for inline meal edits.
Patches only the supplied fields, runs encryptRecord on the partial
update so encrypted columns stay encrypted, then re-decrypts the merged
row to return a plaintext snapshot.
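That flow — encrypt the partial dto, merge it over the encrypted-at-rest
row, decrypt the merged row for the return value — can be sketched with a
reversible stand-in for aes.ts (the reverse() "cipher" and the enc:: marker
are illustrative, not the real wire format; "description" stands for the
meals encrypted allowlist):

```typescript
const ENC_PREFIX = "enc::"; // illustrative marker, not the real wire format
const ENCRYPTED = new Set(["description"]);

const wrap = (s: string) => ENC_PREFIX + [...s].reverse().join("");
const unwrap = (s: string) => [...s.slice(ENC_PREFIX.length)].reverse().join("");

type Row = Record<string, unknown>;

// Wrap only allowlisted string fields; everything else stays plaintext.
function encryptRecord(partial: Row): Row {
  return Object.fromEntries(
    Object.entries(partial).map(([k, v]) =>
      ENCRYPTED.has(k) && typeof v === "string" ? [k, wrap(v)] : [k, v]
    )
  );
}

function decryptRecord(row: Row): Row {
  return Object.fromEntries(
    Object.entries(row).map(([k, v]) =>
      ENCRYPTED.has(k) && typeof v === "string" && v.startsWith(ENC_PREFIX) ? [k, unwrap(v)] : [k, v]
    )
  );
}

const table = new Map<string, Row>();
table.set("m1", encryptRecord({ id: "m1", description: "Müsli", calories: 300 }));

function update(id: string, dto: Row): Row {
  const stored = table.get(id)!; // real mutation throws on a missing id
  const merged = { ...stored, ...encryptRecord(dto) }; // encrypted columns stay encrypted
  table.set(id, merged);
  return decryptRecord(merged); // plaintext snapshot for the caller
}

const snapshot = update("m1", { description: "Brot" });
```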
queries.ts: new loadMealById(id) helper used by the detail page's
inline useLiveQueryWithDefault wrapper (matches the planta DetailView
pattern of capturing the route param directly in the closure).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>