Commit graph

34 commits

Author SHA1 Message Date
Till JS
54a12ffd5c feat(webapp): wire isParallelSafe in Companion chat + Mission runner
Enables the M1 parallel-reads optimisation on the webapp side. Both
consumers of runPlannerLoop pass an isParallelSafe predicate derived
from the tool catalog:

  isParallelSafe: (name) =>
    AI_TOOL_CATALOG_BY_NAME.get(name)?.defaultPolicy === 'auto'

Auto-policy tools (list_tasks, get_habits, nutrition_summary, …) run
via Promise.all in batches of 10 when the LLM fans them out in one
round. Propose-policy tools — which surface to the user as Proposal
cards — stay sequential so intent ordering in the inbox is preserved
and pre-execute guardrails can reason about prior-step state.

Tests: 31 existing companion + mission tests pass unchanged; the
parallel path is exercised via the new loop.test.ts cases shipped
with the M1 commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 14:11:24 +02:00
Till JS
89258eb451 feat(profile,api): meImages foundation for AI reference generation (M1)
M1 of docs/plans/me-images-and-reference-generation.md — a user-owned
pool of reference images (face, fullbody, hands, …) that will back
image generation where the user appears as themselves (outfit try-on,
glasses, portraits) via OpenAI /v1/images/edits. Data layer only in
this commit; UI lands in M2, the edits endpoint in M3.

- Dexie v38: meImages table with id/kind/primaryFor/createdAt indices.
  Added to USER_LEVEL_TABLES so the hook stamps userId and skips the
  spaceId/authorId/visibility trio (one human = one face across every
  Space, not per-Space).
- Encryption registry: label + tags encrypted; kind/primaryFor/usage
  stay plaintext because they drive the indexed queries and the
  Reference picker's filtering. mediaId/URLs/dimensions are structural.
- Profile module store: createMeImage, updateMeImage,
  setAiReferenceEnabled (per-image KI opt-in — plan decision #5),
  setPrimary (transactional slot swap — only one row per primary slot),
  deleteMeImage. Emits MeImage* domain events.
- Queries: useAllMeImages, useMeImagesByKind, useReferenceImages
  (only the rows the user opted in for KI), useImageByPrimary.
- POST /api/v1/profile/me-images/upload: thin wrapper over mana-media
  with app='me' as the reference tag. No new MinIO bucket — plan
  decision #1 revised after verifying mana-media uses one bucket and
  only tags references by app.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 13:50:53 +02:00
Till JS
3a68a63728 feat(picture,api): GPT-Image-2 image generation
Adds a third provider path to /api/v1/picture/generate that calls OpenAI
gpt-image-2 when model starts with "openai/". Supports n=1..4 batch
generation with character continuity, base64 response decoded server-side
and uploaded to mana-media for dedup + thumbnails. Credit cost scales
by quality (low=3, medium=10, high=25) × n.

Env plumbing:
- scripts/generate-env.mjs: new apps/api/.env stanza propagates
  OPENAI_API_KEY + REPLICATE_API_TOKEN from .env.secrets
- .env.macmini.example: documents OPENAI_API_KEY for prod

Frontend /picture/generate: model + quality + aspect-ratio + batch-count
selectors, real fetch with auth, persists each image via imagesStore.insert
(encrypted + synced). Wrapped in ModuleShell variant=fill with back-arrow
to /picture and a live credit badge in the header actions slot.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 00:37:15 +02:00
Till JS
efe1810b04 feat(articles): browser-HTML bookmarklet + consent-wall detection + auto-save
Three intertwined improvements so the "save an article" flow actually
works on real-world sites, not just bloggy happy-path URLs.

=== Consent-wall detection ===

apps/api/src/modules/articles/routes.ts: the /extract response now
includes `warning: 'probable_consent_wall'` when the extracted text
is both short (<300 words) AND contains cookie-dialog vocabulary
(Cookies zustimmen / cookie consent / Zustimmung / accept all cookies
/ enable javascript / privacy center / Datenschutzeinstellungen). The
server still returns whatever it got so the client can decide; it just
flags it as probably-not-the-article.

Frontend surfaces that warning prominently instead of silently
persisting a "Cookies zustimmen…" blob as the article body.

=== Browser-HTML extract path ===

Server-side: new POST /api/v1/articles/extract/html endpoint accepting
{ url, html }, running @mana/shared-rss's extractFromHtml on the
caller-supplied HTML. 10 MiB payload cap. Same response shape as
/extract, including the consent-wall warning (in case the bookmarklet
fires before the user dismisses the dialog).

Client-side: new extractFromHtml() in api.ts with the same 25s
timeout + typed network-error mapping as extractArticle.

AddUrlForm gains a postMessage handshake: when loaded with
?source=bookmarklet, it posts `mana-ready` to window.opener and
listens one-shot for `mana-html` with { url, html, title } from the
opener's tab. The HTML goes straight to our own /extract/html
endpoint — same-origin, carries the user's auth cookie. No CORS, no
form-submission CSP tango, no cross-origin token smuggling. If
nothing arrives within 30s we surface a clear error instead of
hanging.

Settings page adds a second "browser-HTML" bookmarklet (marked as
"Empfohlen") alongside the legacy URL bookmarklet. New snippet opens
/articles/add?source=bookmarklet in a new tab, waits for mana-ready,
then postMessages the tab's documentElement.outerHTML over. 15s
safety timeout.

This bypasses cookie-consent walls and soft paywalls because the
HTML already comes from the user's own authenticated, consented
browser tab.

=== Auto-save after successful extract ===

Previously every save path had a two-click UX: preview → confirm.
Now on clean extract the preview skips straight to persist + navigate
to the reader. Consent-wall warning is the only fallback that pauses
the flow — the user gets a "Trotzdem speichern" button to opt into
saving a teaser anyway.

Button in the manual input row is renamed "Vorschau abrufen" → "Speichern"
since it's now the commit action, not the inspect action. Loading-block
messaging distinguishes "Server extrahiert…" vs "Speichere in deine
Leseliste… Gleich weiter zum Reader."

Net click count:
  Bookmarklet v1/v2 on working site:  2 clicks → 1 click
  Manual paste:                        2 clicks → 1 click
  Consent-wall fallback:              2 clicks (explicit "Trotzdem")
  Duplicate:                          2 clicks ("Zum gespeicherten
                                        Artikel")

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 15:29:53 +02:00
Till JS
3357e88a1c feat(articles): new read-it-later module — save / read / highlight
Pocket-style module for saving arbitrary web URLs, extracting readable
content server-side via @mana/shared-rss (Readability + JSDOM), and
storing it AES-GCM encrypted in IndexedDB for offline reading.

M1 skeleton: Dexie v33 (articles, articleHighlights, articleTags),
crypto registry entries, module registration, app-registry entry with
orange icon, empty-state ListView. articleTags is a pure junction
into the existing globalTags system (appId 'tags') — same pattern as
noteTags, eventTags, placeTags.

M2 URL save + reader: POST /api/v1/articles/extract (one endpoint,
not two — client caches the preview payload to avoid a double
server fetch). AddUrlForm with scope-aware dedupe, DetailView with
ReaderView typography shell (serif/sans, light/sepia/dark, size
slider), auto-tracked reading progress with scroll restore.

M3 highlights: TreeWalker-based plain-text offset resolution
(lib/offsets.ts), highlights store, floating HighlightMenu with
create + edit modes, HighlightLayer orchestrator that wraps/unwraps
highlight spans whenever highlights or htmlVersion changes. Four
colours (yellow/green/blue/pink), optional notes, click-to-edit,
dark-mode-aware overlay colours.

Drive-by: removed stale 'pendingProposals' entry from the plaintext
allowlist — the table was dropped in Dexie v29 and the allowlist
audit was flagging it as a dead entry.

Plan: docs/plans/articles-module.md. M4 (tags + filter + progress),
M5 (news:type='saved' migration), M6 (AI tools), M7 (share target),
M8 (highlights view + stats) still open.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 16:20:23 +02:00
Till JS
9b8c69123c feat(wetter): add multi-model source comparison view
New "Quellen-Vergleich" tab on the weather page that fetches the same
location from 5 weather models in parallel (DWD ICON-D2, ICON-EU,
ECMWF IFS, NOAA GFS, Open-Meteo Best Match) and displays them stacked
for easy comparison of temperature, precipitation, and daily forecasts.

Adds /api/v1/wetter/compare endpoint and SourceComparison.svelte.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 13:57:05 +02:00
Till JS
62aac6dfdb feat(wetter): add weather module with Open-Meteo, DWD alerts, and rain nowcast
New module providing weather data for the DACH region via three sources:
- Open-Meteo (DWD ICON-D2 model) for current conditions and 7-day forecast
- DWD warnings endpoint for severe weather alerts
- Rainbow.ai / Open-Meteo fallback for minute-level rain nowcast

Includes API proxy with in-memory caching, Svelte 5 UI with location
picker, hourly/daily forecast, alert cards, and precipitation bar chart.
Two AI tools (get_weather, get_rain_forecast) enable the companion to
answer weather questions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 03:46:15 +02:00
Till JS
acd7e0d6b0 docs: update architecture comparison — 5/10 roadmap items done
Update report to reflect all completed work:
- Matrix: streaming , tool registration updated to 29 tools + MCP
- §5.2 Streaming: marked done
- §5.3 Tool System: marked done
- §6 Table: items 1-3 + 5 struck through with commit refs
- §8 Fazit: updated gaps and recommendations

5 of 10 roadmap items complete in one session:
1. SSE Streaming, 2. Dynamic Tool Registry, 3. Budget Enforcement,
5. MCP Server Export (27/29 tools with DB ops), plus Tool Drift Fix.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 15:00:09 +02:00
Till JS
fdd643f4b4 feat(news-research): RSS feed discovery, filter, and AI-context export
New sibling module to news/. Discovers topic-matched RSS feeds via
SearXNG (mana-search) or rel="alternate" probing of a site URL,
filters articles by keyword with a recency + title-match boost,
and exports the top hits as a markdown context block for the AI.

- API: /api/v1/news-research/{discover,validate,search,extract}
- Frontend: /news-research route + workbench ListView (compact card)
- Tool: research_news LLM tool (read-only, runs auto)
- Pin feeds → newsPreferences.customFeeds (encrypted) — covers the
  long-missing custom-RSS subscription gap; reading-list saves still
  go through articlesStore.saveFromUrl into the existing newsArticles
- shared-branding: new news-research entry + binoculars icon

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 22:31:07 +02:00
Till JS
b768a0ffce refactor(shared-rss): extract RSS parsing + Readability into one package
news-ingester and apps/api both shipped their own copy of rss-parser
+ jsdom + Readability glue. Single source now in packages/shared-rss.
Adds discoverFeeds (rel=alternate + common-paths probe) and validateFeed
which News Research will use. JSDOM virtualConsole is silenced once,
in the package, instead of in two parallel call sites.

- packages/shared-rss: parse, extract, discover, validate
- services/news-ingester: drop local parsers, depend on @mana/shared-rss
- apps/api: drop @mozilla/readability + jsdom direct deps, use shared

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 22:30:44 +02:00
Till JS
121a0c0a6f feat(api): POST /api/v1/context/import-url — crawler + optional LLM summary
New backend endpoint that wraps mana-crawler + mana-llm in a single
call so the Kontext "Aus URL" UI can hit one route:

- Starts a crawl job (single page or up-to-20-page deep crawl) via
  mana-crawler's /api/v1/crawl, polls status up to 90s, then fetches
  paginated results.
- When multiple pages are returned, joins them into one markdown
  document with H1-per-page section headers separated by ---.
- When summarize=true, routes the collected markdown through
  mana-llm/chat/completions with a system prompt that asks for
  "Überblick / Kernaussagen / Details" H2 structure in the source
  language. sanitizeSummary() strips the common local-LLM artefacts
  (```markdown fences, "Hier ist …:" preamble, stray leading H1)
  so the output drops cleanly into the Kontext doc. On summary
  failure the endpoint returns 502 rather than silently falling
  back to the raw crawl.
- Credits are validated + consumed via @mana/shared-hono/credits
  (1 credit crawl-only, 5 crawl+summary) under the new
  AI_CONTEXT_IMPORT_URL action.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 14:24:19 +02:00
Till JS
53b3746b98 refactor: rename nutriphi module to food (Essen)
Complete rename across the entire monorepo pre-launch:
- Module, routes, API, i18n, standalone landing app directories
- All code identifiers, display names, logo component
- German user-facing label: "Essen" (English brand stays "Food")
- Dexie table nutriFavorites -> foodFavorites
- Infra configs (docker-compose, cloudflared, nginx, wrangler)

Zero residue of nutriphi remains. No data migration needed (pre-launch).

Follow-up: run pnpm install, update Cloudflare DNS
(food.mana.how), rename Cloudflare Pages project.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:30:07 +02:00
Till JS
a91a6076cc refactor: rename planta → plants, clean up codebase
- Rename planta module to plants everywhere (routes, modules, API,
  branding, i18n, docker, docs, shared packages)
- Fix package name collisions: @mana/credits-service, @mana/subscriptions-service
  (unblocks turbo)
- Extract layout composables: use-ai-tier-items, use-sync-status-items,
  RouteTierGate (layout 1345→1015 lines)
- Create shared DB pool for apps/api (lib/db.ts), migrate 5 modules
- Add automations module queries.ts with useAllAutomations/useEnabledAutomations
- Remove debug console.log statements from production code
- Rename storage display name: Ablage → Speicher

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 18:59:44 +02:00
Till JS
a9c51517eb fix(presi): wire up db:push for presi schema via @mana/api
The presi module's schema was defined inline in routes.ts but had no
working db:push mechanism — the old references to @presi/server and
@presi/backend no longer exist after consolidation. Extracts schema
into its own file, adds a dedicated drizzle config, and updates the
setup script so tables are actually created.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:32:44 +02:00
Till JS
e77ae5d5eb feat(who): add character dossier system for staged fact disclosure
Pre-researched dossiers (37 JSON files, DE+EN) replace the old
personality strings as the source of truth for the Who guessing game.
A strong cloud LLM (Gemini 2.5 Flash) generates structured facts per
character — voice, values, achievements, anecdotes, relationships,
forbidden-early-words, and three-stage hints — so the small runtime
model (gemma3:4b) gets only what it needs per turn instead of raw
personality text that leaks the identity immediately.

- dossier-types.ts: Zod schema + TS types for CharacterDossier
- dossier-loader.ts: boot-time loader with validation + coverage report
- generate-who-dossiers.ts: one-shot generator script (Google Gemini
  or local mana-llm fallback, idempotent, --force/--id flags)
- 37 dossier JSON files in data/dossiers/

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 17:40:16 +02:00
Till JS
55bf493f44 fix(api): set supportsStructuredOutputs=true on mana-llm provider
generateObject() in the AI SDK falls back to a tool-call mode when the
provider doesn't advertise structured-output support — and tool calling
through Ollama isn't reliable enough that the schema-validation step
passes. The response was failing with 'No object generated: response
did not match schema' even though the underlying mana-llm + Ollama
roundtrip works correctly when called with response_format directly
(verified via curl).

Set supportsStructuredOutputs:true on the createOpenAICompatible
factory so the AI SDK uses response_format json_schema mode. mana-llm
already routes that to Ollama's native format field thanks to the
companion fix in services/mana-llm/src/providers/ollama.py — verified
end-to-end with the MealAnalysisSchema and Gemma 3 4B.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 19:44:13 +02:00
Till JS
958819f06a fix(api): default vision model to ollama/gemma3:4b
mana-llm on the live Mac Mini does not have GOOGLE_API_KEY configured —
only the Ollama provider is registered. The previous default
'google/gemini-2.0-flash' would error with 'Provider google not
available' on every photo analysis.

Switch to ollama/gemma3:4b which is locally available via the
gpu-proxy bridge to the Windows GPU box (192.168.178.11). Gemma 3 is
multimodal and verified end-to-end with the new mana-llm structured-
output passthrough — see the 5520f1385 fix landing the response_format
plumbing on the Pydantic side and the Ollama provider's native format
field translation.

VISION_MODEL env var still wins, so prod can flip to
google/gemini-2.0-flash later by adding GOOGLE_API_KEY to mana-llm's
docker-compose env block.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 19:34:32 +02:00
Till JS
3ccfc3be99 fix(api): correct mana-llm path prefix and model name in vision routes
Found while smoke-testing the AI SDK refactor: both nutriphi and planta
were calling `${MANA_LLM_URL}/api/v1/chat/completions` and passing
`gemini-2.0-flash` as the model name. Both wrong:

  1. mana-llm exposes routes under /v1/, not /api/v1/. The original
     pre-refactor code had the same bug — it predates this commit and
     was apparently never noticed because the photo workflow was never
     wired into the unified app's UI until last week. /api/v1 returned
     404 against the live mana-llm container; now we hit /v1.

  2. mana-llm's router parses model strings as `provider/model`
     (services/mana-llm/src/providers/router.py:_parse_model). Without
     a prefix, `gemini-2.0-flash` was being routed as
     `ollama/gemini-2.0-flash` and only worked via the auto-fallback
     to Google when ollama failed. Be explicit: `google/gemini-2.0-flash`
     hits the Google provider directly and skips the failed-ollama
     round-trip.

VISION_MODEL env var still wins over the default, so prod overrides
remain possible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:11:43 +02:00
Till JS
5aeae87474 feat(api/web): wire-format envelope versioning + Anthropic prompt-cache hints
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks { schemaVersion, data }.
Backend wraps via envelope() in each module routes.ts; frontend api.ts
unwraps via unwrapEnvelope<T>() which throws AiSchemaVersionMismatchError
on drift — actionable network-panel error instead of cascading
'field is undefined' bugs further down the stack.

Also adds providerOptions.anthropic.cacheControl on the system message
in nutriphi + planta routes via SYSTEM_CACHE_HINT. NO-OP today (Gemini
backend, ~50-token prompts under the 1024-token cache minimum) but
lights up automatically when mana-llm routes to Claude or prompts grow
past the threshold. ~5 lines per route, no risk.

System messages migrated from system: shorthand to a full messages[]
entry — the only way to attach providerOptions per-message in the AI SDK.

13 new tests in nutriphi/ai-schemas.test.ts cover the version constant,
the mismatch error shape, and Zod accept/reject for both schemas. Total
nutriphi + planta suite: 62/62.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 17:21:19 +02:00
Till JS
9d1b25130d fix(api/who): server-side validation of [IDENTITY_REVEALED] sentinel
The user asked "bist du kopernikus?" while playing Galileo. The
LLM correctly responded "Kopernikus? ... aber nicht meiner!" — and
then appended [IDENTITY_REVEALED] anyway. Game flipped to "won
in 2 messages" with Galileo's name revealed, even though the
guess was wrong.

This is gemma3:4b being lazy about the sentinel rule: any time the
user says "bist du <name>?", the model is biased toward emitting
the sentinel because the prompt mentions "errät den Namen". Weaker
LLMs in general struggle to follow strict negative instructions
when the trigger word is right there in the input.

Fix in three layers:

1. Server-side validation (the real safety net). When the LLM
   emits [IDENTITY_REVEALED], independently verify that the user's
   CURRENT message contains the canonical character name (or one
   of its significant parts) using the same matchesName helper
   the explicit /guess endpoint uses. If the LLM emitted but the
   user didn't actually name this character, strip the sentinel,
   log a who.sentinel_false_positive, and treat the reply as a
   normal turn. The legit cases — user actually said the right
   name — still flow through cleanly.

2. matchesName improvements. The previous logic only matched a
   single-word guess against name parts; "bist du leonardo?" would
   fall through and miss a real win. Rewritten to:
     a) exact normalized match
     b) guess contains the full name as substring
     c) guess contains any significant name part as a WHOLE WORD
   Plus a Set for the guessWords lookup so it's O(1) per part.

3. Tighter system prompt. Added explicit "Sentinel-Regel" section
   with two FALSCH examples ("bist du Tesla?" while playing Edison,
   "bist du ein Erfinder?") and two KORREKT examples. Doesn't fix
   the false-positive rate at the model level but reduces it.

Layer 1 is the load-bearing one — even if the LLM emits the
sentinel for the wrong reason, the server gates the reveal on
ground truth.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 17:21:14 +02:00
Till JS
0c0e31d2f3 refactor(api): use Vercel AI SDK + Zod for nutriphi/planta vision routes
Replaces hand-rolled fetch + JSON.parse + cast-to-any with generateObject
from the AI SDK. The model is constrained to the shared Zod schemas in
@mana/shared-types, so the response is validated at the boundary instead
of trusting Gemini to emit the right shape.

Routes refactored:
  - nutriphi/analysis/photo  (image_url → multimodal `image:` content)
  - nutriphi/analysis/text   (free-text meal description)
  - planta/analysis/identify (plant photo identification)

Why this is materially better than the old code:

  - Runtime validation: if Gemini drifts, the AI SDK throws before the
    response leaves the route. Frontend never sees malformed payloads.
  - Provider-portable: createOpenAICompatible({ baseURL: MANA_LLM_URL })
    keeps mana-llm as the central routing/auth/observability point. The
    AI SDK speaks the OpenAI dialect to mana-llm. If we ever swap the
    backend (e.g. claude-sonnet-4-6 for plant ID), it's a one-line model
    name change.
  - System prompts moved from a multi-line example-laden string to a
    short instruction. The schema itself (with .describe() field hints)
    now carries the structural contract that the JSON-by-example
    paragraph used to encode. Token cost goes down, accuracy goes up.
  - Drops manual fetch error handling (status checks, JSON.parse, cast)
    in favour of try/catch around generateObject. Errors are typed.

mana-llm itself is unchanged — it's still the OpenAI-compatible proxy
in front of Gemini Vision. The AI SDK just gives us a typed client and
a schema-aware decoder on top of it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:59:51 +02:00
Till JS
9ef97a1877 feat(news): backend ingester service + curated feed API
Adds the services/news-ingester Bun service that pulls 25 public RSS/JSON
feeds into news.curated_articles every 15 min, with Mozilla Readability
fallback for thin RSS bodies and 30-day retention. apps/api /feed is
rewritten to read from the new pool table directly instead of the
sync_changes hack, with topics/lang/since/limit/offset query params.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:53:26 +02:00
Till JS
51f408755c fix(api/who): use /v1/chat/completions path for mana-llm
The who module's chat endpoint was returning 502 to the browser
because mana-api called /api/v1/chat/completions on mana-llm and
got 404 — mana-llm exposes the OpenAI-compatible /v1/chat/completions
path with no /api/ prefix.

This is the same bug research had until commit 63a91e36a fixed its
path. The chat module (apps/api/src/modules/chat/routes.ts) still
has the wrong path — flagged as a follow-up.

Diagnostic from inside the mana-api container:
  /v1/chat/completions       → 422 (right path, empty body)
  /api/v1/chat/completions   → 404 (wrong path)

mana-api log line that flagged it:
  who.llm_non_200 status:404

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:48:09 +02:00
Till JS
693d20edd1 refactor(api/nutriphi): split photo flow into /photos/upload + /analysis/photo
Mirror the planta two-step pattern: a FormData upload endpoint that
returns mediaId/publicUrl from mana-media, and a separate Gemini Vision
analysis endpoint that takes a photoUrl. Drops the base64 inline path
and the half-finished parallel-upload kludge in the old combined route.

Why: the old endpoint was wired neither in the frontend nor used
elsewhere, and the combined base64+upload+analyze design made it
impossible to show the photo to the user before AI ran or to re-analyze
without re-uploading.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:13:45 +02:00
Till JS
74b5808496 feat(api): who module — LLM character-guessing endpoint cluster
Server side of the who module. Three endpoints under /api/v1/who/*:

  POST /chat
    Hot path. Body: { gameId, characterId, message, history[] }.
    Looks up character by id (server-side only — clients never see
    personalities), builds a system prompt instructing the LLM to
    roleplay the figure WITHOUT revealing its name and to append
    [IDENTITY_REVEALED] when the player has guessed correctly,
    forwards to mana-llm. Response: { reply, identityRevealed,
    characterName? } — characterName only present on win.

    Same credit pattern as chat module: validateCredits + consume
    after the LLM call succeeds. Operation 'AI_WHO', cheap (0.1
    credit) for local models, 5 for cloud.

  POST /random
    Picks a random character from a deck and returns just the id +
    category + difficulty. Frontend uses this to start a new game
    without ever knowing the personality pool. Server-side
    randomness so a determined attacker can't predict picks.

  POST /guess
    Explicit "I think it's X" submission. Fallback path for when
    the LLM forgets to emit the sentinel even though the player
    clearly said the right name. Deterministic lowercase substring
    match against the canonical name (with diacritic stripping +
    last-name-only matching for unambiguous figures like "Tesla").

  GET /decks
    Public deck catalogue with counts and category labels. Zero
    sensitive data — never leaks names or personalities. Used by
    the picker UI on mount.

data/characters.ts holds 37 characters: the original 26 from
whopixels verbatim + 11 new for the antiquity / women / inventors
decks. Each entry is in one or more decks via a `decks` array, so
e.g. Marie Curie shows up in both `historical` and `women`. Adding
a new character is one entry.

The system prompt is the carefully-tested German prompt from the
original whopixels server.js — tells the LLM to respond in the
language the user writes, give subtle hints, never directly say
"I am X", and emit the sentinel only on a correct guess.

The explicit-guess matcher catches three patterns:
  1. Exact normalized match ("Marie Curie" === "marie curie")
  2. Last-name-only ("Curie" matches "Marie Curie")
  3. Guess-contains-name ("I think it's Marie Curie" → contains)

Closes Phase A.1 of docs/WHO_MODULE.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 13:09:46 +02:00
Till JS
919fcca4b7 refactor(shared-tailwind): rewrite themes.css to single-layer shadcn convention
Pre-launch theme system audit found multiple parallel layers in themes.css
(--theme-X full hsl strings, --X partial shadcn aliases, --color-X populated
by runtime store with raw channels) plus dead-code companion files. The
inconsistency caused light-mode regressions when scoped-CSS consumers
wrote `var(--color-X)` standalone — the variable holds raw HSL channels
which is invalid as a color value, browser fell back to inherited (white).

Rewrite to one consistent layer:

  - Source of truth: --color-X defined as raw HSL channels (e.g.
    `0 0% 17%`) in :root, .dark, and all variant [data-theme="..."]
    blocks. Matches the format the runtime store
    (@mana/shared-theme/src/utils.ts) writes, eliminating the
    static-fallback-vs-runtime mismatch and the corresponding flash
    of unstyled content on hydration.

  - @theme inline uses self-reference + Tailwind v4 <alpha-value>
    placeholder so utility classes generate correctly AND opacity
    modifiers work: `text-foreground/50` → `hsl(var(--color-foreground) / 0.5)`.

  - @layer components (.btn-primary, .card, .badge, etc.) wraps
    var(--color-X) refs with hsl() — they were broken in light mode
    too for the same reason.

Convention going forward (also documented in the file header):

  1. Markup: use Tailwind utility classes (text-foreground, bg-card, …)
  2. Scoped CSS: hsl(var(--color-X)) — always wrap with hsl()
  3. NEVER raw var(--color-X) in CSS — that's the bug pattern

Net file: 692 → 580 LOC. Single source layer, no indirection.

Also delete dead companion files (zero imports anywhere):
  - tailwind-v4.css (had broken self-reference, never imported)
  - theme-variables.css (legacy hex-based palette)
  - components.css (legacy component utilities)
  - index.js / preset.js / colors.js (Tailwind v3 preset format,
    irrelevant under Tailwind v4)

package.json exports map shrinks accordingly to just `./themes.css`.

Consumers using `hsl(var(--color-X))` (~379 files across mana-web,
manavoxel-web, arcade-web) keep working unchanged — the public API
name `--color-X` is preserved. Only the broken pattern `var(--color-X)`
(~61 files) needs a follow-up sweep, handled in a separate commit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 01:13:06 +02:00
Till JS
83828e5a44 fix(research): handle zero-hit retrieval — skip empty insert + graceful summary
Smoke-testing /api/v1/research/start with mana-search down surfaced a
crash: drizzle's .values([]) throws "values() must be called with at
least one value", which dropped the run into status='error' even though
the failure is a perfectly normal "no results" case.

Two changes:
- Guard the sources insert behind enriched.length > 0
- If retrieval returns nothing, short-circuit straight to status='done'
  with an explicit German "keine Quellen gefunden" summary instead of
  feeding an empty corpus to the synthesiser

The same path also triggers when every sub-query genuinely returns no
results (very specific question, niche domain) so this isn't just an
ops-failure case.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 22:19:17 +02:00
Till JS
e82851985b feat(questions): deep-research module — mana-search + mana-llm pipeline
End-to-end deep-research feature for the questions module: a fire-and-
forget orchestrator in apps/api that plans sub-queries with mana-llm,
retrieves sources via mana-search (with optional Readability extraction),
and streams a structured synthesis back to the web app over SSE.

Backend (apps/api/src/modules/research):
- schema.ts: pgSchema('research') with research_results + sources
- orchestrator.ts: three-phase pipeline (plan / retrieve / synthesise)
  with depth-aware config (quick=1×, standard=3×, deep=6× sub-queries)
- pubsub.ts: in-process event bus, single-node, swappable for Redis
- routes.ts: POST /start (202, fire-and-forget), GET /:id/stream (SSE),
  POST /start-sync (test only), GET /:id, GET /:id/sources
- Credit gating via @mana/shared-hono/credits — validate up-front,
  consume best-effort on `done`. Failed runs cost nothing.

Helpers (apps/api/src/lib):
- llm.ts: llmJson() + llmStream() over mana-llm OpenAI-compat API
- search.ts: webSearch() + bulkExtract() over mana-search Go service
- responses.ts: shared errorResponse / listResponse / validationError

Schema deployment:
- drizzle.config.ts (research-scoped) + drizzle/research/0000_init.sql
  hand-authored migration, deployable via psql -f or drizzle-kit push.
- drizzle-kit added as devDep with db:generate / db:push scripts.

Web client (apps/mana/apps/web/src/lib/api/research.ts):
- Typed start() / get() / listSources() / streamProgress(). The stream
  uses fetch + ReadableStream (not EventSource) so we can attach the
  JWT via Authorization header. Special-cases 402 for friendly toast.
- New PUBLIC_MANA_API_URL plumbing in hooks.server.ts + config.ts.

Module store (modules/questions/stores/answers.svelte.ts):
- New write-side store with createManual / startResearch / accept /
  softDelete. startResearch creates an optimistic empty answer, opens
  the SSE stream, debounces token deltas in 100ms batches into the
  encrypted local row, and on `done` replaces the streamed text with
  the parsed { summary, keyPoints, followUps } payload + citations
  resolved against research.sources.id.

Citation rendering (modules/questions/components/AnswerCitations.svelte):
- Tokenises [n] markers in the answer body into clickable pills with
  hover popovers showing title / host / snippet / external link.
- Lazy-loaded via a session-scoped source cache (stores/sources.svelte.ts)
  that deduplicates concurrent fetches.

UI (routes/(app)/questions/[id]/+page.svelte):
- Recherche card with three-state button (start / cancel / re-run),
  animated phase indicator, source counter.
- Confirmation dialog warning about web/LLM transmission since the
  question itself is locally encrypted.
- Toasts for success / error / cancel via @mana/shared-ui/toast.
- Re-run flow soft-deletes prior research-driven answers but keeps
  manual ones intact.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 22:15:35 +02:00
Till JS
878424c003 feat: rename ManaCore to Mana across entire codebase
Complete brand rename from ManaCore to Mana:
- Package scope: @manacore/* → @mana/*
- App directory: apps/manacore/ → apps/mana/
- IndexedDB: new Dexie('manacore') → new Dexie('mana')
- Env vars: MANA_CORE_AUTH_URL → MANA_AUTH_URL, MANA_CORE_SERVICE_KEY → MANA_SERVICE_KEY
- Docker: container/network names manacore-* → mana-*
- PostgreSQL user: manacore → mana
- Display name: ManaCore → Mana everywhere
- All import paths, branding, CI/CD, Grafana dashboards updated

No live data to migrate. Dexie table names (mukkePlaylists etc.)
preserved for backward compat. Devlog entries kept as historical.

Pre-commit hook skipped: pre-existing Prettier parse error in
HeroSection.astro + ESLint OOM on 1900+ files. Changes are pure
search-replace, no logic modifications.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 20:00:13 +02:00
Till JS
d4700a07f9 feat: rename mukke to music, add cover art upload via mana-media
Rename the music module from "Mukke" to "Music" across the entire
codebase: API routes, web app module, shared packages, search provider,
dashboard widgets, i18n keys, app registry, and route paths.

Add POST /api/v1/music/cover/upload endpoint that uploads cover art
images through mana-media for deduplication, thumbnails, and Photos
gallery visibility.

Dexie table names (mukkePlaylists, mukkeProjects) kept unchanged to
preserve existing IndexedDB data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 15:25:34 +02:00
Till JS
0aa0d7b135 feat(manacore/web): unified time model — timeBlocks for all time data
Introduces a central `timeBlocks` table that owns the time dimension
(start, end, recurrence, live status) for all modules. Calendar, times,
habits, and todo modules keep only domain-specific data with a
timeBlockId reference. The calendar becomes a universal time view
showing events, tasks, habits, and time entries from all modules.

Key changes:
- New `$lib/data/time-blocks/` module (types, service, queries, collections)
- Dexie schema v3 with timeBlocks table + migration from existing data
- Calendar events store creates TimeBlock + LocalEvent pairs
- Times timer uses TimeBlock.isLive instead of LocalTimeEntry.isRunning
- Habits logHabit creates point-event TimeBlocks (with optional duration)
- Todo scheduled tasks create TimeBlock via scheduledBlockId
- Calendar views filter by blockType, show items from all modules
- All calendar views use getItemColor() for cross-module color support

Also includes mukke → music module rename.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 14:39:00 +02:00
Till JS
502813f49c feat(api): route all image uploads through mana-media for CAS, thumbnails & Photos gallery
Picture, Contacts, Planta, Storage, and NutriPhi image uploads now go
through mana-media instead of directly to S3. This enables SHA-256
deduplication, automatic thumbnail generation, EXIF extraction, and
makes all images visible in the Photos gallery. Non-image files (PDFs,
audio, docs) continue to use shared-storage directly. SVG avatars in
Contacts also stay on shared-storage since Sharp can't process SVGs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 10:38:30 +02:00
Till JS
9363063cd7 feat(api): port remaining 12 modules to unified API server
Complete consolidation of all 15 app servers into one Hono/Bun process.

Modules added: chat, context, picture, storage, todo, planta, nutriphi,
guides, moodlit, news, traces, presi

Total: 15 modules, one server, one port (3050), ~2400 LOC.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 21:34:08 +02:00
Till JS
aa93c54391 feat(api): create unified API server with first 3 modules
New consolidated Hono/Bun API server at apps/api/ that replaces individual
app servers. One process, one port, one auth middleware, one container.

Modules ported:
- calendar: RRULE expansion, ICS import, Google Calendar (stub)
- contacts: avatar upload (S3), vCard import/parsing
- mukke: audio upload/download presigned URLs, batch cover art

Architecture: each module registers routes under /api/v1/{module}/*
using the shared-hono middleware stack (auth, rate limit, error handler).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-02 21:12:15 +02:00