New "Quellen-Vergleich" tab on the weather page that fetches the same
location from 5 weather models in parallel (DWD ICON-D2, ICON-EU,
ECMWF IFS, NOAA GFS, Open-Meteo Best Match) and displays them stacked
for easy comparison of temperature, precipitation, and daily forecasts.
Adds /api/v1/wetter/compare endpoint and SourceComparison.svelte.
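Roughly how the endpoint fans out (sketch; model ids and the fetch helper are illustrative):

  const MODELS = ['icon_d2', 'icon_eu', 'ecmwf_ifs04', 'gfs_seamless', 'best_match'];
  const results = await Promise.allSettled(
    MODELS.map((m) => fetchOpenMeteo(lat, lon, m)), // fetchOpenMeteo: hypothetical helper
  );
  // allSettled: a slow or failing model becomes a gap in the UI, not a failed request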
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Weather data is public — no user-specific data involved. Move the
wetter route registration above authMiddleware() so requests don't
require a JWT token. Rate limiting still applies.
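In Hono terms (sketch; names illustrative):

  app.route('/api/v1/wetter', wetterRoutes); // registered before the auth gate → public
  app.use('/api/v1/*', authMiddleware());    // JWT required for everything registered below
  // rate limiting still applies (assumed registered earlier in the chain)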
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New module providing weather data for the DACH region via three sources:
- Open-Meteo (DWD ICON-D2 model) for current conditions and 7-day forecast
- DWD warnings endpoint for severe weather alerts
- Rainbow.ai / Open-Meteo fallback for minute-level rain nowcast
Includes API proxy with in-memory caching, Svelte 5 UI with location
picker, hourly/daily forecast, alert cards, and precipitation bar chart.
Two AI tools (get_weather, get_rain_forecast) enable the companion to
answer weather questions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Complete tool handler coverage for the MCP server:
Todo: complete_tasks_by_title
Calendar: create_event (with timeBlock)
Notes: update_note, append_to_note, add_tag_to_note
Places: create_place, visit_place, get_places
Drink: log_drink, get_drink_progress, undo_drink
Food: log_meal, nutrition_summary
Journal: create_journal_entry
Habits: create_habit, log_habit (get_habits improved)
News: save_news_article
27 of 29 tools now have real implementations. Remaining 2
(research_news, get_current_location) need external service
calls that aren't available in the API server context.
Also updates architecture comparison report to mark MCP as done.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mount an MCP (Model Context Protocol) server at /api/v1/mcp in the
unified Hono API. External clients like Claude Desktop, Cursor, and
VS Code Copilot can discover and call all 29 Mana tools via the
standard MCP protocol.
Architecture:
- WebStandardStreamableHTTPServerTransport for Bun/Hono compatibility
- AI_TOOL_CATALOG → MCP tool definitions with JSON Schema (via Zod)
- Stateful sessions with Mcp-Session-Id header
- Auth via existing authMiddleware (JWT or API key)
Phase 1 scope: tools/list returns all 29 tools with schemas,
tools/call acknowledges with descriptive messages. Phase 2 will add
actual DB reads/writes via sync_changes.
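The catalog → MCP mapping is roughly (sketch; assumes each AI_TOOL_CATALOG entry carries a Zod schema, and zod-to-json-schema stands in for whatever conversion is actually used):

  import { zodToJsonSchema } from 'zod-to-json-schema';

  const tools = Object.entries(AI_TOOL_CATALOG).map(([name, def]) => ({
    name,
    description: def.description,
    inputSchema: zodToJsonSchema(def.schema), // JSON Schema for tools/list
  }));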
Usage:
Claude Desktop config:
{"mcpServers": {"mana": {"url": "http://localhost:3060/api/v1/mcp"}}}
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New sibling module to news/. Discovers topic-matched RSS feeds via
SearXNG (mana-search) or rel="alternate" probing of a site URL,
filters articles by keyword with a recency + title-match boost,
and exports the top hits as a markdown context block for the AI.
- API: /api/v1/news-research/{discover,validate,search,extract}
- Frontend: /news-research route + workbench ListView (compact card)
- Tool: research_news LLM tool (read-only, runs auto)
- Pin feeds → newsPreferences.customFeeds (encrypted) — covers the
long-missing custom-RSS subscription gap; reading-list saves still
go through articlesStore.saveFromUrl into the existing newsArticles table
- shared-branding: new news-research entry + binoculars icon
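The keyword filter's recency + title-match scoring, roughly (weights illustrative):

  function scoreArticle(a: { title: string; publishedAt: Date }, keyword: string): number {
    const ageDays = (Date.now() - a.publishedAt.getTime()) / 86_400_000;
    let score = Math.max(0, 1 - ageDays / 30);               // recency decay over 30 days
    if (a.title.toLowerCase().includes(keyword)) score += 1; // title-match boost
    return score;
  }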
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
news-ingester and apps/api both shipped their own copy of rss-parser
+ jsdom + Readability glue. Single source now in packages/shared-rss.
Adds discoverFeeds (rel=alternate + common-paths probe) and validateFeed,
which News Research will use. JSDOM virtualConsole is silenced once,
in the package, instead of in two parallel call sites.
- packages/shared-rss: parse, extract, discover, validate
- services/news-ingester: drop local parsers, depend on @mana/shared-rss
- apps/api: drop @mozilla/readability + jsdom direct deps, use shared
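The rel=alternate probe boils down to (sketch; real parsing is more careful):

  const html = await (await fetch(siteUrl)).text();
  const feedUrls = (html.match(/<link\b[^>]*>/gi) ?? [])
    .filter((l) => /rel=["']alternate["']/i.test(l) && /(rss|atom)\+xml/i.test(l))
    .map((l) => /href=["']([^"']+)["']/i.exec(l)?.[1])
    .filter((h): h is string => !!h)
    .map((h) => new URL(h, siteUrl).toString());
  // common-paths probe (/feed, /rss.xml, /atom.xml, …) kicks in when this comes up empty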
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New backend endpoint that wraps mana-crawler + mana-llm in a single
call so the Kontext "Aus URL" UI can hit one route:
- Starts a crawl job (single page or up-to-20-page deep crawl) via
mana-crawler's /api/v1/crawl, polls status up to 90s, then fetches
paginated results.
- When multiple pages are returned, joins them into one markdown
document with H1-per-page section headers separated by ---.
- When summarize=true, routes the collected markdown through
mana-llm/chat/completions with a system prompt that asks for
"Überblick / Kernaussagen / Details" H2 structure in the source
language. sanitizeSummary() strips the common local-LLM artefacts
(```markdown fences, "Hier ist …:" preamble, stray leading H1)
so the output drops cleanly into the Kontext doc (sketched after
this list). On summary
failure the endpoint returns 502 rather than silently falling
back to the raw crawl.
- Credits are validated + consumed via @mana/shared-hono/credits
(1 credit crawl-only, 5 crawl+summary) under the new
AI_CONTEXT_IMPORT_URL action.
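What sanitizeSummary() guards against, sketched (representative patterns, not the exact list):

  function sanitizeSummary(raw: string): string {
    let s = raw.trim();
    s = s.replace(/^```(?:markdown)?\s*/i, '').replace(/\s*```$/, ''); // strip code fences
    s = s.replace(/^Hier ist .*?:\s*/i, '');                          // strip preamble
    s = s.replace(/^#\s[^\n]*\n+/, '');                               // strip stray leading H1
    return s.trim();
  }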
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The presi module's schema was defined inline in routes.ts but had no
working db:push mechanism — the old references to @presi/server and
@presi/backend no longer exist after consolidation. Extracts schema
into its own file, adds a dedicated drizzle config, and updates the
setup script so tables are actually created.
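The dedicated config is along these lines (sketch; paths illustrative):

  // presi/drizzle.config.ts
  import { defineConfig } from 'drizzle-kit';

  export default defineConfig({
    schema: './src/modules/presi/schema.ts',
    out: './drizzle/presi',
    dialect: 'postgresql',
    dbCredentials: { url: process.env.DATABASE_URL! },
  });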
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-researched dossiers (37 JSON files, DE+EN) replace the old
personality strings as the source of truth for the Who guessing game.
A strong cloud LLM (Gemini 2.5 Flash) generates structured facts per
character — voice, values, achievements, anecdotes, relationships,
forbidden-early-words, and three-stage hints — so the small runtime
model (gemma3:4b) gets only what it needs per turn instead of raw
personality text that leaks the identity immediately.
- dossier-types.ts: Zod schema + TS types for CharacterDossier
- dossier-loader.ts: boot-time loader with validation + coverage report
- generate-who-dossiers.ts: one-shot generator script (Google Gemini
or local mana-llm fallback, idempotent, --force/--id flags)
- 37 dossier JSON files in data/dossiers/
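The dossier shape, roughly (sketch; the real schema in dossier-types.ts is richer):

  import { z } from 'zod';

  const CharacterDossierSchema = z.object({
    id: z.string(),
    lang: z.enum(['de', 'en']),
    voice: z.string(),                        // speaking style for the roleplay
    values: z.array(z.string()),
    achievements: z.array(z.string()),
    anecdotes: z.array(z.string()),
    relationships: z.array(z.string()),
    forbiddenEarlyWords: z.array(z.string()), // identity giveaways to hold back early
    hints: z.tuple([z.string(), z.string(), z.string()]), // three-stage hints
  });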
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
generateObject() in the AI SDK falls back to a tool-call mode when the
provider doesn't advertise structured-output support — and tool calling
through Ollama isn't reliable enough for the schema-validation step to
pass. The response was failing with 'No object generated: response
did not match schema' even though the underlying mana-llm + Ollama
roundtrip works correctly when called with response_format directly
(verified via curl).
Set supportsStructuredOutputs:true on the createOpenAICompatible
factory so the AI SDK uses response_format json_schema mode. mana-llm
already routes that to Ollama's native format field thanks to the
companion fix in services/mana-llm/src/providers/ollama.py — verified
end-to-end with the MealAnalysisSchema and Gemma 3 4B.
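The one-flag change (baseURL illustrative):

  import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

  const manaLlm = createOpenAICompatible({
    name: 'mana-llm',
    baseURL: `${MANA_LLM_URL}/v1`,
    supportsStructuredOutputs: true, // response_format json_schema instead of tool-call emulation
  });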
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mana-llm on the live Mac Mini does not have GOOGLE_API_KEY configured —
only the Ollama provider is registered. The previous default
'google/gemini-2.0-flash' would error with 'Provider google not
available' on every photo analysis.
Switch to ollama/gemma3:4b which is locally available via the
gpu-proxy bridge to the Windows GPU box (192.168.178.11). Gemma 3 is
multimodal and verified end-to-end with the new mana-llm structured-
output passthrough — see the 5520f1385 fix landing the response_format
plumbing on the Pydantic side and the Ollama provider's native format
field translation.
VISION_MODEL env var still wins, so prod can flip to
google/gemini-2.0-flash later by adding GOOGLE_API_KEY to mana-llm's
docker-compose env block.
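The precedence in code terms (sketch):

  const VISION_MODEL = process.env.VISION_MODEL ?? 'ollama/gemma3:4b'; // env always wins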
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Found while smoke-testing the AI SDK refactor: both nutriphi and planta
were calling `${MANA_LLM_URL}/api/v1/chat/completions` and passing
`gemini-2.0-flash` as the model name. Both wrong:
1. mana-llm exposes routes under /v1/, not /api/v1/. The original
pre-refactor code had the same bug — it predates this commit and
was apparently never noticed because the photo workflow was never
wired into the unified app's UI until last week. /api/v1 returned
404 against the live mana-llm container; now we hit /v1.
2. mana-llm's router parses model strings as `provider/model`
(services/mana-llm/src/providers/router.py:_parse_model). Without
a prefix, `gemini-2.0-flash` was being routed as
`ollama/gemini-2.0-flash` and only worked via the auto-fallback
to Google when ollama failed. Be explicit: `google/gemini-2.0-flash`
hits the Google provider directly and skips the failed-ollama
round-trip.
VISION_MODEL env var still wins over the default, so prod overrides
remain possible.
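The routing rule, paraphrased in TypeScript (the real code is Python in router.py):

  function parseModel(model: string): { provider: string; name: string } {
    const i = model.indexOf('/');
    if (i === -1) return { provider: 'ollama', name: model }; // unprefixed → ollama
    return { provider: model.slice(0, i), name: model.slice(i + 1) };
  }
  // parseModel('gemini-2.0-flash')        → ollama provider (wrong, fallback-dependent)
  // parseModel('google/gemini-2.0-flash') → google provider directly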
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds AI_SCHEMA_VERSION + AiResponseEnvelope<T> in @mana/shared-types so
every AI structured-output endpoint speaks { schemaVersion, data }.
Backend wraps via envelope() in each module routes.ts; frontend api.ts
unwraps via unwrapEnvelope<T>() which throws AiSchemaVersionMismatchError
on drift — actionable network-panel error instead of cascading
'field is undefined' bugs further down the stack.
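The contract, sketched (version value and error signature illustrative; the real ones live in @mana/shared-types):

  const AI_SCHEMA_VERSION = 1;
  interface AiResponseEnvelope<T> { schemaVersion: number; data: T }

  const envelope = <T>(data: T): AiResponseEnvelope<T> =>
    ({ schemaVersion: AI_SCHEMA_VERSION, data });

  function unwrapEnvelope<T>(body: AiResponseEnvelope<T>): T {
    if (body.schemaVersion !== AI_SCHEMA_VERSION)
      throw new AiSchemaVersionMismatchError(body.schemaVersion, AI_SCHEMA_VERSION);
    return body.data;
  }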
Also adds providerOptions.anthropic.cacheControl on the system message
in nutriphi + planta routes via SYSTEM_CACHE_HINT. NO-OP today (Gemini
backend, ~50-token prompts under the 1024-token cache minimum) but
lights up automatically when mana-llm routes to Claude or prompts grow
past the threshold. ~5 lines per route, no risk.
System messages migrated from system: shorthand to a full messages[]
entry — the only way to attach providerOptions per-message in the AI SDK.
13 new tests in nutriphi/ai-schemas.test.ts cover the version constant,
the mismatch error shape, and Zod accept/reject for both schemas. Total
nutriphi + planta suite: 62/62.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The user asked "bist du kopernikus?" while playing Galileo. The
LLM correctly responded "Kopernikus? ... aber nicht meiner!" — and
then appended [IDENTITY_REVEALED] anyway. Game flipped to "won
in 2 messages" with Galileo's name revealed, even though the
guess was wrong.
This is gemma3:4b being lazy about the sentinel rule: any time the
user says "bist du <name>?", the model is biased toward emitting
the sentinel because the prompt mentions "errät den Namen". Weaker
LLMs in general struggle to follow strict negative instructions
when the trigger word is right there in the input.
Fix in three layers:
1. Server-side validation (the real safety net). When the LLM
emits [IDENTITY_REVEALED], independently verify that the user's
CURRENT message contains the canonical character name (or one
of its significant parts) using the same matchesName helper
the explicit /guess endpoint uses. If the LLM emitted but the
user didn't actually name this character, strip the sentinel,
log a who.sentinel_false_positive, and treat the reply as a
normal turn. The legit cases — user actually said the right
name — still flow through cleanly.
2. matchesName improvements. The previous logic only matched a
single-word guess against name parts; "bist du leonardo?" would
fall through and miss a real win. Rewritten to:
a) exact normalized match
b) guess contains the full name as substring
c) guess contains any significant name part as a WHOLE WORD
Plus a Set for the guessWords lookup so it's O(1) per part
(sketched below).
3. Tighter system prompt. Added explicit "Sentinel-Regel" section
with two FALSCH examples ("bist du Tesla?" while playing Edison,
"bist du ein Erfinder?") and two KORREKT examples. Doesn't fix
the false-positive rate at the model level but reduces it.
Layer 1 is the load-bearing one — even if the LLM emits the
sentinel for the wrong reason, the server gates the reveal on
ground truth.
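matchesName, sketched (normalization simplified; the "significant part" cutoff is illustrative):

  function matchesName(guess: string, canonical: string): boolean {
    const norm = (s: string) =>
      s.toLowerCase().normalize('NFD').replace(/[\u0300-\u036f]/g, '').trim();
    const g = norm(guess);
    const name = norm(canonical);
    if (g === name) return true;                // a) exact normalized match
    if (g.includes(name)) return true;          // b) guess contains the full name
    const guessWords = new Set(g.split(/\W+/)); // O(1) lookup per part
    return name.split(' ')
      .some((part) => part.length > 2 && guessWords.has(part)); // c) whole-word part match
  }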
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaces hand-rolled fetch + JSON.parse + cast-to-any with generateObject
from the AI SDK. The model is constrained to the shared Zod schemas in
@mana/shared-types, so the response is validated at the boundary instead
of trusting Gemini to emit the right shape.
Routes refactored:
- nutriphi/analysis/photo (image_url → multimodal `image:` content)
- nutriphi/analysis/text (free-text meal description)
- planta/analysis/identify (plant photo identification)
Why this is materially better than the old code:
- Runtime validation: if Gemini drifts, the AI SDK throws before the
response leaves the route. Frontend never sees malformed payloads.
- Provider-portable: createOpenAICompatible({ baseURL: MANA_LLM_URL })
keeps mana-llm as the central routing/auth/observability point. The
AI SDK speaks the OpenAI dialect to mana-llm. If we ever swap the
backend (e.g. claude-sonnet-4-6 for plant ID), it's a one-line model
name change.
- System prompts moved from a multi-line example-laden string to a
short instruction. The schema itself (with .describe() field hints)
now carries the structural contract that the JSON-by-example
paragraph used to encode. Token cost goes down, accuracy goes up.
- Drops manual fetch error handling (status checks, JSON.parse, cast)
in favour of try/catch around generateObject. Errors are typed.
mana-llm itself is unchanged — it's still the OpenAI-compatible proxy
in front of Gemini Vision. The AI SDK just gives us a typed client and
a schema-aware decoder on top of it.
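The resulting route shape (sketch; prompt and schema abbreviated, manaLlm is the createOpenAICompatible client):

  import { generateObject } from 'ai';

  const { object } = await generateObject({
    model: manaLlm('google/gemini-2.0-flash'),
    schema: MealAnalysisSchema, // shared Zod schema, validated at the boundary
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'Analysiere dieses Essen.' },
        { type: 'image', image: new URL(photoUrl) },
      ],
    }],
  });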
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the services/news-ingester Bun service that pulls 25 public RSS/JSON
feeds into news.curated_articles every 15 min, with Mozilla Readability
fallback for thin RSS bodies and 30-day retention. apps/api /feed is
rewritten to read from the new pool table directly instead of the
sync_changes hack, with topics/lang/since/limit/offset query params.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The who module's chat endpoint was returning 502 to the browser
because mana-api called /api/v1/chat/completions on mana-llm and
got 404 — mana-llm exposes the OpenAI-compatible /v1/chat/completions
path with no /api/ prefix.
This is the same bug research had until commit 63a91e36a fixed its
path. The chat module (apps/api/src/modules/chat/routes.ts) still
has the wrong path — flagged as a follow-up.
Diagnostic from inside the mana-api container:
/v1/chat/completions → 422 (right path, empty body)
/api/v1/chat/completions → 404 (wrong path)
mana-api log line that flagged it:
who.llm_non_200 status:404
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mirror the planta two-step pattern: a FormData upload endpoint that
returns mediaId/publicUrl from mana-media, and a separate Gemini Vision
analysis endpoint that takes a photoUrl. Drops the base64 inline path
and the half-finished parallel-upload kludge in the old combined route.
Why: the old endpoint was neither wired into the frontend nor used
elsewhere, and the combined base64+upload+analyze design made it
impossible to show the photo to the user before AI ran or to re-analyze
without re-uploading.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Server side of the who module. Three endpoints under /api/v1/who/*:
POST /chat
Hot path. Body: { gameId, characterId, message, history[] }.
Looks up character by id (server-side only — clients never see
personalities), builds a system prompt instructing the LLM to
roleplay the figure WITHOUT revealing its name and to append
[IDENTITY_REVEALED] when the player has guessed correctly,
forwards to mana-llm. Response: { reply, identityRevealed,
characterName? } — characterName only present on win.
Same credit pattern as chat module: validateCredits + consume
after the LLM call succeeds. Operation 'AI_WHO', cheap (0.1
credit) for local models, 5 for cloud.
POST /random
Picks a random character from a deck and returns just the id +
category + difficulty. Frontend uses this to start a new game
without ever knowing the personality pool. Server-side
randomness so a determined attacker can't predict picks.
POST /guess
Explicit "I think it's X" submission. Fallback path for when
the LLM forgets to emit the sentinel even though the player
clearly said the right name. Deterministic lowercase substring
match against the canonical name (with diacritic stripping +
last-name-only matching for unambiguous figures like "Tesla").
GET /decks
Public deck catalogue with counts and category labels. Zero
sensitive data — never leaks names or personalities. Used by
the picker UI on mount.
data/characters.ts holds 37 characters: the original 26 from
whopixels verbatim + 11 new for the antiquity / women / inventors
decks. Each entry is in one or more decks via a `decks` array, so
e.g. Marie Curie shows up in both `historical` and `women`. Adding
a new character is one entry.
The system prompt is the carefully-tested German prompt from the
original whopixels server.js — tells the LLM to respond in the
language the user writes, give subtle hints, never directly say
"I am X", and emit the sentinel only on a correct guess.
The explicit-guess matcher catches three patterns:
1. Exact normalized match ("Marie Curie" === "marie curie")
2. Last-name-only ("Curie" matches "Marie Curie")
3. Guess-contains-name ("I think it's Marie Curie" → contains)
Closes Phase A.1 of docs/WHO_MODULE.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-launch theme system audit found multiple parallel layers in themes.css
(--theme-X full hsl strings, --X partial shadcn aliases, --color-X populated
by runtime store with raw channels) plus dead-code companion files. The
inconsistency caused light-mode regressions when scoped-CSS consumers
wrote `var(--color-X)` standalone — the variable holds raw HSL channels
which is invalid as a color value, browser fell back to inherited (white).
Rewrite to one consistent layer:
- Source of truth: --color-X defined as raw HSL channels (e.g.
`0 0% 17%`) in :root, .dark, and all variant [data-theme="..."]
blocks. Matches the format the runtime store
(@mana/shared-theme/src/utils.ts) writes, eliminating the
static-fallback-vs-runtime mismatch and the corresponding flash
of unstyled content on hydration.
- @theme inline uses self-reference + Tailwind v4 <alpha-value>
placeholder so utility classes generate correctly AND opacity
modifiers work: `text-foreground/50` → `hsl(var(--color-foreground) / 0.5)`.
- @layer components (.btn-primary, .card, .badge, etc.) wraps
var(--color-X) refs with hsl() — they were broken in light mode
too for the same reason.
Convention going forward (also documented in the file header):
1. Markup: use Tailwind utility classes (text-foreground, bg-card, …)
2. Scoped CSS: hsl(var(--color-X)) — always wrap with hsl()
3. NEVER raw var(--color-X) in CSS — that's the bug pattern
Net file: 692 → 580 LOC. Single source layer, no indirection.
Also delete dead companion files (zero imports anywhere):
- tailwind-v4.css (had broken self-reference, never imported)
- theme-variables.css (legacy hex-based palette)
- components.css (legacy component utilities)
- index.js / preset.js / colors.js (Tailwind v3 preset format,
irrelevant under Tailwind v4)
package.json exports map shrinks accordingly to just `./themes.css`.
Consumers using `hsl(var(--color-X))` (~379 files across mana-web,
manavoxel-web, arcade-web) keep working unchanged — the public API
name `--color-X` is preserved. Only the broken pattern `var(--color-X)`
(~61 files) needs a follow-up sweep, handled in a separate commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
End-to-end testing surfaced a 404 from the synth path. mana-llm
(services/mana-llm/src/main.py) mounts the OpenAI-compatible API at
/v1/* — there's no /api prefix.
The first quick-depth e2e run only appeared to work because the planner
is skipped on quick (it just uses the question itself), so llmJson never
fired; only llmStream did. The streaming path used the same wrong
prefix, but that run landed before the bug was caught.
The other apps/api modules (chat, guides, context, traces) all use the
wrong /api/v1/ path too — that's a separate, pre-existing bug to be
addressed in their own commits.
Verified by re-running a standard-depth research run end-to-end against
mana-llm pointed at the GPU server's ollama with gemma3:4b/12b: plan +
retrieve + extract + synth all succeed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Smoke-testing /api/v1/research/start with mana-search down surfaced a
crash: drizzle's .values([]) throws "values() must be called with at
least one value", which dropped the run into status='error' even though
the failure is a perfectly normal "no results" case.
Two changes:
- Guard the sources insert behind enriched.length > 0
- If retrieval returns nothing, short-circuit straight to status='done'
with an explicit German "keine Quellen gefunden" summary instead of
feeding an empty corpus to the synthesiser
The same path also triggers when every sub-query genuinely returns no
results (very specific question, niche domain) so this isn't just an
ops-failure case.
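The guard, in short (markDone is a hypothetical stand-in for the status update):

  if (enriched.length > 0) {
    await db.insert(sources).values(enriched); // .values([]) throws in drizzle
  } else {
    await markDone(runId, 'Keine Quellen gefunden.'); // short-circuit to status='done'
  }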
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
End-to-end deep-research feature for the questions module: a fire-and-
forget orchestrator in apps/api that plans sub-queries with mana-llm,
retrieves sources via mana-search (with optional Readability extraction),
and streams a structured synthesis back to the web app over SSE.
Backend (apps/api/src/modules/research):
- schema.ts: pgSchema('research') with research_results + sources
- orchestrator.ts: three-phase pipeline (plan / retrieve / synthesise)
with depth-aware config (quick=1×, standard=3×, deep=6× sub-queries)
- pubsub.ts: in-process event bus, single-node, swappable for Redis
- routes.ts: POST /start (202, fire-and-forget), GET /:id/stream (SSE),
POST /start-sync (test only), GET /:id, GET /:id/sources
- Credit gating via @mana/shared-hono/credits — validate up-front,
consume best-effort on `done`. Failed runs cost nothing.
Helpers (apps/api/src/lib):
- llm.ts: llmJson() + llmStream() over mana-llm OpenAI-compat API
- search.ts: webSearch() + bulkExtract() over mana-search Go service
- responses.ts: shared errorResponse / listResponse / validationError
Schema deployment:
- drizzle.config.ts (research-scoped) + drizzle/research/0000_init.sql
hand-authored migration, deployable via psql -f or drizzle-kit push.
- drizzle-kit added as devDep with db:generate / db:push scripts.
Web client (apps/mana/apps/web/src/lib/api/research.ts):
- Typed start() / get() / listSources() / streamProgress(). The stream
uses fetch + ReadableStream (not EventSource) so we can attach the
JWT via an Authorization header (sketch below). Special-cases 402
for a friendly toast.
- New PUBLIC_MANA_API_URL plumbing in hooks.server.ts + config.ts.
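Why fetch instead of EventSource, in code (sketch; event parsing simplified and not chunk-boundary safe):

  const res = await fetch(`${apiUrl}/api/v1/research/${id}/stream`, {
    headers: { Authorization: `Bearer ${jwt}` }, // EventSource can't set headers
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      if (line.startsWith('data: ')) onEvent(JSON.parse(line.slice(6)));
    }
  }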
Module store (modules/questions/stores/answers.svelte.ts):
- New write-side store with createManual / startResearch / accept /
softDelete. startResearch creates an optimistic empty answer, opens
the SSE stream, debounces token deltas in 100ms batches into the
encrypted local row, and on `done` replaces the streamed text with
the parsed { summary, keyPoints, followUps } payload + citations
resolved against research.sources.id.
Citation rendering (modules/questions/components/AnswerCitations.svelte):
- Tokenises [n] markers in the answer body into clickable pills with
hover popovers showing title / host / snippet / external link.
- Lazy-loaded via a session-scoped source cache (stores/sources.svelte.ts)
that deduplicates concurrent fetches.
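The tokeniser pass, roughly:

  // 'see [3]' → [text 'see ', citation 3]; split keeps the markers via the capture group
  const tokens = body.split(/(\[\d+\])/g).map((part) => {
    const m = /^\[(\d+)\]$/.exec(part);
    return m ? { type: 'citation' as const, n: Number(m[1]) }
             : { type: 'text' as const, text: part };
  });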
UI (routes/(app)/questions/[id]/+page.svelte):
- Recherche card with three-state button (start / cancel / re-run),
animated phase indicator, source counter.
- Confirmation dialog warning about web/LLM transmission since the
question itself is locally encrypted.
- Toasts for success / error / cancel via @mana/shared-ui/toast.
- Re-run flow soft-deletes prior research-driven answers but keeps
manual ones intact.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename the music module from "Mukke" to "Music" across the entire
codebase: API routes, web app module, shared packages, search provider,
dashboard widgets, i18n keys, app registry, and route paths.
Add POST /api/v1/music/cover/upload endpoint that uploads cover art
images through mana-media for deduplication, thumbnails, and Photos
gallery visibility.
Dexie table names (mukkePlaylists, mukkeProjects) kept unchanged to
preserve existing IndexedDB data.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces a central `timeBlocks` table that owns the time dimension
(start, end, recurrence, live status) for all modules. Calendar, times,
habits, and todo modules keep only domain-specific data with a
timeBlockId reference. The calendar becomes a universal time view
showing events, tasks, habits, and time entries from all modules.
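The shared shape, sketched from the description above (field names illustrative where not named):

  interface TimeBlock {
    id: string;
    blockType: 'event' | 'task' | 'habit' | 'timeEntry';
    start: Date;
    end?: Date;          // absent for point events (e.g. a habit log without duration)
    recurrence?: string; // RRULE
    isLive: boolean;     // replaces LocalTimeEntry.isRunning for the timer
  }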
Key changes:
- New `$lib/data/time-blocks/` module (types, service, queries, collections)
- Dexie schema v3 with timeBlocks table + migration from existing data
- Calendar events store creates TimeBlock + LocalEvent pairs
- Times timer uses TimeBlock.isLive instead of LocalTimeEntry.isRunning
- Habits logHabit creates point-event TimeBlocks (with optional duration)
- Todo scheduled tasks create TimeBlock via scheduledBlockId
- Calendar views filter by blockType, show items from all modules
- All calendar views use getItemColor() for cross-module color support
Also includes mukke → music module rename.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Picture, Contacts, Planta, Storage, and NutriPhi image uploads now go
through mana-media instead of directly to S3. This enables SHA-256
deduplication, automatic thumbnail generation, EXIF extraction, and
makes all images visible in the Photos gallery. Non-image files (PDFs,
audio, docs) continue to use shared-storage directly. SVG avatars in
Contacts also stay on shared-storage since Sharp can't process SVGs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Complete consolidation of all 15 app servers into one Hono/Bun process.
Modules added: chat, context, picture, storage, todo, planta, nutriphi,
guides, moodlit, news, traces, presi
Total: 15 modules, one server, one port (3050), ~2400 LOC.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New consolidated Hono/Bun API server at apps/api/ that replaces individual
app servers. One process, one port, one auth middleware, one container.
Modules ported:
- calendar: RRULE expansion, ICS import, Google Calendar (stub)
- contacts: avatar upload (S3), vCard import/parsing
- mukke: audio upload/download presigned URLs, batch cover art
Architecture: each module registers routes under /api/v1/{module}/*
using the shared-hono middleware stack (auth, rate limit, error handler).
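Wiring, sketched (middleware and module names illustrative):

  import { Hono } from 'hono';

  const app = new Hono();
  app.use('/api/v1/*', errorHandler(), rateLimit(), authMiddleware());
  app.route('/api/v1/calendar', calendarRoutes);
  app.route('/api/v1/contacts', contactsRoutes);
  app.route('/api/v1/mukke', mukkeRoutes);

  export default { port: 3050, fetch: app.fetch }; // Bun entry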
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>