First milestone of the unlisted-share rollout plan (docs/plans/
unlisted-sharing.md). Adds the server-side infrastructure that backs
`visibility='unlisted'` — previously the flag was stamped locally but
led nowhere. After this commit, a token points at an actual snapshot
the SSR share-page will render (M8.3+).
Scope: backend only. No client-side publish/revoke calls yet, no
share route, no UI. That lands in M8.2/M8.3. Anyone hitting the
endpoints manually with curl can exercise the full
publish-fetch-revoke cycle.
Changes:
- New pgSchema `unlisted` with table `snapshots`:
token (pk, 32-char base64url)
user_id, space_id, collection, record_id, blob (jsonb)
created_at, updated_at, expires_at (nullable), revoked_at
Partial unique index on (user_id, collection, record_id) WHERE
revoked_at IS NULL so one record has at most one active token.
Partial btree on expires_at for the cron-cleanup path.
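The 32-char base64url token falls out of 24 random bytes (24 × 4/3 = 32 characters, no padding). A minimal minting sketch, assuming Node/Bun's `crypto` (the actual helper name in the module is not shown here):

```typescript
import { randomBytes } from "node:crypto";

// 24 random bytes encode to exactly 32 base64url characters,
// matching the token (pk, 32-char base64url) column above.
export function mintShareToken(): string {
  return randomBytes(24).toString("base64url");
}
```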
- Hand-authored SQL migration `apps/api/drizzle/unlisted/0000_init.sql`
(manual-apply per the repo's feedback_api_hand_authored_migrations
memory). Already applied to the local mana_platform.
- Drizzle schema `apps/api/src/modules/unlisted/schema.ts`. All id
fields are `text` not uuid — Better-Auth nanoids aren't UUIDs, same
trap we hit with the website module's publish bug.
- mana-api module `apps/api/src/modules/unlisted/`:
POST /api/v1/unlisted/:collection/:recordId (auth)
Body: { spaceId, blob, expiresAt? }. Re-publish reuses the
existing active token (by (user,collection,record) lookup); a
revoke-then-republish mints a fresh token row. Response includes
a fully-qualified share URL built from Origin/Referer/env.
DELETE /api/v1/unlisted/:collection/:recordId (auth)
Soft-revoke. Idempotent — already-revoked returns
{ revoked: 0 } cleanly so client stores can call it
unconditionally on setVisibility-away.
GET /api/v1/unlisted/public/:token (public)
Rate-limited 20/min/token + 60/min/ip so token enumeration is
impractical. 404 for unknown, 410 Gone for revoked or expired.
Cache-Control: private, max-age=60 + X-Robots-Tag: noindex for
SEO isolation. Returns { token, collection, blob, createdAt,
updatedAt, expiresAt }.
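The 404/410 decision for the public endpoint reduces to a small pure function. A sketch under assumed field names (the real handler additionally applies the rate limits and cache headers described above):

```typescript
interface SnapshotRow {
  revokedAt: string | null;
  expiresAt: string | null; // nullable per the schema
}

// Maps a token lookup to the HTTP status the endpoint returns:
// unknown token -> 404, revoked or expired -> 410 Gone, otherwise 200.
export function snapshotStatus(
  row: SnapshotRow | undefined,
  now: Date = new Date(),
): 200 | 404 | 410 {
  if (!row) return 404;
  if (row.revokedAt !== null) return 410;
  if (row.expiresAt !== null && new Date(row.expiresAt) <= now) return 410;
  return 200;
}
```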
- ALLOWED_COLLECTIONS hardcoded allowlist in POST handler
(events, libraryEntries, places — the M8.3+M8.4 scope). Unknown
collection -> 400 COLLECTION_NOT_ALLOWED. Keeps the schema honest
about what the server accepts.
- drizzle.config extended to include the new schema in managed
migrations.
- index.ts wires unlistedPublicRoutes pre-auth (before
authMiddleware) and unlistedRoutes post-auth.
Verified:
- Migration applied to mana_platform — `unlisted.snapshots` exists
with both partial indexes.
- pnpm run type-check (api): clean
- pnpm run validate:all: theme-tokens, theme-parity, crypto-registry,
encrypted-tools all green
- URL build uses Origin/Referer before the env fallback so dev
(http://localhost:5173) and prod (https://mana.how) both work
without env churn.
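The Origin/Referer-before-env precedence can be sketched as a pure helper (the function name and header shape are assumptions; the real logic lives in the POST handler):

```typescript
// Prefer the request's Origin, then the Referer's origin, then the env
// fallback, so dev (http://localhost:5173) and prod both resolve
// correctly without env churn.
export function resolveShareOrigin(
  headers: { origin?: string; referer?: string },
  envFallback: string,
): string {
  if (headers.origin) return headers.origin;
  if (headers.referer) {
    try {
      return new URL(headers.referer).origin;
    } catch {
      // malformed Referer: ignore and fall through to the env fallback
    }
  }
  return envFallback;
}
```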
Next: M8.2 — shared-privacy client helper + SharedLinkControls
component.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The user picks an existing text (journal entry, note, or library
review), the model proposes an ordered panel sequence (prompt +
optional caption + dialogue per panel), the user reviews/edits and
fires a batch generation with sourceInput tagging — which lets
`useStoriesByInput` cross-reference later ("Which comics came out
of this journal entry?").
Backend:
- POST /api/v1/comic/storyboard (Hono route) takes style +
  sourceText + panelCount (+ optional storyContext / sourceModule)
  and calls llmJson() against mana-llm with a
  response_format=json_object prompt. The system prompt pins the
  model to an exact {panels: [{prompt, caption?, dialogue?}]}
  shape, with rules like "no style instructions" (those come from
  the story prefix downstream) and "no panel numbering".
- Defense-in-depth coercion on the response: panels without a
  prompt are filtered out, strings are capped (caption/dialogue
  200, prompt 800), and the panel count is clamped to panelCount.
- Model overridable via the COMIC_STORYBOARD_MODEL env var;
  default ollama/gemma3:4b like writing (local + cheap).
- Both success and error paths log via logger.info /
  logger.error with userId + sourceModule for observability.
- Route registered in apps/api/src/index.ts as /api/v1/comic.
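The defense-in-depth coercion on the storyboard response can be sketched as a pure function (field names follow the response shape the prompt demands; cap values from this commit):

```typescript
interface PanelSuggestion {
  prompt: string;
  caption?: string;
  dialogue?: string;
}

// Drop panels without a prompt, cap string lengths (prompt 800,
// caption/dialogue 200), and clamp the panel count to what was requested.
export function coercePanels(raw: unknown, panelCount: number): PanelSuggestion[] {
  const panels = Array.isArray((raw as any)?.panels) ? (raw as any).panels : [];
  return panels
    .filter((p: any) => typeof p?.prompt === "string" && p.prompt.trim().length > 0)
    .slice(0, panelCount)
    .map((p: any) => ({
      prompt: p.prompt.slice(0, 800),
      ...(typeof p.caption === "string" ? { caption: p.caption.slice(0, 200) } : {}),
      ...(typeof p.dialogue === "string" ? { dialogue: p.dialogue.slice(0, 200) } : {}),
    }));
}
```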
Client:
- api/storyboard.ts: suggestPanels({style, sourceText, panelCount,
  storyContext?, sourceModule?}) — thin fetch wrapper + error
  messaging for 402 / 502 / no-panels responses.
- ReferenceInputPicker: tabs across Journal / Notes / Library
  (the three most content-dense sources), each tab with a live
  query + search + entry list. A click emits {module, entryId,
  label, sourceText} — label is the display name for the
  "Gequellt aus…" chip, sourceText is already decrypted (queries
  return plaintext). Library entries without a review are
  disabled (no text = nothing to render).
- StoryboardSuggester: four-step flow (pick-source →
  generating-plan → review-plan → rendering). Step 3 is the
  actual editor: every suggested row is editable (prompt,
  caption, dialogue) with a trash button; the quality + format
  toggles share the M3 batch styling. "Generate" calls
  runPanelGenerate() in parallel via Promise.allSettled with
  sourceInput={module, entryId} in panelMeta; all panels go
  through the identical M2 HTTP path.
- DetailView gains a third editor mode "ai" next to
  "single" and "batch" — a sparkle-button CTA opens the
  Suggester.
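The parallel batch generation can be sketched like this (the runPanelGenerate signature and panelMeta shape are assumptions; Promise.allSettled keeps one failed panel from aborting the rest):

```typescript
interface SourceInput {
  module: string;
  entryId: string;
}
type PanelResult = { status: "ok" | "failed"; index: number };

// Fire one generation per panel in parallel and record per-panel outcome.
export async function generateAll(
  prompts: string[],
  sourceInput: SourceInput,
  runPanelGenerate: (prompt: string, meta: { sourceInput: SourceInput }) => Promise<unknown>,
): Promise<PanelResult[]> {
  const settled = await Promise.allSettled(
    prompts.map((p) => runPanelGenerate(p, { sourceInput })),
  );
  return settled.map((s, index) => ({
    status: s.status === "fulfilled" ? "ok" : "failed",
    index,
  }));
}
```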
No writing-draft / calendar-event input in this round — drafts
need version-chain resolution, and events are usually too thin on
prose. Follow-up if desired (purely additive: tab + hook).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Server:
- New llmText() helper in apps/api/src/lib/llm.ts for plain-text
(non-streaming) completions with token-usage reporting.
- POST /api/v1/writing/generations (Hono + requireTier('beta'))
accepts system+user prompts, forwards to mana-llm (default model
ollama/gemma3:4b), returns raw output + model + tokenUsage. The
endpoint is stateless — draft/version bookkeeping is entirely
client-side so the same route serves refinement calls later.
Client:
- writing/api.ts — Bearer-authed fetch client (follows the food/
news-research pattern).
- writing/utils/prompt-builder.ts — pure builder turning a briefing
(+ optional style preset / extracted principles) into a system+user
pair. Forbids preamble / sign-off / meta commentary so the output is
ready to paste into a version.
- writing/stores/generations.svelte.ts — orchestrates the full flow:
queued → running → call → new LocalDraftVersion → pointer flip →
succeeded. On failure leaves the current version untouched with the
error on the generation record. Emits WritingDraftGenerationStarted /
WritingDraftVersionCreated / WritingDraftGenerationFailed events.
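The prompt builder's contract can be sketched as a pure function (the exact rule wording and briefing fields are assumptions; the real builder also merges style presets and extracted principles):

```typescript
interface Briefing {
  topic: string;
  audience?: string;
  tone?: string;
}

// Turns a briefing into a system+user pair. The system prompt forbids
// preamble, sign-off and meta commentary so the output can be pasted
// straight into a draft version.
export function buildPrompts(briefing: Briefing): { system: string; user: string } {
  const rules = [
    "Write only the requested text.",
    "No preamble, no sign-off, no meta commentary.",
  ];
  const system = `You are a writing assistant.\n${rules.join("\n")}`;
  const user = [
    `Topic: ${briefing.topic}`,
    briefing.audience ? `Audience: ${briefing.audience}` : null,
    briefing.tone ? `Tone: ${briefing.tone}` : null,
  ]
    .filter(Boolean)
    .join("\n");
  return { system, user };
}
```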
UI:
- Generate button in DetailView.svelte (label flips "Generate" / "Neu
generieren" based on whether the draft already has content).
- GenerationStatus.svelte strip surfaces queued / running / failed with
model + duration badges; succeeded generations auto-disappear because
the new version is already live via the currentVersionId pointer.
M3 is synchronous and non-streaming by design. M7 adds mission-based
long-form with streaming + outline stage + reference injection. M6 will
reuse the same /generations endpoint for selection-refinement prompts.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
M1 of docs/plans/wardrobe-module.md — pure data layer + backend plumbing,
zero UI (that's M2). A user can now hold a digital wardrobe per space:
brand merch, club jerseys, the family wardrobe, team costumes, a practice
dress code, and the personal closet all live as separate pools under the same
Dexie tables, space-scoped like tags/scenes/agents after Phase 2c.
Data model — two tables, no join:
- wardrobeGarments (Dexie v41): single clothing items / accessories.
Indexed on `category` + `createdAt` + `isArchived`. Encrypted:
name/brand/color/size/material/tags/notes. Plaintext: category,
mediaIds, counters, timestamps — all indexed or structural.
`mediaIds[0]` is the primary photo used for try-on; additional
ids are alternate views (back, detail) for M7.
- wardrobeOutfits (Dexie v41): named compositions referencing
garment ids. Encrypted: name/description/tags. Plaintext:
garmentIds (FK array), occasion (closed enum — useful for
undecrypted filtering), season, booleans, lastTryOn snapshot.
- picture.images gains `wardrobeOutfitId?: string | null` as a
plaintext back-reference. Try-on results land in the Picture
gallery like any other generation; the outfit detail view
queries them via this id rather than maintaining a third table.
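Because outfits reference garments by id with no join table, resolving an outfit is a plain map lookup. A sketch with assumed (simplified) shapes:

```typescript
interface Garment {
  id: string;
  mediaIds: string[]; // mediaIds[0] is the primary photo used for try-on
}
interface Outfit {
  id: string;
  garmentIds: string[]; // FK array, no join table
}

// Resolve an outfit's garments, skipping ids that no longer exist,
// and collect the primary photos the try-on step would use.
export function resolveOutfit(outfit: Outfit, garments: Map<string, Garment>) {
  const resolved = outfit.garmentIds
    .map((id) => garments.get(id))
    .filter((g): g is Garment => g !== undefined);
  return {
    garments: resolved,
    tryOnPhotoIds: resolved
      .map((g) => g.mediaIds[0])
      .filter((m): m is string => m !== undefined),
  };
}
```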
Space scope:
- `wardrobe` added to all five explicit allowlists in shared-types/
spaces.ts (personal is wildcard, no edit needed). Each space type
gets a one-line comment explaining the real-world use case.
- App registry: `wardrobe` entry in shared-branding/mana-apps.ts
with a rose→fuchsia gradient icon (T-shirt on hanger silhouette),
color #e11d48, tier 'beta', status 'beta'.
- Module registry: wardrobeModuleConfig imported + appended to
MODULE_CONFIGS so SYNC_APP_MAP picks it up automatically.
Backend:
- MAX_REFERENCE_IMAGES bumped 4 → 8 in picture/generate-with-reference
  (plus the client-side default in ReferenceImagePicker).
Justified with a comment: face + body + top + bottom + shoes +
outerwear + 2 accessories = 8. Cost doesn't scale with ref count
(OpenAI bills per output), so the bump is a pure capability
expansion with no credit-side risk.
- New POST /api/v1/wardrobe/garments/upload wraps uploadImageToMedia
with app='wardrobe'. Registered under /api/v1/wardrobe in index.ts.
Pattern 1:1 with the profile/me-images/upload endpoint; tier-gating
falls out of wardrobe NOT being in RESOURCE_MODULES (tier='guest'
works — consistent with picture's plain CRUD).
Stores emit domain events (WardrobeGarmentAdded, WardrobeOutfitCreated,
WardrobeOutfitTryOn, etc.) so later mana-ai missions can observe
activity without polling.
No UI in this commit. M2 (Garments-Grundlayer) wires the route + grid
+ upload-zone; M3 the Outfit composer; M4 the Try-On integration.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Enables the M1 parallel-reads optimisation on the webapp side. Both
consumers of runPlannerLoop pass an isParallelSafe predicate derived
from the tool catalog:
isParallelSafe: (name) =>
AI_TOOL_CATALOG_BY_NAME.get(name)?.defaultPolicy === 'auto'
Auto-policy tools (list_tasks, get_habits, nutrition_summary, …) run
via Promise.all in batches of 10 when the LLM fans them out in one
round. Propose-policy tools — which surface to the user as Proposal
cards — stay sequential so intent ordering in the inbox is preserved
and pre-execute guardrails can reason about prior-step state.
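A sketch of the batching described above, with assumed tool-call shapes (the real loop lives in runPlannerLoop):

```typescript
// Split items into batches of `size` (the loop uses size = 10).
export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Auto-policy (parallel-safe) calls run via Promise.all in batches;
// propose-policy calls stay strictly sequential so Proposal-card
// ordering and prior-step state are preserved.
export async function runToolCalls(
  calls: { name: string; run: () => Promise<unknown> }[],
  isParallelSafe: (name: string) => boolean,
): Promise<unknown[]> {
  const results: unknown[] = [];
  const safe = calls.filter((c) => isParallelSafe(c.name));
  const unsafe = calls.filter((c) => !isParallelSafe(c.name));
  for (const batch of chunk(safe, 10)) {
    results.push(...(await Promise.all(batch.map((c) => c.run()))));
  }
  for (const call of unsafe) {
    results.push(await call.run()); // one at a time, in order
  }
  return results;
}
```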
Tests: 31 existing companion + mission tests pass unchanged; the
parallel path is exercised via the new loop.test.ts cases shipped
with the M1 commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
M1 of docs/plans/me-images-and-reference-generation.md — a user-owned
pool of reference images (face, fullbody, hands, …) that will back
image generation where the user appears as themselves (outfit try-on,
glasses, portraits) via OpenAI /v1/images/edits. Data layer only in
this commit; UI lands in M2, the edits endpoint in M3.
- Dexie v38: meImages table with id/kind/primaryFor/createdAt indices.
Added to USER_LEVEL_TABLES so the hook stamps userId and skips the
spaceId/authorId/visibility trio (one human = one face across every
Space, not per-Space).
- Encryption registry: label + tags encrypted; kind/primaryFor/usage
stay plaintext because they drive the indexed queries and the
Reference picker's filtering. mediaId/URLs/dimensions are structural.
- Profile module store: createMeImage, updateMeImage,
  setAiReferenceEnabled (per-image AI opt-in — plan decision #5),
setPrimary (transactional slot swap — only one row per primary slot),
deleteMeImage. Emits MeImage* domain events.
- Queries: useAllMeImages, useMeImagesByKind, useReferenceImages
  (only the rows the user opted in for AI use), useImageByPrimary.
- POST /api/v1/profile/me-images/upload: thin wrapper over mana-media
with app='me' as the reference tag. No new MinIO bucket — plan
decision #1 revised after verifying mana-media uses one bucket and
only tags references by app.
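The transactional slot swap in setPrimary reduces to "clear the old holder of the slot, set the new one". A pure sketch of the update computation (field names assumed from the indices above; the real store wraps this in a Dexie transaction):

```typescript
interface MeImage {
  id: string;
  primaryFor: string | null; // e.g. "face", "fullbody", or null
}

// Compute the row updates a setPrimary(imageId, slot) call applies:
// afterwards at most one row has primaryFor === slot.
export function primarySwapUpdates(
  images: MeImage[],
  imageId: string,
  slot: string,
): { id: string; primaryFor: string | null }[] {
  const updates: { id: string; primaryFor: string | null }[] = [];
  for (const img of images) {
    if (img.primaryFor === slot && img.id !== imageId) {
      updates.push({ id: img.id, primaryFor: null }); // demote old holder
    }
    if (img.id === imageId && img.primaryFor !== slot) {
      updates.push({ id: img.id, primaryFor: slot }); // promote new holder
    }
  }
  return updates;
}
```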
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pocket-style module for saving arbitrary web URLs, extracting readable
content server-side via @mana/shared-rss (Readability + JSDOM), and
storing it AES-GCM encrypted in IndexedDB for offline reading.
M1 skeleton: Dexie v33 (articles, articleHighlights, articleTags),
crypto registry entries, module registration, app-registry entry with
orange icon, empty-state ListView. articleTags is a pure junction
into the existing globalTags system (appId 'tags') — same pattern as
noteTags, eventTags, placeTags.
M2 URL save + reader: POST /api/v1/articles/extract (one endpoint,
not two — client caches the preview payload to avoid a double
server fetch). AddUrlForm with scope-aware dedupe, DetailView with
ReaderView typography shell (serif/sans, light/sepia/dark, size
slider), auto-tracked reading progress with scroll restore.
M3 highlights: TreeWalker-based plain-text offset resolution
(lib/offsets.ts), highlights store, floating HighlightMenu with
create + edit modes, HighlightLayer orchestrator that wraps/unwraps
highlight spans whenever highlights or htmlVersion changes. Four
colours (yellow/green/blue/pink), optional notes, click-to-edit,
dark-mode-aware overlay colours.
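The offset resolution in lib/offsets.ts walks text nodes and maps a plain-text offset to a (node, local offset) pair; here is the core arithmetic, sketched over an array of text-node lengths (the real code iterates a DOM TreeWalker instead):

```typescript
// Given text-node lengths in document order, map a global plain-text
// offset to the node index and the offset within that node.
export function locateOffset(
  nodeLengths: number[],
  globalOffset: number,
): { node: number; offset: number } | null {
  let consumed = 0;
  for (let node = 0; node < nodeLengths.length; node++) {
    const len = nodeLengths[node];
    if (globalOffset <= consumed + len) {
      return { node, offset: globalOffset - consumed };
    }
    consumed += len;
  }
  return null; // offset beyond the document's total text length
}
```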
Drive-by: removed stale 'pendingProposals' entry from the plaintext
allowlist — the table was dropped in Dexie v29 and the allowlist
audit was flagging it as a dead entry.
Plan: docs/plans/articles-module.md. M4 (tags + filter + progress),
M5 (news:type='saved' migration), M6 (AI tools), M7 (share target),
M8 (highlights view + stats) still open.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The JWT already carried a `tier` claim but nothing on the server read it
— AuthGate enforcement was client-only, so a valid JWT could hit paid
LLM/research endpoints regardless of the user's access tier.
- shared-hono authMiddleware now extracts `tier` into `c.userTier`,
defaulting unknown/missing claims to `public` (never silently grants
higher access).
- New `requireTier(minTier)` middleware + `hasTier`/`getTierLevel`
helpers. Tier hierarchy (guest < public < beta < alpha < founder) is
mirrored locally to avoid pulling the Svelte-facing shared-branding
package into Bun services.
- Applied `requireTier('beta')` as defense-in-depth on resource-heavy
apps/api modules (chat, context, food, guides, news-research, picture,
plants, research, traces, who) and the MCP endpoint. Pure CRUD modules
stay auth-only — access there is gated by ownership, not tier.
- DEV_BYPASS_AUTH now injects `userTier` (defaults to founder, override
via DEV_USER_TIER).
- Authentication guideline documents the pattern + test suite covers
hierarchy, passes-at-minimum, and rejection paths.
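A minimal sketch of the mirrored hierarchy and checks (exact helper signatures are assumptions; the requireTier middleware wraps hasTier in a Hono handler):

```typescript
// guest < public < beta < alpha < founder
const TIERS = ["guest", "public", "beta", "alpha", "founder"] as const;
type Tier = (typeof TIERS)[number];

export function getTierLevel(tier: Tier): number {
  return TIERS.indexOf(tier);
}

// Claim extraction: unknown/missing claims fall back to 'public',
// per the middleware behavior described above.
export function tierFromClaim(claim: unknown): Tier {
  return TIERS.includes(claim as Tier) ? (claim as Tier) : "public";
}

export function hasTier(userTier: Tier, minTier: Tier): boolean {
  return getTierLevel(userTier) >= getTierLevel(minTier);
}
```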
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Weather data is public — no user-specific data involved. Move the
wetter route registration above authMiddleware() so requests don't
require a JWT. Rate limiting still applies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New module providing weather data for the DACH region via three sources:
- Open-Meteo (DWD ICON-D2 model) for current conditions and 7-day forecast
- DWD warnings endpoint for severe weather alerts
- Rainbow.ai / Open-Meteo fallback for minute-level rain nowcast
Includes API proxy with in-memory caching, Svelte 5 UI with location
picker, hourly/daily forecast, alert cards, and precipitation bar chart.
Two AI tools (get_weather, get_rain_forecast) enable the companion to
answer weather questions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Mount an MCP (Model Context Protocol) server at /api/v1/mcp in the
unified Hono API. External clients like Claude Desktop, Cursor, and
VS Code Copilot can discover and call all 29 Mana tools via the
standard MCP protocol.
Architecture:
- WebStandardStreamableHTTPServerTransport for Bun/Hono compatibility
- AI_TOOL_CATALOG → MCP tool definitions with JSON Schema (via Zod)
- Stateful sessions with Mcp-Session-Id header
- Auth via existing authMiddleware (JWT or API key)
Phase 1 scope: tools/list returns all 29 tools with schemas,
tools/call acknowledges with descriptive messages. Phase 2 will add
actual DB reads/writes via sync_changes.
Usage:
Claude Desktop config:
{"mcpServers": {"mana": {"url": "http://localhost:3060/api/v1/mcp"}}}
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New sibling module to news/. Discovers topic-matched RSS feeds via
SearXNG (mana-search) or rel="alternate" probing of a site URL,
filters articles by keyword with a recency + title-match boost,
and exports the top hits as a markdown context block for the AI.
- API: /api/v1/news-research/{discover,validate,search,extract}
- Frontend: /news-research route + workbench ListView (compact card)
- Tool: research_news LLM tool (read-only, runs auto)
- Pin feeds → newsPreferences.customFeeds (encrypted) — covers the
long-missing custom-RSS subscription gap; reading-list saves still
go through articlesStore.saveFromUrl into the existing newsArticles
- shared-branding: new news-research entry + binoculars icon
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Server side of the who module. Three endpoints under /api/v1/who/*:
POST /chat
Hot path. Body: { gameId, characterId, message, history[] }.
Looks up character by id (server-side only — clients never see
personalities), builds a system prompt instructing the LLM to
roleplay the figure WITHOUT revealing its name and to append
[IDENTITY_REVEALED] when the player has guessed correctly,
forwards to mana-llm. Response: { reply, identityRevealed,
characterName? } — characterName only present on win.
Same credit pattern as chat module: validateCredits + consume
after the LLM call succeeds. Operation 'AI_WHO', cheap (0.1
credit) for local models, 5 for cloud.
POST /random
Picks a random character from a deck and returns just the id +
category + difficulty. Frontend uses this to start a new game
without ever knowing the personality pool. Server-side
randomness so a determined attacker can't predict picks.
POST /guess
Explicit "I think it's X" submission. Fallback path for when
the LLM forgets to emit the sentinel even though the player
clearly said the right name. Deterministic lowercase substring
match against the canonical name (with diacritic stripping +
last-name-only matching for unambiguous figures like "Tesla").
GET /decks
Public deck catalogue with counts and category labels. Zero
sensitive data — never leaks names or personalities. Used by
the picker UI on mount.
data/characters.ts holds 37 characters: the original 26 from
whopixels verbatim + 11 new for the antiquity / women / inventors
decks. Each entry is in one or more decks via a `decks` array, so
e.g. Marie Curie shows up in both `historical` and `women`. Adding
a new character is one entry.
The system prompt is the carefully-tested German prompt from the
original whopixels server.js — tells the LLM to respond in the
language the user writes, give subtle hints, never directly say
"I am X", and emit the sentinel only on a correct guess.
The explicit-guess matcher catches three patterns:
1. Exact normalized match ("Marie Curie" === "marie curie")
2. Last-name-only ("Curie" matches "Marie Curie")
3. Guess-contains-name ("I think it's Marie Curie" → contains)
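The three patterns reduce to one normalization pass plus two containment checks. A sketch (diacritic stripping via Unicode NFD; the real matcher restricts last-name-only matching to unambiguous figures):

```typescript
// Lowercase, trim, and strip diacritics so accented variants match.
function normalize(s: string): string {
  return s.normalize("NFD").replace(/[\u0300-\u036f]/g, "").toLowerCase().trim();
}

// 1. exact normalized match, 2. last-name-only, 3. guess contains the name.
export function matchesGuess(guess: string, canonicalName: string): boolean {
  const g = normalize(guess);
  const name = normalize(canonicalName);
  if (g === name) return true;
  const lastName = name.split(" ").pop()!;
  if (g === lastName) return true;
  return g.includes(name);
}
```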
Closes Phase A.1 of docs/WHO_MODULE.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
End-to-end deep-research feature for the questions module: a
fire-and-forget orchestrator in apps/api that plans sub-queries with mana-llm,
retrieves sources via mana-search (with optional Readability extraction),
and streams a structured synthesis back to the web app over SSE.
Backend (apps/api/src/modules/research):
- schema.ts: pgSchema('research') with research_results + sources
- orchestrator.ts: three-phase pipeline (plan / retrieve / synthesise)
with depth-aware config (quick=1×, standard=3×, deep=6× sub-queries)
- pubsub.ts: in-process event bus, single-node, swappable for Redis
- routes.ts: POST /start (202, fire-and-forget), GET /:id/stream (SSE),
POST /start-sync (test only), GET /:id, GET /:id/sources
- Credit gating via @mana/shared-hono/credits — validate up-front,
consume best-effort on `done`. Failed runs cost nothing.
Helpers (apps/api/src/lib):
- llm.ts: llmJson() + llmStream() over mana-llm OpenAI-compat API
- search.ts: webSearch() + bulkExtract() over mana-search Go service
- responses.ts: shared errorResponse / listResponse / validationError
Schema deployment:
- drizzle.config.ts (research-scoped) + drizzle/research/0000_init.sql
hand-authored migration, deployable via psql -f or drizzle-kit push.
- drizzle-kit added as devDep with db:generate / db:push scripts.
Web client (apps/mana/apps/web/src/lib/api/research.ts):
- Typed start() / get() / listSources() / streamProgress(). The stream
uses fetch + ReadableStream (not EventSource) so we can attach the
JWT via Authorization header. Special-cases 402 for friendly toast.
- New PUBLIC_MANA_API_URL plumbing in hooks.server.ts + config.ts.
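Reading SSE over fetch means splitting the byte stream into `data:` events manually, since EventSource cannot send an Authorization header. A minimal incremental parser sketch (the real client pipes response.body through TextDecoder into something like this):

```typescript
// Incremental SSE parser: feed decoded chunks, get back the payloads
// of complete events (events are separated by a blank line).
export function createSseParser() {
  let buffer = "";
  return function feed(chunk: string): string[] {
    buffer += chunk;
    const events: string[] = [];
    let sep: number;
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const raw = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      const data = raw
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trimStart())
        .join("\n");
      if (data) events.push(data);
    }
    return events;
  };
}
```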
Module store (modules/questions/stores/answers.svelte.ts):
- New write-side store with createManual / startResearch / accept /
softDelete. startResearch creates an optimistic empty answer, opens
the SSE stream, debounces token deltas in 100ms batches into the
encrypted local row, and on `done` replaces the streamed text with
the parsed { summary, keyPoints, followUps } payload + citations
resolved against research.sources.id.
Citation rendering (modules/questions/components/AnswerCitations.svelte):
- Tokenises [n] markers in the answer body into clickable pills with
hover popovers showing title / host / snippet / external link.
- Lazy-loaded via a session-scoped source cache (stores/sources.svelte.ts)
that deduplicates concurrent fetches.
UI (routes/(app)/questions/[id]/+page.svelte):
- Recherche card with three-state button (start / cancel / re-run),
animated phase indicator, source counter.
- Confirmation dialog warning about web/LLM transmission since the
question itself is locally encrypted.
- Toasts for success / error / cancel via @mana/shared-ui/toast.
- Re-run flow soft-deletes prior research-driven answers but keeps
manual ones intact.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename the music module from "Mukke" to "Music" across the entire
codebase: API routes, web app module, shared packages, search provider,
dashboard widgets, i18n keys, app registry, and route paths.
Add POST /api/v1/music/cover/upload endpoint that uploads cover art
images through mana-media for deduplication, thumbnails, and Photos
gallery visibility.
Dexie table names (mukkePlaylists, mukkeProjects) kept unchanged to
preserve existing IndexedDB data.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Complete consolidation of all 15 app servers into one Hono/Bun process.
Modules added: chat, context, picture, storage, todo, planta, nutriphi,
guides, moodlit, news, traces, presi
Total: 15 modules, one server, one port (3050), ~2400 LOC.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New consolidated Hono/Bun API server at apps/api/ that replaces individual
app servers. One process, one port, one auth middleware, one container.
Modules ported:
- calendar: RRULE expansion, ICS import, Google Calendar (stub)
- contacts: avatar upload (S3), vCard import/parsing
- mukke: audio upload/download presigned URLs, batch cover art
Architecture: each module registers routes under /api/v1/{module}/*
using the shared-hono middleware stack (auth, rate limit, error handler).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>