feat(notes): isSpaceContext flag replaces kontext module (Option B)

Retire the kontext module entirely; the per-Space standing-context
document is now a regular Note flagged with `isSpaceContext: true`.
Daily use ("URL → Notiz") moves to the notes module as a first-class
action; the same primitive is reused by the (planned) Brand/Firma-Space
onboarding wizard to seed a Space-context Note from a URL.

Why: kontext was inconsistent — its UI was a URL-crawler that wrote
to userContext.freeform (profile module), while its kontextDoc table
+ AI-Mission-Runner auto-injection was a write-only shell with no
real editor. One concept (Notes) now carries both ad-hoc note-taking and
the Space context, with a mutex invariant (at most one flagged Note per Space).

Notes module:
- types: add `isSpaceContext?: boolean` to LocalNote + Note
- queries: add `useSpaceContextNote()` (the active Space's flagged note)
- store: `markAsSpaceContext(id | null)` with mutex sweep across Space
- ListView: "Aus URL importieren" inline form (URL + crawl-mode +
  KI-Zusammenfassung toggle); "Als Space-Kontext markieren" /
  "Space-Kontext lösen" context-menu item; ★ badge on flagged notes
- new api.ts: `crawlUrl()` client for POST /api/v1/notes/import-url
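
The mutex sweep behind `markAsSpaceContext` can be sketched as pure logic; the real store mutates Dexie rows, and every name here except `isSpaceContext` is an assumption:

```typescript
// Hypothetical shapes; only `isSpaceContext` is taken from the change itself.
interface LocalNote {
  id: string;
  spaceId: string;
  title: string;
  isSpaceContext?: boolean;
}

// Mutex sweep: flag `id` (or clear with null) so that at most one Note
// per Space carries `isSpaceContext: true`.
function markAsSpaceContext(
  notes: LocalNote[],
  spaceId: string,
  id: string | null
): LocalNote[] {
  return notes.map((n) => {
    if (n.spaceId !== spaceId) return n; // other Spaces untouched
    const flagged = id !== null && n.id === id;
    // Rewrite only when the flag actually changes, to avoid no-op writes.
    if ((n.isSpaceContext ?? false) === flagged) return n;
    return { ...n, isSpaceContext: flagged };
  });
}
```

Calling it with `id: null` is the "Space-Kontext lösen" case: every note in the Space loses the flag.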

Notes API (apps/api):
- new modules/notes/routes.ts with /import-url (ported from kontext;
  same crawler + LLM summary pipeline, NOTES_IMPORT_URL credit op)
- mount at /api/v1/notes; add 'notes' to RESOURCE_MODULES (beta+ tier)
- delete modules/context (UI-less /ai/generate + /ai/estimate had no
  consumers; /import-url moved to notes)
- packages/credits: rename AI_CONTEXT_GENERATION → NOTES_IMPORT_URL
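
As a hedged sketch of the contract the client sees: the request/response field names and the `mode` values are assumptions, while the credit costs (1 for a plain crawl, 5 with summary) and the `NOTES_IMPORT_URL` op come from the route itself.

```typescript
// POST /api/v1/notes/import-url, as assumed by the crawlUrl() client.
interface ImportUrlRequest {
  url: string;
  mode: 'single' | 'deep'; // crawl-mode toggle in the ListView form (assumed values)
  summarize: boolean;      // "KI-Zusammenfassung" toggle
}

interface ImportUrlResponse {
  title: string;   // becomes the new Note's title
  content: string; // markdown body of the new Note
}

// Credit cost mirrors the route: plain crawl is cheap, LLM summary is not.
function importUrlCreditCost(req: Pick<ImportUrlRequest, 'summarize'>): number {
  return req.summarize ? 5 : 1;
}
```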

AI Mission Runner:
- default-resolvers: drop kontextResolver + kontextIndexer; the
  notesIndexer flags `isSpaceContext` notes with "★ " prefix and
  bubbles them to the top of the picker
- writing reference-resolver: `kind: 'kontext'` now reads the flagged
  Note via scope-scan instead of the kontextDoc table; tests updated
- writing ReferencePicker: useSpaceContextNote replaces useKontextDoc
- AiDebugBlock + MissionGrantDialog + ai-missions ListView: drop
  'kontextDoc' from ENCRYPTED_SERVER_TABLES set
- ai-agents ListView: drop 'kontext' from POLICY_MODULES
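
A minimal sketch of the picker behaviour described above, assuming a simplified entry shape: flagged notes get the "★ " prefix and bubble to the top (the stable sort keeps the rest in their original order).

```typescript
interface PickerNote {
  title: string;
  isSpaceContext?: boolean;
}

// Space-context notes first, each prefixed with "★ ".
function toPickerEntries(notes: PickerNote[]): string[] {
  return [...notes]
    .sort(
      (a, b) =>
        Number(b.isSpaceContext ?? false) - Number(a.isSpaceContext ?? false)
    )
    .map((n) => (n.isSpaceContext ? `★ ${n.title}` : n.title));
}
```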

Profile module:
- ContextFreeform.svelte: switch import from kontext/api to notes/api
  (the URL-crawl is the same primitive; it still writes to
  userContext.freeform — only the import path changed)

Dexie:
- v58: notes index gains `isSpaceContext`; kontextDoc table dropped
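
The v58 migration might look like the following sketch; the notes index list apart from `isSpaceContext` is an assumption, and in Dexie assigning `null` to a table name in `stores()` drops that table.

```typescript
import Dexie from 'dexie';

const db = new Dexie('mana'); // database name is an assumption

db.version(58).stores({
  notes: 'id, spaceId, isSpaceContext', // gains the isSpaceContext index
  kontextDoc: null,                     // table dropped
});
```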

Kontext module deletion:
- delete apps/mana/apps/web/src/lib/modules/kontext/ entirely
- delete (app)/kontext/ route
- drop registerApp + Scroll icon from app-registry/apps.ts
- drop kontext entry from help-content
- drop kontextModuleConfig from data/module-registry.ts
- drop kontextDoc from crypto registry

mana-auth:
- bootstrap-singletons: drop bootstrapSpaceSingletons function entirely
  (kontextDoc was the only per-Space singleton); userContext bootstrap
  unchanged
- better-auth.config: drop kontextDoc bootstrap call from personal-space
  hook + organizationHooks.afterCreateOrganization
- me-bootstrap: drop per-space bootstrap loop; response shape kept
  (always-empty `spaces: {}`) for backwards-compat with older clients
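
A sketch of the kept response shape, with all field names besides `spaces` assumed: older clients can keep destructuring `spaces`, it just never has entries anymore.

```typescript
interface MeBootstrapResponse {
  userContext: unknown;          // still bootstrapped as before
  spaces: Record<string, never>; // always {} after this change
}

function meBootstrapResponse(userContext: unknown): MeBootstrapResponse {
  return { userContext, spaces: {} };
}
```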

Note: the still-existing legacy `context` module (CMS-style docs/spaces,
unrelated to kontext) is left in place; its cleanup landed on the
articles-bulk-import branch and is out of scope for this PR.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Author: Till JS
Date: 2026-04-29 00:06:34 +02:00
parent 054b9e5beb
commit 8fbdc6db77
37 changed files with 496 additions and 983 deletions


@@ -28,7 +28,7 @@ import { calendarRoutes } from './modules/calendar/routes';
 import { contactsRoutes } from './modules/contacts/routes';
 import { musicRoutes } from './modules/music/routes';
 import { chatRoutes } from './modules/chat/routes';
-import { contextRoutes } from './modules/context/routes';
+import { notesRoutes } from './modules/notes/routes';
 import { pictureRoutes } from './modules/picture/routes';
 import { profileRoutes } from './modules/profile/routes';
 import { wardrobeRoutes } from './modules/wardrobe/routes';
@@ -94,10 +94,10 @@ app.use('/api/*', authMiddleware());
 // their own records.
 const RESOURCE_MODULES = [
   'chat',
-  'context',
   'food',
   'guides',
   'news-research',
+  'notes',
   'picture',
   'plants',
   'research',
@@ -121,7 +121,7 @@ app.route('/api/v1/calendar', calendarRoutes);
 app.route('/api/v1/contacts', contactsRoutes);
 app.route('/api/v1/music', musicRoutes);
 app.route('/api/v1/chat', chatRoutes);
-app.route('/api/v1/context', contextRoutes);
+app.route('/api/v1/notes', notesRoutes);
 app.route('/api/v1/picture', pictureRoutes);
 app.route('/api/v1/profile', profileRoutes);
 app.route('/api/v1/wardrobe', wardrobeRoutes);


@@ -1,8 +1,11 @@
 /**
- * Context module AI text generation + token estimation
- * Ported from apps/context/apps/server
+ * Notes module server-side helpers.
  *
- * CRUD for spaces/documents handled by mana-sync.
+ * Today: a single `POST /import-url` endpoint that crawls a URL via
+ * mana-crawler and optionally summarises the result with mana-llm. The
+ * client treats the response as the body of a new Note (title +
+ * markdown content). The same endpoint is reused by the (planned)
+ * Brand/Firma-Space onboarding wizard to seed the Space-context note.
  */
 import { Hono } from 'hono';
@@ -16,8 +19,6 @@ const DEFAULT_SUMMARY_MODEL = MANA_LLM.FAST_TEXT;
 const routes = new Hono<{ Variables: AuthVariables }>();

-// ─── URL Import (crawler → optional LLM summary → document) ──
-
 const DEEP_MAX_PAGES = 20;
 const CRAWL_POLL_INTERVAL_MS = 1500;
 const CRAWL_TIMEOUT_MS = 90_000;
@@ -25,20 +26,16 @@ const CRAWL_TIMEOUT_MS = 90_000;
 /**
  * Local LLMs love to wrap Markdown in ```markdown fences or prepend
  * a "Hier ist die Zusammenfassung:" preamble. Strip those so the
- * output renders correctly when dropped into the Kontext document.
+ * output renders correctly when dropped into a Note body.
  */
 function sanitizeSummary(raw: string): string {
   let s = raw.trim();
   // Strip a leading ```markdown / ```md / ``` fence and its closing ```.
   const fenceMatch = s.match(/^```(?:markdown|md)?\s*\n([\s\S]*?)\n?```\s*$/i);
   if (fenceMatch) s = fenceMatch[1].trim();
   // Drop a single-line preamble that ends with a colon (LLM chatter).
   const lines = s.split('\n');
   if (lines.length > 2 && /^[^#\n].{0,80}:\s*$/.test(lines[0].trim())) {
     s = lines.slice(1).join('\n').trim();
   }
-  // Demote a solitary leading H1 to H2 so it doesn't clash with our
-  // section header that the frontend prepends.
-  s = s.replace(/^#\s+/, '## ');
   return s;
 }
@@ -73,7 +70,7 @@ routes.post('/import-url', async (c) => {
   }

   const creditCost = summarize ? 5 : 1;
-  const validation = await validateCredits(userId, 'AI_CONTEXT_IMPORT_URL', creditCost);
+  const validation = await validateCredits(userId, 'NOTES_IMPORT_URL', creditCost);
   if (!validation.hasCredits) {
     return c.json(
       {
@@ -147,7 +144,7 @@ routes.post('/import-url', async (c) => {
         {
           role: 'system',
           content:
-            'Du bist ein Assistent, der Web-Inhalte in strukturierte Kontext-Dokumente zusammenfasst. ' +
+            'Du bist ein Assistent, der Web-Inhalte in strukturierte Notiz-Dokumente zusammenfasst. ' +
             'Antworte ausschließlich in sauberem Markdown. Gliedere in H2-Abschnitte: ' +
             '"## Überblick", "## Kernaussagen", "## Details". Nutze die Sprache der Quelle. ' +
             'Schreibe die Antwort direkt, ohne Einleitung ("Hier ist…"), ohne Schlussformel, ' +
@@ -175,7 +172,7 @@ routes.post('/import-url', async (c) => {

     await consumeCredits(
       userId,
-      'AI_CONTEXT_IMPORT_URL',
+      'NOTES_IMPORT_URL',
       creditCost,
       `URL import (${mode}${summarize ? ' + summary' : ''})`
     );
@@ -194,74 +191,4 @@ routes.post('/import-url', async (c) => {
     );
   }
 });
-
-// ─── AI Generation (server-only: mana-llm) ──────────────────
-
-routes.post('/ai/generate', async (c) => {
-  const userId = c.get('userId');
-  const { prompt, documents, model, maxTokens } = await c.req.json();
-  if (!prompt) return c.json({ error: 'prompt required' }, 400);
-
-  // Validate credits
-  const validation = await validateCredits(userId, 'AI_CONTEXT_GENERATE', 5);
-  if (!validation.hasCredits) {
-    return c.json(
-      { error: 'Insufficient credits', required: 5, available: validation.availableCredits },
-      402
-    );
-  }
-
-  try {
-    // Build messages with document context
-    const messages: Array<{ role: string; content: string }> = [];
-    if (documents?.length) {
-      const contextText = documents
-        .map((d: { title: string; content: string }) => `--- ${d.title} ---\n${d.content}`)
-        .join('\n\n');
-      messages.push({
-        role: 'system',
-        content: `Verwende diese Dokumente als Kontext:\n\n${contextText}`,
-      });
-    }
-    messages.push({ role: 'user', content: prompt });
-
-    const res = await fetch(`${LLM_URL}/api/v1/chat/completions`, {
-      method: 'POST',
-      headers: { 'Content-Type': 'application/json' },
-      body: JSON.stringify({
-        messages,
-        model: model || MANA_LLM.FAST_TEXT,
-        max_tokens: maxTokens || 2000,
-      }),
-    });
-    if (!res.ok) return c.json({ error: 'AI generation failed' }, 502);
-
-    const data = await res.json();
-    const content = data.choices?.[0]?.message?.content || '';
-    const tokensUsed = data.usage?.total_tokens || 0;
-
-    // Consume credits
-    await consumeCredits(userId, 'AI_CONTEXT_GENERATE', 5, `AI generation (${tokensUsed} tokens)`);
-
-    return c.json({ content, tokensUsed, model: model || MANA_LLM.FAST_TEXT });
-  } catch (_err) {
-    return c.json({ error: 'Generation failed' }, 500);
-  }
-});
-
-routes.post('/ai/estimate', async (c) => {
-  const { prompt, documents } = await c.req.json();
-  const charCount =
-    (prompt?.length || 0) +
-    (documents || []).reduce(
-      (sum: number, d: { content: string }) => sum + (d.content?.length || 0),
-      0
-    );
-  const estimatedTokens = Math.ceil(charCount / 4);
-  return c.json({ estimatedTokens, estimatedCost: 5 });
-});
-
-export { routes as contextRoutes };
+export { routes as notesRoutes };