feat(brain): add Companion Chat module with LLM tool calling

Phase 5 of the Companion Brain. Introduces the Companion Chat that
ties together all previous phases into a conversational interface.

Module (modules/companion/):
- types.ts: LocalConversation + LocalMessage with tool call/result fields
- collections.ts: companionConversations + companionMessages tables
- stores/chat.svelte.ts: conversation + message CRUD
- queries.ts: reactive useConversations() + useMessages()
- engine.ts: chat orchestration — builds system prompt from Context
  Document, sends to local LLM (Gemma via @mana/local-llm), handles
  tool calls via JSON extraction + executeTool(), supports multi-round
  tool calling (max 3 rounds)
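The JSON-extraction approach mentioned above can be sketched standalone; this mirrors the fenced ```tool protocol that engine.ts parses (a simplified illustration, not the exact implementation):

```typescript
// Minimal sketch of the ```tool fence extraction used for simulated
// function calling with a model that has no native tool support.
const TOOL_BLOCK = /```tool\s*\n?([\s\S]*?)\n?```/;

function parseToolBlock(
  text: string
): { name: string; params: Record<string, unknown> } | null {
  const m = text.match(TOOL_BLOCK);
  if (!m) return null;
  try {
    const parsed = JSON.parse(m[1]);
    return parsed.name ? { name: parsed.name, params: parsed.params ?? {} } : null;
  } catch {
    return null; // malformed JSON → treat as plain text
  }
}

// A model reply containing a tool call:
const reply = 'Klar!\n```tool\n{"name": "log_drink", "params": {"amount": 250}}\n```';
const call = parseToolBlock(reply);
// call → { name: 'log_drink', params: { amount: 250 } }
```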

UI:
- CompanionChat.svelte: message list, streaming output, tool result
  display, keyboard submit (Enter)
- /companion route: sidebar with conversation list + chat area

Also updates the architecture plan with Phase 1-4 completion status.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Till JS, 2026-04-13 21:23:38 +02:00
Commit: 46db527f8c (parent: 1e992d3c92)
9 changed files with 1100 additions and 113 deletions

modules/companion/collections.ts
@@ -0,0 +1,5 @@
import { db } from '$lib/data/database';
import type { LocalConversation, LocalMessage } from './types';
export const conversationTable = db.table<LocalConversation>('companionConversations');
export const messageTable = db.table<LocalMessage>('companionMessages');

modules/companion/components/CompanionChat.svelte
@@ -0,0 +1,337 @@
<!--
CompanionChat — Chat interface for the Mana Companion Brain.
Displays conversation messages, handles user input, streams LLM
responses, and shows tool execution results inline.
-->
<script lang="ts">
import { tick } from 'svelte';
import { PaperPlaneRight, Robot, User, Lightning, CircleNotch } from '@mana/shared-icons';
import { chatStore } from '../stores/chat.svelte';
import { runCompanionChat } from '../engine';
import { useMessages } from '../queries';
import { useDaySnapshot } from '$lib/data/projections/day-snapshot';
import { useStreaks } from '$lib/data/projections/streaks';
import type { LocalConversation, LocalMessage } from '../types';
interface Props {
conversation: LocalConversation;
}
let { conversation }: Props = $props();
const messages = useMessages(conversation.id);
const day = useDaySnapshot();
const streaks = useStreaks();
let inputText = $state('');
let sending = $state(false);
let streamingText = $state('');
let messagesEndEl = $state<HTMLDivElement | null>(null);
async function scrollToBottom() {
await tick();
messagesEndEl?.scrollIntoView({ behavior: 'smooth' });
}
$effect(() => {
if (messages.value.length > 0) scrollToBottom();
});
async function handleSend() {
const text = inputText.trim();
if (!text || sending) return;
inputText = '';
sending = true;
streamingText = '';
// Add user message
await chatStore.addMessage(conversation.id, 'user', text);
await scrollToBottom();
try {
const result = await runCompanionChat(
text,
messages.value,
day.value,
streaks.value,
(token) => {
streamingText += token;
}
);
// Add tool results as separate messages
for (const tc of result.toolCalls) {
await chatStore.addMessage(conversation.id, 'assistant', '', {
toolCall: { name: tc.name, params: tc.params },
});
await chatStore.addMessage(conversation.id, 'tool_result', tc.result.message, {
toolResult: tc.result,
});
}
// Add final assistant message
if (result.content) {
await chatStore.addMessage(conversation.id, 'assistant', result.content);
}
} catch (err) {
const msg = err instanceof Error ? err.message : 'Fehler bei der Verarbeitung';
await chatStore.addMessage(conversation.id, 'assistant', `Fehler: ${msg}`);
} finally {
sending = false;
streamingText = '';
await scrollToBottom();
}
}
function handleKeydown(e: KeyboardEvent) {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
handleSend();
}
}
</script>
<div class="companion-chat">
<div class="messages">
{#each messages.value as msg (msg.id)}
<div
class="message"
class:user={msg.role === 'user'}
class:assistant={msg.role === 'assistant'}
class:tool={msg.role === 'tool_result'}
>
<div class="message-icon">
{#if msg.role === 'user'}
<User size={16} weight="bold" />
{:else if msg.role === 'tool_result'}
<Lightning size={16} weight="bold" />
{:else}
<Robot size={16} weight="bold" />
{/if}
</div>
<div class="message-content">
{#if msg.toolCall}
<span class="tool-badge">{msg.toolCall.name}</span>
{/if}
{#if msg.toolResult}
<span
class="tool-result"
class:success={msg.toolResult.success}
class:error={!msg.toolResult.success}
>
{msg.content}
</span>
{:else}
{msg.content}
{/if}
</div>
</div>
{/each}
{#if sending && streamingText}
<div class="message assistant">
<div class="message-icon">
<Robot size={16} weight="bold" />
</div>
<div class="message-content streaming">{streamingText}</div>
</div>
{/if}
<div bind:this={messagesEndEl}></div>
</div>
<div class="input-area">
<textarea
class="chat-input"
bind:value={inputText}
onkeydown={handleKeydown}
placeholder="Nachricht an Companion..."
disabled={sending}
rows={1}
></textarea>
<button class="send-btn" onclick={handleSend} disabled={sending || !inputText.trim()}>
{#if sending}
<CircleNotch size={18} weight="bold" />
{:else}
<PaperPlaneRight size={18} weight="bold" />
{/if}
</button>
</div>
</div>
<style>
.companion-chat {
display: flex;
flex-direction: column;
height: 100%;
max-height: calc(100dvh - var(--bottom-chrome-height, 80px) - 6rem);
}
.messages {
flex: 1;
overflow-y: auto;
padding: 1rem;
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.message {
display: flex;
gap: 0.5rem;
max-width: 85%;
}
.message.user {
align-self: flex-end;
flex-direction: row-reverse;
}
.message.assistant {
align-self: flex-start;
}
.message.tool {
align-self: flex-start;
font-size: 0.8125rem;
}
.message-icon {
flex-shrink: 0;
width: 28px;
height: 28px;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
color: hsl(var(--color-muted-foreground));
background: hsl(var(--color-muted) / 0.3);
}
.message.user .message-icon {
background: hsl(var(--color-primary) / 0.15);
color: hsl(var(--color-primary));
}
.message.tool .message-icon {
background: hsl(var(--color-warning, 45 93% 47%) / 0.15);
color: hsl(var(--color-warning, 45 93% 47%));
}
.message-content {
padding: 0.625rem 0.875rem;
border-radius: 1rem;
font-size: 0.875rem;
line-height: 1.5;
white-space: pre-wrap;
word-break: break-word;
}
.message.user .message-content {
background: hsl(var(--color-primary));
color: hsl(var(--color-primary-foreground));
border-bottom-right-radius: 0.25rem;
}
.message.assistant .message-content {
background: hsl(var(--color-muted) / 0.3);
color: hsl(var(--color-foreground));
border-bottom-left-radius: 0.25rem;
}
.message.tool .message-content {
background: hsl(var(--color-muted) / 0.15);
color: hsl(var(--color-muted-foreground));
border-radius: 0.5rem;
padding: 0.375rem 0.625rem;
}
.streaming {
animation: pulse 1.5s ease-in-out infinite;
}
@keyframes pulse {
0%,
100% {
opacity: 1;
}
50% {
opacity: 0.7;
}
}
.tool-badge {
display: inline-block;
font-size: 0.6875rem;
font-weight: 600;
background: hsl(var(--color-primary) / 0.1);
color: hsl(var(--color-primary));
padding: 0.125rem 0.5rem;
border-radius: 9999px;
margin-bottom: 0.25rem;
}
.tool-result.success {
color: hsl(var(--color-success, 142 71% 45%));
}
.tool-result.error {
color: hsl(var(--color-error, 0 84% 60%));
}
.input-area {
display: flex;
gap: 0.5rem;
padding: 0.75rem 1rem;
border-top: 1px solid hsl(var(--color-border));
background: hsl(var(--color-card));
}
.chat-input {
flex: 1;
padding: 0.625rem 0.875rem;
border-radius: 1rem;
border: 1.5px solid hsl(var(--color-border));
background: hsl(var(--color-background));
color: hsl(var(--color-foreground));
font-size: 0.875rem;
resize: none;
outline: none;
font-family: inherit;
transition: border-color 0.15s;
}
.chat-input:focus {
border-color: hsl(var(--color-primary));
}
.chat-input:disabled {
opacity: 0.6;
}
.send-btn {
flex-shrink: 0;
width: 40px;
height: 40px;
border-radius: 50%;
border: none;
background: hsl(var(--color-primary));
color: hsl(var(--color-primary-foreground));
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
transition: all 0.15s;
}
.send-btn:hover:not(:disabled) {
filter: brightness(1.1);
}
.send-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
}
</style>

modules/companion/engine.ts
@@ -0,0 +1,176 @@
/**
 * Companion Chat Engine — orchestrates LLM + Context Document + tool calling.
 *
 * Flow:
 * 1. Build system prompt from the Context Document (projections + streaks)
 * 2. Collect conversation history
 * 3. Send to the LLM with tool schemas
 * 4. If the LLM returns a tool call: execute the tool, feed the result back, repeat
 * 5. Return the final assistant message
 *
 * Currently uses @mana/local-llm directly (Gemma, browser-local).
 * Tool calling is simulated via JSON extraction, since Gemma doesn't
 * natively support function calling: the system prompt instructs the
 * model to output JSON when it wants to call a tool.
 */
import { generate, getLocalLlmStatus, loadLocalLlm } from '@mana/local-llm';
import { generateContextDocument } from '$lib/data/projections/context-document';
import { getToolsForLlm, executeTool } from '$lib/data/tools';
import type { DaySnapshot, StreakInfo } from '$lib/data/projections/types';
import type { LocalMessage } from './types';
import type { ToolResult } from '$lib/data/tools/types';
const MAX_TOOL_ROUNDS = 3;
interface EngineResult {
content: string;
toolCalls: { name: string; params: Record<string, unknown>; result: ToolResult }[];
}
function buildSystemPrompt(day: DaySnapshot, streaks: StreakInfo[]): string {
const context = generateContextDocument(day, streaks);
const toolSchemas = getToolsForLlm();
const toolList = toolSchemas.map((t) => `- ${t.name}: ${t.description}`).join('\n');
return `Du bist der Mana Companion — ein hilfreicher persoenlicher Assistent.
Du hast Zugriff auf die Daten und Aktionen des Nutzers ueber verschiedene Module.
${context}
## Verfuegbare Aktionen
${toolList}
## Tool-Aufruf Format
Wenn du eine Aktion ausfuehren willst, antworte mit einem JSON-Block:
\`\`\`tool
{"name": "tool_name", "params": {"key": "value"}}
\`\`\`
Du kannst pro Antwort EINEN Tool-Aufruf machen. Nach dem Ergebnis kannst du weiter antworten.
Wenn du keine Aktion ausfuehren willst, antworte einfach mit Text.
## Verhalten
- Antworte auf Deutsch
- Sei kurz und hilfreich
- Nutze die Kontext-Daten um relevante Vorschlaege zu machen
- Wenn der Nutzer etwas loggen will, nutze das passende Tool
- Ermutige den Nutzer bei Fortschritt und Streaks`;
}
function extractToolCall(
text: string
): { name: string; params: Record<string, unknown>; before: string; after: string } | null {
const toolBlockRegex = /```tool\s*\n?([\s\S]*?)\n?```/;
const match = text.match(toolBlockRegex);
if (!match) return null;
try {
const parsed = JSON.parse(match[1]) as { name: string; params: Record<string, unknown> };
if (!parsed.name) return null;
const before = text.slice(0, match.index ?? 0).trim();
const after = text.slice((match.index ?? 0) + match[0].length).trim();
return { name: parsed.name, params: parsed.params ?? {}, before, after };
} catch {
return null;
}
}
function messagesToLlm(
messages: LocalMessage[]
): { role: 'user' | 'assistant' | 'system'; content: string }[] {
// tool_result messages are fed back inline by the engine, not as history
return messages
.filter((m) => m.role !== 'tool_result')
.map((m) => ({
role: m.role as 'user' | 'assistant' | 'system',
content: m.content,
}));
}
/**
* Send a message to the Companion and get a response.
*
* @param userMessage - The user's input text
* @param history - Previous messages in this conversation
* @param day - Current DaySnapshot projection
* @param streaks - Current streak info
* @param onToken - Streaming callback for progressive UI updates
*/
export async function runCompanionChat(
userMessage: string,
history: LocalMessage[],
day: DaySnapshot,
streaks: StreakInfo[],
onToken?: (token: string) => void
): Promise<EngineResult> {
// Ensure local LLM is loaded
const status = getLocalLlmStatus();
if (status.current.state !== 'ready') {
await loadLocalLlm();
}
const systemPrompt = buildSystemPrompt(day, streaks);
const toolCalls: EngineResult['toolCalls'] = [];
// Build message chain
const llmMessages: { role: 'user' | 'assistant' | 'system'; content: string }[] = [
{ role: 'system', content: systemPrompt },
...messagesToLlm(history),
{ role: 'user', content: userMessage },
];
let finalContent = '';
for (let round = 0; round <= MAX_TOOL_ROUNDS; round++) {
const result = await generate({
messages: llmMessages,
temperature: 0.7,
maxTokens: 1024,
onToken: round === 0 ? onToken : undefined, // Only stream first round
});
const text = result.content;
const toolCall = extractToolCall(text);
if (!toolCall) {
finalContent = text;
break;
}
// Execute the tool
const toolResult = await executeTool(toolCall.name, toolCall.params);
toolCalls.push({ name: toolCall.name, params: toolCall.params, result: toolResult });
// Build response text from before/after the tool block
const parts = [toolCall.before, toolCall.after].filter(Boolean);
// Feed tool result back into conversation
llmMessages.push({
role: 'assistant',
content: text,
});
llmMessages.push({
role: 'user',
content: `Tool-Ergebnis fuer ${toolCall.name}: ${toolResult.message}${toolResult.data ? `\nDaten: ${JSON.stringify(toolResult.data)}` : ''}`,
});
// If this was the last round, use what we have
if (round === MAX_TOOL_ROUNDS) {
finalContent = parts.join('\n\n') || `Aktion ausgefuehrt: ${toolResult.message}`;
}
}
return { content: finalContent, toolCalls };
}
/**
* Check if the Companion Chat is available (LLM loaded or loadable).
*/
export function isCompanionAvailable(): boolean {
const status = getLocalLlmStatus();
return status.current.state === 'ready' || status.current.state === 'idle';
}

modules/companion/index.ts
@@ -0,0 +1,4 @@
export { chatStore } from './stores/chat.svelte';
export { runCompanionChat, isCompanionAvailable } from './engine';
export { useConversations, useMessages } from './queries';
export type { LocalConversation, LocalMessage } from './types';

modules/companion/queries.ts
@@ -0,0 +1,25 @@
/**
 * Companion Queries — reactive reads for conversations and messages.
 */
import { useLiveQueryWithDefault } from '@mana/local-store/svelte';
import { conversationTable, messageTable } from './collections';
import type { LocalConversation, LocalMessage } from './types';
export function useConversations() {
return useLiveQueryWithDefault<LocalConversation[]>(async () => {
const all = await conversationTable.toArray();
return all.filter((c) => !c.deletedAt).sort((a, b) => b.updatedAt.localeCompare(a.updatedAt));
}, []);
}
export function useMessages(conversationId: string) {
return useLiveQueryWithDefault<LocalMessage[]>(async () => {
if (!conversationId) return [];
const msgs = await messageTable
.where('conversationId')
.equals(conversationId)
.sortBy('createdAt');
return msgs;
}, []);
}

modules/companion/stores/chat.svelte.ts
@@ -0,0 +1,78 @@
/**
 * Companion Chat Store — manages conversations, messages, and LLM interaction.
 *
 * Uses the Context Document as the system prompt and the tool layer for
 * function calling. Currently wired to @mana/local-llm (Gemma, browser-local).
 * Can be upgraded to the LLM orchestrator for multi-tier support.
 */
import { conversationTable, messageTable } from '../collections';
import type { LocalConversation, LocalMessage } from '../types';
// ── Conversation CRUD ───────────────────────────────
export const chatStore = {
async createConversation(title?: string): Promise<LocalConversation> {
const now = new Date().toISOString();
const conv: LocalConversation = {
id: crypto.randomUUID(),
title: title ?? 'Neues Gespraech',
createdAt: now,
updatedAt: now,
};
await conversationTable.add(conv);
return conv;
},
async renameConversation(id: string, title: string): Promise<void> {
await conversationTable.update(id, {
title,
updatedAt: new Date().toISOString(),
});
},
async deleteConversation(id: string): Promise<void> {
await conversationTable.update(id, {
deletedAt: new Date().toISOString(),
updatedAt: new Date().toISOString(),
});
},
// ── Messages ──────────────────────────────────────
async addMessage(
conversationId: string,
role: LocalMessage['role'],
content: string,
extra?: {
toolCall?: LocalMessage['toolCall'];
toolResult?: LocalMessage['toolResult'];
}
): Promise<LocalMessage> {
const msg: LocalMessage = {
id: crypto.randomUUID(),
conversationId,
role,
content,
toolCall: extra?.toolCall,
toolResult: extra?.toolResult,
createdAt: new Date().toISOString(),
};
await messageTable.add(msg);
// Touch conversation updatedAt
await conversationTable.update(conversationId, {
updatedAt: msg.createdAt,
});
return msg;
},
async updateMessageContent(id: string, content: string): Promise<void> {
await messageTable.update(id, { content });
},
async getMessages(conversationId: string): Promise<LocalMessage[]> {
return messageTable.where('conversationId').equals(conversationId).sortBy('createdAt');
},
};

modules/companion/types.ts
@@ -0,0 +1,30 @@
/**
* Companion Chat types.
*/
export interface LocalConversation {
id: string;
title: string;
createdAt: string;
updatedAt: string;
deletedAt?: string;
}
export interface LocalMessage {
id: string;
conversationId: string;
role: 'user' | 'assistant' | 'system' | 'tool_result';
content: string;
/** Tool call info (for assistant messages that invoke a tool) */
toolCall?: {
name: string;
params: Record<string, unknown>;
};
/** Tool result (for tool_result messages) */
toolResult?: {
success: boolean;
message: string;
data?: unknown;
};
createdAt: string;
}

/companion route (+page.svelte)
@@ -0,0 +1,284 @@
<script lang="ts">
import { Robot, Plus, Trash } from '@mana/shared-icons';
import CompanionChat from '$lib/modules/companion/components/CompanionChat.svelte';
import { chatStore } from '$lib/modules/companion/stores/chat.svelte';
import { useConversations } from '$lib/modules/companion/queries';
import type { LocalConversation } from '$lib/modules/companion/types';
const conversations = useConversations();
let activeConversation = $state<LocalConversation | null>(null);
// Resume the most recent conversation once the list has loaded
$effect(() => {
if (!activeConversation && conversations.value.length > 0) {
activeConversation = conversations.value[0];
}
});
async function handleNewConversation() {
const conv = await chatStore.createConversation();
activeConversation = conv;
}
async function handleDeleteConversation(id: string) {
await chatStore.deleteConversation(id);
if (activeConversation?.id === id) {
activeConversation = conversations.value.find((c) => c.id !== id) ?? null;
}
}
</script>
<svelte:head>
<title>Companion - Mana</title>
</svelte:head>
<div class="companion-page">
<!-- Sidebar -->
<div class="sidebar">
<div class="sidebar-header">
<div class="sidebar-title">
<Robot size={20} weight="bold" />
<span>Companion</span>
</div>
<button class="new-btn" onclick={handleNewConversation} title="Neues Gespraech">
<Plus size={16} weight="bold" />
</button>
</div>
<div class="conversation-list">
{#each conversations.value as conv (conv.id)}
<button
class="conversation-item"
class:active={activeConversation?.id === conv.id}
onclick={() => (activeConversation = conv)}
>
<span class="conv-title">{conv.title}</span>
<!-- svelte-ignore a11y_no_static_element_interactions -->
<span
class="conv-delete"
role="button"
tabindex="-1"
onclick={(e) => {
e.stopPropagation();
handleDeleteConversation(conv.id);
}}
onkeydown={(e) => {
if (e.key === 'Enter') {
e.stopPropagation();
handleDeleteConversation(conv.id);
}
}}
title="Loeschen"
>
<Trash size={12} />
</span>
</button>
{/each}
{#if conversations.value.length === 0}
<p class="empty-hint">Noch keine Gespraeche. Starte mit dem + Button.</p>
{/if}
</div>
</div>
<!-- Chat Area -->
<div class="chat-area">
{#if activeConversation}
{#key activeConversation.id}
<CompanionChat conversation={activeConversation} />
{/key}
{:else}
<div class="empty-state">
<Robot size={48} weight="thin" />
<h2>Mana Companion</h2>
<p>
Dein persoenlicher Assistent. Frag nach deinem Tag, lass Tasks erstellen oder Getraenke
loggen.
</p>
<button class="start-btn" onclick={handleNewConversation}> Gespraech starten </button>
</div>
{/if}
</div>
</div>
<style>
.companion-page {
display: flex;
height: calc(100dvh - var(--bottom-chrome-height, 80px) - 4rem);
gap: 1px;
background: hsl(var(--color-border));
border-radius: 1rem;
overflow: hidden;
}
.sidebar {
width: 240px;
flex-shrink: 0;
background: hsl(var(--color-card));
display: flex;
flex-direction: column;
}
@media (max-width: 639px) {
.sidebar {
display: none;
}
}
.sidebar-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.75rem;
border-bottom: 1px solid hsl(var(--color-border));
}
.sidebar-title {
display: flex;
align-items: center;
gap: 0.5rem;
font-weight: 600;
font-size: 0.9375rem;
color: hsl(var(--color-foreground));
}
.new-btn {
width: 28px;
height: 28px;
border-radius: 50%;
border: none;
background: hsl(var(--color-primary) / 0.1);
color: hsl(var(--color-primary));
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
transition: all 0.15s;
}
.new-btn:hover {
background: hsl(var(--color-primary) / 0.2);
}
.conversation-list {
flex: 1;
overflow-y: auto;
padding: 0.5rem;
}
.conversation-item {
width: 100%;
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.5rem 0.75rem;
border-radius: 0.5rem;
border: none;
background: transparent;
color: hsl(var(--color-foreground));
cursor: pointer;
font-size: 0.8125rem;
text-align: left;
transition: all 0.15s;
}
.conversation-item:hover {
background: hsl(var(--color-surface-hover));
}
.conversation-item.active {
background: hsl(var(--color-primary) / 0.1);
color: hsl(var(--color-primary));
}
.conv-title {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.conv-delete {
flex-shrink: 0;
opacity: 0;
border: none;
background: none;
color: hsl(var(--color-muted-foreground));
cursor: pointer;
padding: 0.125rem;
border-radius: 0.25rem;
display: flex;
transition: all 0.15s;
}
.conversation-item:hover .conv-delete {
opacity: 1;
}
.conv-delete:hover {
color: hsl(var(--color-error));
}
.empty-hint {
font-size: 0.75rem;
color: hsl(var(--color-muted-foreground));
text-align: center;
padding: 1rem;
}
.chat-area {
flex: 1;
background: hsl(var(--color-background));
display: flex;
flex-direction: column;
}
.empty-state {
flex: 1;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
gap: 0.75rem;
color: hsl(var(--color-muted-foreground));
padding: 2rem;
text-align: center;
}
.empty-state h2 {
font-size: 1.25rem;
font-weight: 600;
color: hsl(var(--color-foreground));
margin: 0;
}
.empty-state p {
max-width: 320px;
font-size: 0.875rem;
line-height: 1.5;
}
.start-btn {
padding: 0.625rem 1.25rem;
border-radius: 9999px;
border: none;
background: hsl(var(--color-primary));
color: hsl(var(--color-primary-foreground));
font-size: 0.875rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.start-btn:hover {
filter: brightness(1.1);
}
</style>

architecture plan document
@@ -1073,99 +1073,105 @@ Generiertes Ritual:
---
## 12. Files & Folder Structure

✅ = implemented, ⬜ = pending
```
apps/mana/apps/web/src/lib/
  data/
    events/                      ✅ Phase 1
      event-bus.ts               ✅ EventBus singleton (sync dispatch, microtask handlers)
      event-store.ts             ✅ persistence in _events table (90d TTL, 50k max)
      emit.ts                    ✅ emitDomainEvent() helper
      types.ts                   ✅ DomainEvent, EventMeta, EventBus interfaces
      catalog.ts                 ✅ 22 event types (ManaEvent union type)
      index.ts                   ✅ barrel export
    projections/                 ✅ Phase 2
      day-snapshot.ts            ✅ useDaySnapshot() — live daily aggregation
      streaks.ts                 ✅ useStreaks() — 3 streak types, 90d lookback
      context-document.ts        ✅ generateContextDocument() — ~500-token LLM prompt
      correlations.ts            ⬜ Phase 7 — statistical correlations
      types.ts                   ✅ DaySnapshot, StreakInfo, TaskSummary, EventSummary
      index.ts                   ✅ barrel export
    tools/                       ✅ Phase 4
      types.ts                   ✅ ModuleTool, ToolParameter, ToolResult, LlmFunctionSchema
      registry.ts                ✅ registerTools(), getToolsForLlm()
      executor.ts                ✅ executeTool() with validation + type coercion
      init.ts                    ✅ initTools() — registers all 5 modules
      index.ts                   ✅ barrel export
    companion/
      goals/                     ✅ Phase 3
        types.ts                 ✅ LocalGoal, GoalMetric, GoalTarget, 6 templates
        store.ts                 ✅ CRUD + event-bus subscription for progress
        queries.ts               ✅ useActiveGoals(), useAllGoals()
        index.ts                 ✅ barrel export
      rules/                     ✅ Phase 3
        types.ts                 ✅ PulseRule, Nudge, NudgeType, RuleContext
        rules.ts                 ✅ 5 rules (water, streak, morning, overdue, meal)
        engine.ts                ✅ evaluateRules(), createPulseReminderSource()
        index.ts                 ✅ barrel export
      feedback/                  ✅ Phase 3
        types.ts                 ✅ NudgeOutcome
        tracker.ts               ✅ recordOutcome(), getOutcomeStats(), getActionRate()
        index.ts                 ✅ barrel export
  modules/
    companion/                   ⬜ Phase 5
      module.config.ts           ⬜
      stores/chat.svelte.ts      ⬜
      stores/rituals.svelte.ts   ⬜ Phase 6
      queries.ts                 ⬜
      components/
        CompanionChat.svelte     ⬜
        CompanionFeed.svelte     ⬜
        RitualRunner.svelte      ⬜ Phase 6
        GoalCard.svelte          ⬜
    todo/
      tools.ts                   ✅ 3 tools (create, complete, stats)
      stores/tasks.svelte.ts     ✅ 5 events (Created, Completed, Uncompleted, Deleted, Subtasks)
    calendar/
      tools.ts                   ✅ 2 tools (create_event, get_todays_events)
      stores/events.svelte.ts    ✅ 3 events (Created, Updated, Deleted)
    drink/
      tools.ts                   ✅ 3 tools (log, progress, undo)
      stores/drink.svelte.ts     ✅ 3 events (Logged, Deleted, Undone)
    nutriphi/
      tools.ts                   ✅ 2 tools (log_meal, nutrition_summary)
      mutations.ts               ✅ 3 events (Logged, PhotoLogged, Deleted)
    places/
      tools.ts                   ✅ 4 tools (create, visit, get_places, location)
      stores/places.svelte.ts    ✅ 3 events (Created, Deleted, Visited)
      stores/tracking.svelte.ts  ✅ 3 events (Started, Stopped, LocationLogged)
```
---
## 13. Dexie Tables

### Implemented (v10 schema)
```typescript
// Event Store — append-only domain event log (replaces _activity long-term)
_events: '++seq, type, meta.appId, meta.timestamp, meta.recordId, [meta.appId+meta.timestamp], [type+meta.timestamp]',
// Goals — companion brain goal tracking
companionGoals: 'id, moduleId, status, [moduleId+status]',
goalHistory: '++id, goalId, periodStart',
// Semantic Memory — extracted user patterns (prepared, not yet populated)
_memory: 'id, category, confidence, lastConfirmed, [category+confidence]',
// Feedback Loop — nudge outcome tracking
_nudgeOutcomes: '++id, nudgeId, nudgeType, outcome, timestamp, [nudgeType+outcome]',
```
### Still pending (Phase 5+)

```typescript
// Companion Chat (Phase 5)
companionConversations: 'id, createdAt',
companionMessages: 'id, conversationId, role, createdAt, [conversationId+createdAt]',
// Rituals (Phase 6)
rituals: 'id, status, createdAt',
ritualSteps: 'id, ritualId, order, [ritualId+order]',
ritualLogs: '++id, ritualId, date, [ritualId+date]',
```
@@ -1175,74 +1181,116 @@ ritualLogs: '++id, ritualId, date, [ritualId+date]',
## 14. Implementation Order
### Phase 1: Event Foundation — DONE (2026-04-13)

Commit: `e927c1f10`

1. ✅ `data/events/` — EventBus, EventStore, emit helper, type catalog
2. ✅ Domain events defined for the 5 pilot modules (catalog.ts, 22 event types)
3. ✅ Stores of the 5 modules reworked: `emit()` calls added
4. ✅ Event store subscriber: `eventBus.onAny()` → `_events` table (v10 schema)
5. ⬜ Tests: still pending

**Result:** The semantic event stream is flowing. 20 domain events from 5 modules.

**Implementation notes:**
- Events are emitted in the store (not in a Dexie hook) — the store knows the semantics
- The `emitDomainEvent()` helper reduces boilerplate to one line per event
- A re-entrancy guard in the EventBus prevents infinite loops
- The `_activity` table stays around in parallel (sync debugging)
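The re-entrancy guard mentioned in these notes can be sketched with a tiny bus; `emit`/`onAny` follow the names in the plan, while the defer-via-microtask strategy is an assumption for illustration, not the actual implementation:

```typescript
type DomainEvent = { type: string; payload: unknown };
type Handler = (e: DomainEvent) => void;

class MiniEventBus {
  private handlers: Handler[] = [];
  private dispatching = false; // re-entrancy guard

  // Subscribe to all events; returns an unsubscribe function.
  onAny(h: Handler): () => void {
    this.handlers.push(h);
    return () => {
      this.handlers = this.handlers.filter((x) => x !== h);
    };
  }

  emit(e: DomainEvent): void {
    // A handler that emits again would recurse forever without this guard.
    if (this.dispatching) {
      queueMicrotask(() => this.emit(e)); // defer instead of recursing
      return;
    }
    this.dispatching = true;
    try {
      for (const h of this.handlers) h(e);
    } finally {
      this.dispatching = false;
    }
  }
}

// Usage: subscribe, emit, unsubscribe
const bus = new MiniEventBus();
const seen: string[] = [];
const off = bus.onAny((e) => seen.push(e.type));
bus.emit({ type: 'TaskCompleted', payload: { id: 't1' } });
off();
bus.emit({ type: 'TaskDeleted', payload: { id: 't1' } }); // no longer recorded
```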
### Phase 2: Projections — DONE (2026-04-13)

Commit: `40e1145e9`

1. ✅ DaySnapshot projection (live Dexie queries across all 5 modules)
2. ✅ Streaks projection (3 streak definitions: water, tasks, meals; 90-day lookback)
3. ✅ Context Document generator (template-based, ~300-500 tokens)
4. ⬜ Dashboard widget: "My Day" card — later, in the UI phase

**Result:** A central, live-reactive overview across all 5 modules.

**Implementation notes:**
- Projections use `useLiveQueryWithDefault` from `@mana/local-store/svelte`
- DaySnapshot queries 5 Dexie tables and decrypts them in a single buildSnapshot() call
- Streaks use checkDate() to decide whether a day "counts" (e.g. water goal reached)
- The Context Document is a plain string template; no LLM needed
- `startEventStore()` is wired in `(app)/+layout.svelte` once auth is ready
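The streak logic described above (a day "counts" only if its goal was met) can be sketched as a pure function. `countsFor` stands in for the real per-streak checkDate() predicate; the walk-backwards strategy is an assumption for illustration:

```typescript
// Count consecutive "qualifying" days ending today, walking backwards.
// countsFor is the hypothetical per-streak predicate (e.g. water goal reached).
function currentStreak(
  countsFor: (isoDay: string) => boolean,
  today: Date,
  lookbackDays = 90
): number {
  let streak = 0;
  const d = new Date(today);
  for (let i = 0; i < lookbackDays; i++) {
    const iso = d.toISOString().slice(0, 10); // UTC calendar day
    if (!countsFor(iso)) break;               // streak broken
    streak++;
    d.setUTCDate(d.getUTCDate() - 1);         // step back one day
  }
  return streak;
}

// Usage: three qualifying days ending 2026-04-13
const qualifying = new Set(['2026-04-11', '2026-04-12', '2026-04-13']);
const streak = currentStreak(
  (day) => qualifying.has(day),
  new Date('2026-04-13T12:00:00Z')
);
// streak === 3
```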
### Phase 3: Goals + Pulse — DONE (2026-04-13)

Commit: `9066b6c9a`

1. ✅ Goal data model + store + queries (`companion/goals/`)
2. ✅ Goal tracking via event-bus subscription (auto-increments currentValue)
3. ✅ 6 goal templates (water, tasks, meals, calories, places, coffee limit)
4. ✅ Rule engine with 5 rules (`companion/rules/`)
5. ✅ ReminderSource adapter for the existing scheduler
6. ⬜ Nudge UI: toast / bottom sheet — in Phase 5 (Companion Chat)

**Result:** Goals track automatically; rules produce nudges.

**Implementation notes:**
- Goals live in the `companionGoals` table (v10 schema), not in the core module
- The goal tracker subscribes to `eventBus.onAny()` and matches by eventType + filter
- Period resets (day/week/month) happen automatically on the next event
- A `GoalReached` event is emitted the first time a goal is reached within a period
- Rules use localStorage for dismissal tracking and last-run timestamps
- The `_memory` and `_nudgeOutcomes` tables are prepared (v10 schema)
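The matching and lazy period reset described in these notes can be sketched as a pure update function; the `Goal` shape and field names here are assumptions for illustration, not the real data model:

```typescript
interface Goal {
  eventType: string;          // which domain event advances this goal
  period: 'day' | 'week' | 'month';
  periodStart: string;        // ISO date the current period began
  currentValue: number;
  targetValue: number;
}

function periodStartFor(period: Goal['period'], now: Date): string {
  const d = new Date(now);
  if (period === 'week') d.setUTCDate(d.getUTCDate() - d.getUTCDay());
  if (period === 'month') d.setUTCDate(1);
  return d.toISOString().slice(0, 10);
}

// Returns the updated goal plus whether the target was just reached.
function applyEvent(
  goal: Goal,
  eventType: string,
  now: Date
): { goal: Goal; reached: boolean } {
  if (eventType !== goal.eventType) return { goal, reached: false };
  const start = periodStartFor(goal.period, now);
  // Lazy period reset: a new period zeroes the counter on the next event.
  const base = start === goal.periodStart ? goal.currentValue : 0;
  const next = { ...goal, periodStart: start, currentValue: base + 1 };
  const reached = base + 1 === goal.targetValue; // fires only once per period
  return { goal: next, reached };
}

// Usage: a new day resets the counter, then the event increments it to 1
const g: Goal = {
  eventType: 'DrinkLogged',
  period: 'day',
  periodStart: '2026-04-12',
  currentValue: 5,
  targetValue: 2,
};
const r = applyEvent(g, 'DrinkLogged', new Date('2026-04-13T08:00:00Z'));
```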
### Phase 4: Tool Layer — DONE (2026-04-13)

Commit: `66dd684bb`

1. ✅ ModuleTool interface + registry (dynamic registration)
2. ✅ tools.ts for the 5 pilot modules (13 tools total)
3. ✅ Tool executor with parameter validation + type coercion
4. ✅ LLM function schema generator (`getToolsForLlm()`)
5. ⬜ Integration into the LLM orchestrator (`runWithTools`) — in Phase 5

**Result:** 13 tools ready for LLM function calling.

**Implementation notes:**
- The registry uses a `registerTools()` pattern instead of static imports (tree-shaking friendly)
- `initTools()` is wired in `(app)/+layout.svelte` next to `startEventStore()`
- The executor coerces string→number and string→boolean automatically
- Tools per module: Todo (3), Calendar (2), Drink (3), Nutriphi (2), Places (4)
- Every tool result carries a `message` field for a human-readable confirmation
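The string→number / string→boolean coercion noted for the executor can be sketched like this; the exact rules of the real executor are assumptions here:

```typescript
type ParamType = 'string' | 'number' | 'boolean';

// Coerce a single tool parameter toward its declared type.
// Anything that cannot be coerced is returned unchanged and left to validation.
function coerceParam(value: unknown, expected: ParamType): unknown {
  if (expected === 'number' && typeof value === 'string' && value.trim() !== '') {
    const n = Number(value);
    if (!Number.isNaN(n)) return n;
  }
  if (expected === 'boolean' && typeof value === 'string') {
    if (value === 'true') return true;
    if (value === 'false') return false;
  }
  return value;
}

// LLMs often emit numbers and booleans as JSON strings:
const amount = coerceParam('250', 'number');  // 250
const undo = coerceParam('true', 'boolean');  // true
const name = coerceParam('Wasser', 'string'); // 'Wasser'
```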
### Phase 5: Companion Chat (NEXT)

1. ⬜ Companion module (collections, stores, queries)
2. ⬜ CompanionChat Svelte component
3. ⬜ Chat flow: Context Document + tools + LLM
4. ⬜ CompanionFeed: timeline of nudges + chat
5. ⬜ Integration: `runWithTools` in the LLM orchestrator

**Goal:** The user can talk to the system and trigger actions.

**Open questions:**
- Should the Companion live as its own module with a `/companion` route, or as an overlay?
- Should the chat be persistent (IndexedDB) or session-based?
- How does the CompanionFeed integrate with the existing NotificationBar?
### Phase 6: Rituals

1. ⬜ Ritual data model (steps, logs)
2. ⬜ RitualRunner component
3. ⬜ AI ritual generation via Companion Chat
4. ⬜ Predefined ritual templates (morning, evening, water)

**Goal:** Guided routines that write into modules.
### Phase 7: Memory + Correlations

1. ⬜ Semantic memory store (uses the prepared `_memory` table)
2. ⬜ Rule-based pattern extraction
3. ⬜ Correlation engine over TimeBlocks
4. ⬜ Integrate memory + correlations into the Context Document
5. ⬜ Feedback loop: outcome patterns → memory facts
6. ⬜ LLM-based memory extraction (optional, tier 1)

**Goal:** The system learns over time; insights become more precise.

### Phase 8: Rollout to Further Modules (Week 8+)