commit 56ffcbac39
Author: Till JS
Date:   2026-03-24 09:41:33 +01:00

feat: add Ollama memory optimization, LLM metrics, and chat streaming
Three improvements to the unified LLM infrastructure:
1. Ollama memory optimization (scripts/mac-mini/configure-ollama.sh):
- OLLAMA_KEEP_ALIVE=5m → models unload after 5 minutes of idle time (saves 3-16 GB of RAM)
- OLLAMA_NUM_PARALLEL=1 → predictable memory usage
- OLLAMA_MAX_LOADED_MODELS=1 → at most one model resident in RAM at a time (see the verification sketch after this list)
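The script itself only exports the three environment variables above. As a quick way to confirm they are in effect, a helper like the following (hypothetical, not part of the commit) can poll Ollama's /api/ps endpoint, which lists the models currently held in memory:

```ts
// Hypothetical verification helper: polls Ollama's /api/ps endpoint to check
// that at most one model is resident and that idle models are scheduled to
// evict within the 5-minute keep-alive window. OLLAMA_URL is an assumption.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434";

interface LoadedModel {
  name: string;
  size_vram?: number;  // bytes held in VRAM/unified memory, per /api/ps
  expires_at?: string; // when keep-alive will evict the model
}

async function checkLoadedModels(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/ps`);
  const body = (await res.json()) as { models?: LoadedModel[] };
  const models = body.models ?? [];

  console.log(`${models.length} model(s) loaded`);
  for (const m of models) {
    console.log(`- ${m.name} (evicts at ${m.expires_at ?? "unknown"})`);
  }
  if (models.length > 1) {
    console.warn("OLLAMA_MAX_LOADED_MODELS=1 does not appear to be in effect");
  }
}

checkLoadedModels().catch(console.error);
```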
2. Request-level metrics in @manacore/shared-llm (sketched below):
- LlmRequestMetrics interface (model, latency, tokens, fallback detection)
- LlmMetricsCollector class with summary stats (for health endpoints)
- Optional onMetrics callback in LlmModuleOptions
- Automatic metrics emission in chatMessages() on both success and error
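The commit names the interface and collector but not their exact shapes. A minimal sketch consistent with the description might look like this; field and method names beyond model/latency/tokens/fallback (e.g. latencyMs, record(), summary()) are assumptions:

```ts
// Sketch only: the real @manacore/shared-llm definitions may differ.
export interface LlmRequestMetrics {
  model: string;
  latencyMs: number;
  promptTokens?: number;
  completionTokens?: number;
  usedFallback: boolean; // true when the request fell back from the primary model
  error?: string;        // set when the request failed
}

export class LlmMetricsCollector {
  private requests: LlmRequestMetrics[] = [];

  record(m: LlmRequestMetrics): void {
    this.requests.push(m);
  }

  // Aggregate view, suitable for exposing on a health endpoint.
  summary() {
    const total = this.requests.length;
    const errors = this.requests.filter((r) => r.error).length;
    const fallbacks = this.requests.filter((r) => r.usedFallback).length;
    const avgLatencyMs =
      total === 0
        ? 0
        : this.requests.reduce((sum, r) => sum + r.latencyMs, 0) / total;
    return { total, errors, fallbacks, avgLatencyMs };
  }
}
```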
3. Chat streaming (token-by-token SSE; server and client sketches below):
- Backend: POST /chat/completions/stream SSE endpoint
- OllamaService.createStreamingCompletion() built on llm.chatStreamMessages()
- ChatService.createStreamingCompletion() with upfront credit consumption
- Web: chatApi.createStreamingCompletion() SSE consumer
- Chat store: sendMessage() now streams tokens into the assistant message
- The UI updates reactively as each token arrives
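On the server side, the SSE endpoint plausibly looks like the Express-style sketch below. The handler shape, the llm.chatStreamMessages() signature (assumed here to yield token strings), and the commented-out credit step are assumptions, not the actual ChatService/OllamaService code:

```ts
import express from "express";

// Assumed shape of the shared LLM module's streaming API.
declare const llm: {
  chatStreamMessages(messages: unknown[]): AsyncIterable<string>;
};

const app = express();
app.use(express.json());

app.post("/chat/completions/stream", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  // Per the commit, credits are consumed up front, before any tokens stream.
  // await consumeCredits(req); // hypothetical stand-in for the ChatService logic

  try {
    for await (const token of llm.chatStreamMessages(req.body.messages)) {
      res.write(`data: ${JSON.stringify({ token })}\n\n`);
    }
    res.write("data: [DONE]\n\n");
  } catch (err) {
    res.write(`data: ${JSON.stringify({ error: String(err) })}\n\n`);
  } finally {
    res.end();
  }
});
```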
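On the web side, EventSource cannot issue a POST, so the consumer presumably reads the fetch response body and splits it into SSE events. This sketch shows the general shape, with a hypothetical onToken callback standing in for the chat store appending tokens to the assistant message:

```ts
async function streamCompletion(
  messages: { role: string; content: string }[],
  onToken: (token: string) => void,
): Promise<void> {
  const res = await fetch("/chat/completions/stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  if (!res.body) throw new Error("no response body");

  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    // SSE events are separated by a blank line.
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? "";
    for (const event of events) {
      const data = event.replace(/^data: /, "");
      if (data === "[DONE]") return;
      const parsed = JSON.parse(data) as { token?: string };
      if (parsed.token) onToken(parsed.token); // e.g. append to the assistant message
    }
  }
}
```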
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>