Commit graph

8 commits

Till JS
5520f1385e fix(mana-llm): add response_format to ChatCompletionRequest model
The first iteration of the Ollama response_format passthrough crashed
with 'ChatCompletionRequest object has no attribute response_format'
because the Pydantic request model didn't declare the field at all —
incoming response_format from OpenAI-compatible clients was being
silently dropped at the parsing layer before the provider could see it.

Fix: declare a typed ResponseFormat sub-model with the two OpenAI shapes
('json_object' and 'json_schema'), add it as an optional field on
ChatCompletionRequest, and let the Ollama provider read it directly
without defensive getattr fallbacks.
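
A minimal sketch of that shape, assuming Pydantic v2; only ResponseFormat, ChatCompletionRequest, and the two "type" values come from this commit, the remaining field names are illustrative:

```python
from typing import Any, Literal, Optional

from pydantic import BaseModel


class ResponseFormat(BaseModel):
    # The two OpenAI shapes: {"type": "json_object"} and
    # {"type": "json_schema", "json_schema": {"name": ..., "schema": {...}}}
    type: Literal["json_object", "json_schema"] = "json_object"
    json_schema: Optional[dict[str, Any]] = None


class ChatCompletionRequest(BaseModel):
    model: str
    messages: list[dict[str, Any]]
    stream: bool = False
    # Previously undeclared, so response_format from OpenAI-compatible
    # clients was dropped during parsing before the provider ever saw it.
    response_format: Optional[ResponseFormat] = None
```

With the field declared, the provider can read request.response_format directly instead of reaching for getattr fallbacks.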

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:50:54 +02:00
Till JS
3ef095aaff fix(mana-llm/ollama): pass response_format to Ollama + strip markdown fences
The Ollama provider was completely ignoring `response_format` from the
incoming OpenAI-compatible request. Two consequences:

  1. Clients that asked for `{"type":"json_object"}` or
     `{"type":"json_schema",...}` got back JSON wrapped in
     ```json ... ``` markdown fences, because Ollama defaults to
     conversational output.
  2. Strict downstream parsers (Vercel AI SDK `generateObject`,
     manual `JSON.parse`) failed to decode the response and threw,
     even though the underlying JSON was valid inside the fences.

Fix: when response_format is set, translate it to Ollama's native
`format` field:

  - `{"type":"json_object"}` → `format: "json"`
  - `{"type":"json_schema","json_schema":{"schema":{...}}}`
    → `format: <the schema dict>` (Ollama 0.5+ supports full JSON
    schemas in the format field)
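
A hedged sketch of that mapping; the helper name and dict access are illustrative, only the translation rules come from this commit:

```python
from typing import Any, Optional


def to_ollama_format(response_format: Optional[dict[str, Any]]) -> Optional[Any]:
    """Translate an OpenAI-style response_format into Ollama's `format` field."""
    if not response_format:
        return None
    if response_format.get("type") == "json_object":
        return "json"
    if response_format.get("type") == "json_schema":
        # Ollama 0.5+ accepts the full JSON schema dict in `format`
        return (response_format.get("json_schema") or {}).get("schema")
    return None
```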

Defensive belt-and-suspenders: a small `_strip_json_fences` helper
runs after the Ollama response is decoded and removes any leftover
```json ... ``` wrapping. Some older vision models still wrap
output in fences even when `format` is set; this catches them.
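
The helper might look roughly like this (a sketch, not the shipped implementation):

```python
def _strip_json_fences(text: str) -> str:
    """Remove a wrapping ```json ... ``` fence from a model response, if present."""
    stripped = text.strip()
    if not stripped.startswith("```"):
        return text
    lines = stripped.splitlines()
    # Drop the opening fence line (``` or ```json) ...
    lines = lines[1:]
    # ... and the closing fence line, if the model emitted one.
    if lines and lines[-1].strip() == "```":
        lines = lines[:-1]
    return "\n".join(lines).strip()
```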

Streaming path is unchanged because the nutriphi/planta refactor uses
non-streaming `generateObject`. Streaming structured output with
Ollama deserves its own pass when someone actually needs it.

Discovered during the AI SDK + Zod refactor smoke test — neither the
old nor the new vision routes ever returned validated JSON locally
because of this bug. Production uses Google Gemini directly via
fallback so the issue was masked there.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:12:01 +02:00
Till JS
b8e18b7f82 chore(ai-services): adopt Windows GPU as source of truth for llm/stt/tts
The Windows GPU server has been the actual production home for these
services for some time, and the running code there has drifted ahead of
the repo. This sync pulls the live versions back into the repo so the
Windows box is no longer the only place those changes exist.

Pulled from C:\mana\services\* on mana-server-gpu (192.168.178.11):

mana-llm:
- src/main.py, src/config.py — small fixes (auth wiring, config tweaks)
- src/api_auth.py — NEW (cross-service GPU_API_KEY validator)
- service.pyw — Windows runner used by the ManaLLM scheduled task
  (sets up logging redirect, loads .env, calls uvicorn)

mana-stt:
- app/main.py — substantial cleanup (684→392 lines), drops the
  whisperx-as-separate-backend branching now that whisper_service.py
  rolls whisperx in directly
- app/whisper_service.py — full CUDA + whisperx rewrite (158→358 lines)
- app/auth.py + external_auth.py — significantly expanded auth
- app/vram_manager.py — NEW (shared VRAM accounting helper)
- service.pyw — Windows runner with CUDA pre-init, FFmpeg PATH
  injection, .env loading
- removed: app/whisper_service_cuda.py (folded into whisper_service.py)
- removed: app/whisperx_service.py (folded into whisper_service.py)

mana-tts:
- app/auth.py, external_auth.py — same auth expansion as stt
- app/f5_service.py, kokoro_service.py — Windows tweaks
- app/vram_manager.py — NEW (same shared helper as stt)
- service.pyw — Windows runner

mana-video-gen:
- service.pyw — Windows runner (no other changes; the .py code on the
  GPU box is byte-identical to what's already in the repo)

The service.pyw files contain absolute Windows paths
(C:\mana\services\<svc>) and a hardcoded FFmpeg PATH for the tills user
profile. Kept as-is intentionally — they exist to be deployed to that
one machine and any abstraction layer would just hide what's actually
happening. Anyone redeploying to a different layout will need to edit
the path strings, which is a known and obvious change.
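
For orientation, the runner pattern those files follow, stitched together from the descriptions above; every path, module string, and the port below is a placeholder, not the deployed value:

```python
# service.pyw sketch: log redirect, .env loading, FFmpeg PATH injection, uvicorn.
import os
import sys
from pathlib import Path

SERVICE_DIR = Path(r"C:\mana\services\mana-llm")  # each real runner hardcodes its own dir

if __name__ == "__main__":
    os.chdir(SERVICE_DIR)

    # pythonw.exe has no console, so redirect stdout/stderr to a log file
    log = open(SERVICE_DIR / "service.log", "a", buffering=1)
    sys.stdout = sys.stderr = log

    # Load the service's .env before importing the app
    from dotenv import load_dotenv
    load_dotenv(SERVICE_DIR / ".env")

    # FFmpeg PATH injection (the stt runner needs this; path is a placeholder)
    os.environ["PATH"] = r"C:\path\to\ffmpeg\bin" + os.pathsep + os.environ.get("PATH", "")

    import uvicorn
    uvicorn.run("src.main:app", host="0.0.0.0", port=8000)
```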

Mac-Mini infrastructure for these services (launchd plists, install
scripts, scripts/mac-mini/setup-{stt,tts}.sh, the Mac-flux2c image-gen
implementation) is still on disk and will be removed in a follow-up
commit, which will also replace mana-image-gen with the Windows
diffusers+CUDA implementation. This commit is just the live-code sync.
2026-04-08 12:46:03 +02:00
Till JS
45063b88be feat(mana-llm): add Google Gemini fallback provider with auto-routing
Add Google Gemini as a fallback provider that activates automatically
when Ollama is overloaded or unavailable, so LLM requests can still be
served when the local GPU is saturated or Ollama is down.

New provider (src/providers/google.py):
- Full LLMProvider implementation using google-genai SDK
- Chat completions (streaming + non-streaming)
- Vision/multimodal support (base64 images)
- Embeddings via text-embedding-004
- Model mapping: Ollama models → Gemini equivalents
  (gemma3:4b → gemini-2.0-flash, llava:7b → gemini-2.0-flash, etc.)

Auto-fallback routing (src/providers/router.py):
- Concurrent request tracking for Ollama (OLLAMA_MAX_CONCURRENT=3)
- When Ollama concurrent > max: route to Google automatically
- When Ollama fails: retry on Google with model mapping
- Health check caching (5s TTL) to avoid hammering Ollama
- Non-Ollama providers (openrouter, groq, together) are never fallback-routed
- Fallback info included in /health endpoint response

New config (src/config.py):
- GOOGLE_API_KEY: enables Google provider
- GOOGLE_DEFAULT_MODEL: default gemini-2.0-flash
- AUTO_FALLBACK_ENABLED: toggle fallback (default: true)
- OLLAMA_MAX_CONCURRENT: concurrent request threshold (default: 3)
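
A minimal sketch of the routing decision described above, tied to those config names; the function and counter names are illustrative, the thresholds and model mapping come from this commit:

```python
AUTO_FALLBACK_ENABLED = True
OLLAMA_MAX_CONCURRENT = 3

# Ollama model -> Gemini equivalent
MODEL_MAP = {"gemma3:4b": "gemini-2.0-flash", "llava:7b": "gemini-2.0-flash"}

ollama_in_flight = 0  # incremented/decremented around each Ollama request


def choose_provider(provider: str, model: str, ollama_healthy: bool) -> tuple[str, str]:
    """Return (provider, model); only Ollama traffic is ever fallback-routed."""
    if provider != "ollama" or not AUTO_FALLBACK_ENABLED:
        return provider, model  # openrouter/groq/together pass through untouched
    if not ollama_healthy or ollama_in_flight > OLLAMA_MAX_CONCURRENT:
        return "google", MODEL_MAP.get(model, "gemini-2.0-flash")
    return "ollama", model
```

The gemini-2.0-flash default for unmapped models is an assumption here; the health-check result feeding `ollama_healthy` is cached with the 5s TTL noted above.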

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 22:44:09 +01:00
Till-JS
aba79f5c16 fix(mana-llm): fix SSE double data prefix causing message parsing issues
EventSourceResponse from sse-starlette adds its own 'data:' prefix, so
the streaming generator has to yield dicts with a 'data' key rather than
pre-formatted SSE strings. Yielding pre-formatted strings was producing
'data: data:' double prefixes and stray backticks in chat messages.
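
A sketch of the before/after, assuming sse-starlette's EventSourceResponse wrapping an OpenAI-style chunk generator:

```python
import json

from sse_starlette.sse import EventSourceResponse


async def stream_chat(chunks):
    async def event_stream():
        async for chunk in chunks:
            # Before: yield f"data: {json.dumps(chunk)}\n\n"  -> "data: data: ..." on the wire
            # After: yield a dict and let EventSourceResponse add the single prefix
            yield {"data": json.dumps(chunk)}
        yield {"data": "[DONE]"}  # OpenAI-style stream terminator

    return EventSourceResponse(event_stream())
```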

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 15:29:11 +01:00
Till-JS
fdba0e3425 feat(llm-playground): add production deployment with auth
- Add multi-stage Dockerfile
- Add mana-core-auth integration with login/register pages
- Add auth store using Svelte 5 runes
- Add protected route layout with auth guard
- Add health endpoint for container health checks
- Add runtime URL injection via hooks.server.ts
- Add logout button to header
- Update docker-compose.macmini.yml with llm-playground service
- Update cloudflared-config.yml with playground.mana.how route
- Update mana-llm CORS config for playground domain
- Update generate-env.mjs with auth URL variable

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 18:15:02 +01:00
Till-JS
3edbd0cb26 chore: update dependencies and improve mana-llm
- Update pnpm-lock.yaml with matrix bot dependencies
- Add environment variables to generate-env.mjs
- Improve mana-llm config and ollama provider

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:50:58 +01:00
Till-JS
1495dbe476 feat(mana-llm): add central LLM abstraction service
Python/FastAPI service providing a unified OpenAI-compatible API for
Ollama and cloud LLM providers (OpenRouter, Groq, Together).

Features:
- Chat completions with streaming (SSE)
- Vision/multimodal support
- Embeddings generation
- Multi-provider routing (provider/model format)
- Prometheus metrics
- Optional Redis caching
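
For orientation, a call using the provider/model routing format might look like this; the base URL and port are placeholders, and the endpoint path assumes the standard OpenAI-compatible layout:

```python
import requests

resp = requests.post(
    "http://localhost:8100/v1/chat/completions",  # placeholder host/port
    json={
        "model": "ollama/gemma3:4b",  # provider/model routing format
        "messages": [{"role": "user", "content": "Say hi in one word."}],
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```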
2026-01-29 22:01:00 +01:00