mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-14 18:01:09 +02:00
feat(llm-aliases): M5 — migrate consumers to MANA_LLM aliases
Final milestone of docs/plans/llm-fallback-aliases.md. Every backend
caller now requests models via the `mana/<class>` alias system instead
of hardcoded `ollama/...` strings. mana-llm resolves aliases through
`services/mana-llm/aliases.yaml` with health-aware fallback (M3) and
emits resolved-model + fallback metrics (M4).
SSOT moved to `packages/shared-ai/src/llm-aliases.ts` so apps/api,
apps/mana/apps/web, and services/mana-ai all import the same
`MANA_LLM` constant via the existing `@mana/shared-ai` workspace
dependency. Three additional sites (memoro-server, mana-events,
mana-research) inline the alias string with an SSOT comment because
they don't pull @mana/shared-ai today.
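As a minimal sketch, the SSOT module might export something like the following (the class names come from the migration list in this commit; the concrete `mana/<class>` spellings are assumptions, not taken from the diff):

```typescript
// Hypothetical shape of packages/shared-ai/src/llm-aliases.ts.
// Alias string values are illustrative, not confirmed by this commit.
export const MANA_LLM = {
  FAST_TEXT: "mana/fast-text",
  STRUCTURED: "mana/structured",
  LONG_FORM: "mana/long-form",
  REASONING: "mana/reasoning",
  VISION: "mana/vision",
} as const;

// Union of the alias strings, for typing request payloads.
export type ManaLlmAlias = (typeof MANA_LLM)[keyof typeof MANA_LLM];
```

Consumers import the constant instead of spelling provider/model strings, so a rename touches one file.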
Migrated 14 sites across 10 files:
- apps/api: writing(LONG_FORM), comic(STRUCTURED), context(FAST_TEXT),
food(VISION), plants(VISION), research orchestrator (3 tiers
collapsed to STRUCTURED+FAST_TEXT/LONG_FORM)
- apps/mana/apps/web: voice/parse-task + parse-habit (STRUCTURED)
- services/mana-ai: planner llm-client + tick.ts (REASONING)
- services/mana-events: website-extractor (STRUCTURED, inlined)
- services/mana-research: mana-llm client (FAST_TEXT, inlined)
- apps/memoro/apps/server: ai.ts (FAST_TEXT, inlined)
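A migrated call site changes roughly like this (the request shape and helper name are hypothetical, shown only to illustrate the swap from hardcoded strings to aliases):

```typescript
// Assumed SSOT constant (value illustrative); real code imports it
// from @mana/shared-ai.
const MANA_LLM = { FAST_TEXT: "mana/fast-text" } as const;

// Before M5 (hardcoded provider/model string):
//   const res = await llmChat({ model: "ollama/gemma3:4b", prompt });

// After M5: request the class alias; mana-llm resolves it to a
// concrete model via its health-aware chain in aliases.yaml.
function buildRequest(prompt: string) {
  return { model: MANA_LLM.FAST_TEXT, prompt };
}
```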
Legacy env-vars removed: WRITING_MODEL, COMIC_STORYBOARD_MODEL,
VISION_MODEL, MANA_LLM_DEFAULT_MODEL. The chain in aliases.yaml is
now the single tuning surface; SIGHUP reloads it without redeploys.
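Hypothetically, one chain in that file might look like this (keys and model names are illustrative; only the chain-per-class idea is from this work):

```yaml
# services/mana-llm/aliases.yaml — sketch, not the real file.
# mana-llm walks each chain top-down, skipping unhealthy backends (M3).
fast-text:
  chain:
    - ollama/gemma3:4b     # preferred local model (illustrative)
    - ollama/qwen2.5:3b    # fallback if the primary is down (illustrative)
```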
New `scripts/validate-llm-strings.mjs` regex-scans 2538 files for
hardcoded `<provider>/<model>` strings and fails the build if any
land outside the SSOT or the explicitly-allowed paths (image-gen
modules, model-inspector code, this validator itself, the registry).
Wired into `validate:all` next to the i18n + theme validators.
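The core of such a scan can be sketched in a few lines (the actual regex and allowlist handling in scripts/validate-llm-strings.mjs are assumptions here):

```typescript
// Sketch of a hardcoded-model detector: flags quoted
// `<provider>/<model>` strings; `mana/<class>` aliases pass.
// The provider list is illustrative, not the validator's real one.
const HARDCODED = /["'`](?:ollama|openai|anthropic|google)\/[A-Za-z0-9._:-]+["'`]/;

function findHardcodedModel(source: string): string | null {
  const match = source.match(HARDCODED);
  return match ? match[0] : null;
}
```

In the real script this would run per file, with matches outside the allowed paths failing the build.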
Verified: `pnpm validate:llm-strings` clean, `pnpm --filter @mana/api
type-check` clean, `pnpm --filter @mana/ai-service type-check`
clean. Web type-check has 2 pre-existing errors in
SettingsSidebar.svelte (i18n MessageFormatter type drift, last
touched in 988c17a67 — unrelated to this work).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
parent: 8a49e3ffd5
commit: fea3adf5fe
19 changed files with 299 additions and 50 deletions
@@ -176,7 +176,10 @@ OLLAMA_URL=http://localhost:11434
 # the GPU LLM proxy (gpu-llm.mana.how).
 MANA_LLM_URL=https://llm.mana.how
 MANA_LLM_API_KEY=
-MANA_LLM_DEFAULT_MODEL=gemma3:4b
+# Legacy: MANA_LLM_DEFAULT_MODEL / WRITING_MODEL / COMIC_STORYBOARD_MODEL
+# / VISION_MODEL — removed in M5 of llm-fallback-aliases. Backend code
+# now requests `mana/<class>` aliases (see packages/shared-ai/src/llm-
+# aliases.ts) which mana-llm resolves via services/mana-llm/aliases.yaml.

 # mana-research — unified research orchestration (port 3068). Fronts
 # search + extract + sync/async research agents behind one API. mana-ai

@@ -523,9 +526,6 @@ GPU_API_KEY=sk-gpu-cf483ede1e05e28fba5e56c94cd3c24e7c245e57816d3e86
 GPU_SERVER_URL=https://gpu.mana.how
 GPU_SERVER_LAN_URL=http://192.168.178.11

-# Vision Model for Food + Planta (local, replaces Google Gemini)
-VISION_MODEL=ollama/gemma3:12b
-
 # ============================================
 # MANA-MAIL SERVICE (Port 3042)
 # ============================================
|||