feat(llm-aliases): M5 — migrate consumers to MANA_LLM aliases

Final milestone of docs/plans/llm-fallback-aliases.md. Every backend
caller now requests models via the `mana/<class>` alias system instead
of hardcoded `ollama/...` strings. mana-llm resolves aliases through
`services/mana-llm/aliases.yaml` with health-aware fallback (M3) and
emits resolved-model + fallback metrics (M4).

SSOT moved to `packages/shared-ai/src/llm-aliases.ts` so apps/api,
apps/mana/apps/web, and services/mana-ai all import the same
`MANA_LLM` constant via the existing `@mana/shared-ai` workspace
dependency. Three additional sites (memoro-server, mana-events,
mana-research) inline the alias string with an SSOT comment because
they don't pull in @mana/shared-ai today.
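A minimal sketch of what the SSOT constant could look like. The five class names come from this commit's migration list; the exact alias string format (`mana/<class>`) and member ordering are assumptions, not the actual file contents:

```typescript
// packages/shared-ai/src/llm-aliases.ts (illustrative sketch, not the real file).
// Single source of truth for model aliases: consumers request a class,
// the mana-llm gateway resolves it to a concrete, healthy model.
export const MANA_LLM = {
  FAST_TEXT: 'mana/fast-text',   // terse summaries, cheap completions
  STRUCTURED: 'mana/structured', // JSON-emitting planners and parsers
  LONG_FORM: 'mana/long-form',   // richer prose synthesis
  VISION: 'mana/vision',         // multimodal image analysis
  REASONING: 'mana/reasoning',   // planner/agent loops
} as const;

export type ManaLlmClass = keyof typeof MANA_LLM;
```

The `as const` assertion keeps each value a string literal type, so call sites get exact alias strings rather than a widened `string`.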

Migrated 14 sites across 10 files:
- apps/api: writing(LONG_FORM), comic(STRUCTURED), context(FAST_TEXT),
  food(VISION), plants(VISION), research orchestrator (3 tiers
  collapsed to STRUCTURED+FAST_TEXT/LONG_FORM)
- apps/mana/apps/web: voice/parse-task + parse-habit (STRUCTURED)
- services/mana-ai: planner llm-client + tick.ts (REASONING)
- services/mana-events: website-extractor (STRUCTURED, inlined)
- services/mana-research: mana-llm client (FAST_TEXT, inlined)
- apps/memoro/apps/server: ai.ts (FAST_TEXT, inlined)

Legacy env-vars removed: WRITING_MODEL, COMIC_STORYBOARD_MODEL,
VISION_MODEL, MANA_LLM_DEFAULT_MODEL. The chain in aliases.yaml is
now the single tuning surface; SIGHUP reloads it without redeploys.
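For orientation, a chain in aliases.yaml might look roughly like this; the schema below is an assumption (only the file path, the SIGHUP reload, and the two provider strings appear elsewhere in this commit):

```yaml
# services/mana-llm/aliases.yaml (illustrative sketch; real schema may differ).
# Each alias lists a fallback chain; the gateway picks the first healthy
# entry and re-reads this file on SIGHUP, so tuning needs no redeploy.
aliases:
  fast-text:
    - ollama/gemma3:4b
    - openrouter/meta-llama/llama-3.1-70b-instruct
  vision:
    - ollama/gemma3:4b
```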

New `scripts/validate-llm-strings.mjs` regex-scans 2538 files for
hardcoded `<provider>/<model>` strings and fails the build if any
land outside the SSOT or the explicitly-allowed paths (image-gen
modules, model-inspector code, this validator itself, the registry).
Wired into `validate:all` next to the i18n + theme validators.
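The validator's core check can be sketched as below. The regex, the helper name `violations`, and the allowlist entries are illustrative assumptions; the real script is `scripts/validate-llm-strings.mjs`:

```typescript
// Sketch of the hardcoded-model check (assumed shape, not the real script).
// Matches provider-prefixed model strings like 'ollama/gemma3:4b'.
const HARDCODED_MODEL = /\b(?:ollama|openrouter)\/[\w.\-:\/]+/;

// Paths allowed to contain concrete provider/model strings
// (the registry, the validator itself, etc.).
const ALLOWED_PATHS = [
  'services/mana-llm/aliases.yaml',
  'scripts/validate-llm-strings.mjs',
];

// Returns the offending lines of one file, or [] if the file is exempt.
function violations(path: string, source: string): string[] {
  if (ALLOWED_PATHS.some((p) => path.endsWith(p))) return [];
  return source.split('\n').filter((line) => HARDCODED_MODEL.test(line));
}
```

A build step would walk the tree, collect `violations()` per file, and exit non-zero if any survive.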

Verified: `pnpm validate:llm-strings` clean, `pnpm --filter @mana/api
type-check` clean, `pnpm --filter @mana/ai-service type-check`
clean. Web type-check has 2 pre-existing errors in
SettingsSidebar.svelte (i18n MessageFormatter type drift, last
touched in 988c17a67 — unrelated to this work).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Till JS 2026-04-26 21:26:03 +02:00
parent 8a49e3ffd5
commit fea3adf5fe
19 changed files with 299 additions and 50 deletions

@@ -10,8 +10,11 @@
  * the full concatenated text at the end. Used for synthesis.
  *
  * mana-llm exposes an OpenAI-compatible /v1/chat/completions endpoint
- * (see services/mana-llm). Models are namespaced as `provider/model`, e.g.
- * `ollama/gemma3:4b`, `openrouter/meta-llama/llama-3.1-70b-instruct`.
+ * (see services/mana-llm). Callers should request models via the
+ * `MANA_LLM.<class>` aliases from `./llm-aliases`; the gateway resolves
+ * them through `services/mana-llm/aliases.yaml` with health-aware
+ * fallback. Concrete provider/model strings are reserved for the
+ * registry itself.
  *
  * Internal service-to-service calls: no auth on the wire (private network).
  */

@@ -27,9 +27,10 @@
 import { Hono } from 'hono';
 import { llmJson, LlmError } from '../../lib/llm';
+import { MANA_LLM } from '@mana/shared-ai';
 import { logger, type AuthVariables } from '@mana/shared-hono';
-const STORYBOARD_MODEL = process.env.COMIC_STORYBOARD_MODEL || 'ollama/gemma3:4b';
+const STORYBOARD_MODEL = MANA_LLM.STRUCTURED;
 type ComicStyle = 'comic' | 'manga' | 'cartoon' | 'graphic-novel' | 'webtoon';

@@ -8,10 +8,11 @@
 import { Hono } from 'hono';
 import { consumeCredits, validateCredits } from '@mana/shared-hono/credits';
 import type { AuthVariables } from '@mana/shared-hono';
+import { MANA_LLM } from '@mana/shared-ai';
 const LLM_URL = process.env.MANA_LLM_URL || 'http://localhost:3025';
 const CRAWLER_URL = process.env.MANA_CRAWLER_URL || 'http://localhost:3023';
-const DEFAULT_SUMMARY_MODEL = process.env.MANA_LLM_DEFAULT_MODEL || 'gemma3:4b';
+const DEFAULT_SUMMARY_MODEL = MANA_LLM.FAST_TEXT;
 const routes = new Hono<{ Variables: AuthVariables }>();
@@ -231,7 +232,7 @@ routes.post('/ai/generate', async (c) => {
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({
         messages,
-        model: model || 'gemma3:4b',
+        model: model || MANA_LLM.FAST_TEXT,
         max_tokens: maxTokens || 2000,
       }),
     });
@@ -245,7 +246,7 @@ routes.post('/ai/generate', async (c) => {
     // Consume credits
     await consumeCredits(userId, 'AI_CONTEXT_GENERATE', 5, `AI generation (${tokensUsed} tokens)`);
-    return c.json({ content, tokensUsed, model: model || 'gemma3:4b' });
+    return c.json({ content, tokensUsed, model: model || MANA_LLM.FAST_TEXT });
   } catch (_err) {
     return c.json({ error: 'Generation failed' }, 500);
   }

@@ -30,14 +30,13 @@ import {
   type MealAnalysis,
 } from '@mana/shared-types';
 import { logger, type AuthVariables } from '@mana/shared-hono';
+import { MANA_LLM } from '@mana/shared-ai';
 const LLM_URL = process.env.MANA_LLM_URL || 'http://localhost:3025';
-// mana-llm parses model strings as `provider/model` (router.py:_parse_model).
-// Default to Gemma 3 (4B, multimodal) on the local Ollama instance — it
-// runs on the GPU server (192.168.178.11) via the gpu-proxy bridge and
-// supports vision out of the box. Override with VISION_MODEL=google/gemini-2.0-flash
-// (or similar) once mana-llm has GOOGLE_API_KEY configured.
-const VISION_MODEL = process.env.VISION_MODEL || 'ollama/gemma3:4b';
+// mana-llm resolves this alias to a healthy vision model (chain in
+// services/mana-llm/aliases.yaml). To swap the chain, edit the YAML
+// and SIGHUP — no service redeploy here.
+const VISION_MODEL = MANA_LLM.VISION;
 const llm = createOpenAICompatible({
   name: 'mana-llm',

@@ -19,11 +19,10 @@ import {
   type PlantIdentification,
 } from '@mana/shared-types';
 import { logger, type AuthVariables } from '@mana/shared-hono';
+import { MANA_LLM } from '@mana/shared-ai';
 const LLM_URL = process.env.MANA_LLM_URL || 'http://localhost:3025';
-// See food/routes.ts for the rationale on the default model and
-// the /v1 base URL.
-const VISION_MODEL = process.env.VISION_MODEL || 'ollama/gemma3:4b';
+const VISION_MODEL = MANA_LLM.VISION;
 const llm = createOpenAICompatible({
   name: 'mana-llm',

@@ -18,9 +18,15 @@
 import { eq } from 'drizzle-orm';
 import { db, researchResults, sources, type ResearchDepth } from './schema';
 import { llmJson, llmStream, LlmError } from '../../lib/llm';
+import { MANA_LLM } from '@mana/shared-ai';
 import { webSearch, bulkExtract, type SearchHit, SearchError } from '../../lib/search';
 // ─── Depth configuration ────────────────────────────────────
+//
+// `planModel` is always `STRUCTURED` (the planner emits JSON).
+// `synthModel` varies by depth: `quick` runs through `FAST_TEXT` for a
+// terse summary, `standard`/`deep` use `LONG_FORM` for richer prose.
+// Concrete provider/model selection lives in services/mana-llm/aliases.yaml.
 interface DepthConfig {
   subQueryCount: number;
@@ -39,8 +45,8 @@ const DEPTH_CONFIG: Record<ResearchDepth, DepthConfig> = {
     maxSources: 5,
     extract: false,
     categories: ['general'],
-    planModel: 'ollama/gemma3:4b',
-    synthModel: 'ollama/gemma3:4b',
+    planModel: MANA_LLM.STRUCTURED,
+    synthModel: MANA_LLM.FAST_TEXT,
   },
   standard: {
     subQueryCount: 3,
@@ -48,8 +54,8 @@ const DEPTH_CONFIG: Record<ResearchDepth, DepthConfig> = {
     maxSources: 15,
     extract: true,
     categories: ['general', 'news'],
-    planModel: 'ollama/gemma3:4b',
-    synthModel: 'ollama/gemma3:12b',
+    planModel: MANA_LLM.STRUCTURED,
+    synthModel: MANA_LLM.LONG_FORM,
   },
   deep: {
     subQueryCount: 6,
@@ -57,8 +63,8 @@ const DEPTH_CONFIG: Record<ResearchDepth, DepthConfig> = {
     maxSources: 30,
     extract: true,
     categories: ['general', 'news', 'science', 'it'],
-    planModel: 'ollama/gemma3:12b',
-    synthModel: 'ollama/gemma3:12b',
+    planModel: MANA_LLM.STRUCTURED,
+    synthModel: MANA_LLM.LONG_FORM,
   },
 };

@@ -17,9 +17,10 @@
 import { Hono } from 'hono';
 import { llmText, LlmError } from '../../lib/llm';
+import { MANA_LLM } from '@mana/shared-ai';
 import { logger, type AuthVariables } from '@mana/shared-hono';
-const DEFAULT_MODEL = process.env.WRITING_MODEL || 'ollama/gemma3:4b';
+const DEFAULT_MODEL = MANA_LLM.LONG_FORM;
 /** Hard cap so a runaway briefing can't burn unlimited tokens. */
 const MAX_OUTPUT_TOKENS = 8000;