Mirror of https://github.com/Memo-2023/mana-monorepo.git, synced 2026-05-14 19:01:08 +02:00
feat(mana/web): pass MANA_LLM_API_KEY from voice parse proxies
The /api/v1/voice/parse-task and /api/v1/voice/parse-habit endpoints forwarded transcripts to mana-llm without an X-API-Key header. This worked against the local mana-llm container (no auth) but silently fell back to the no-LLM path when pointed at gpu-llm.mana.how, which requires an API key: voice quick-add would look like it was running in degraded mode forever, with no signal that auth was the cause.

Now both endpoints read MANA_LLM_API_KEY from the server-side env and attach it as X-API-Key when present, mirroring the pattern already used by /api/v1/voice/transcribe for mana-stt. When the var is empty the header is omitted, so local Docker setups without auth still work.

Plumbing: generate-env.mjs writes MANA_LLM_URL + MANA_LLM_API_KEY into apps/mana/apps/web/.env, .env.development gets the new keys with empty defaults, and ENVIRONMENT_VARIABLES.md documents the gateway and where to get a key.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
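
As a rough sketch of the pattern the message describes (the helper name and request shape are assumptions, not the repo's actual code):

```ts
// Sketch of the fix: forward a voice transcript to mana-llm, attaching
// X-API-Key only when MANA_LLM_API_KEY is set, so unauthenticated local
// Docker setups keep working. Names here are hypothetical.
const MANA_LLM_URL = process.env.MANA_LLM_URL ?? "http://localhost:3025";
const MANA_LLM_API_KEY = process.env.MANA_LLM_API_KEY ?? "";

async function forwardToManaLlm(path: string, body: unknown): Promise<Response> {
  const headers: Record<string, string> = { "content-type": "application/json" };
  if (MANA_LLM_API_KEY) {
    // Only attach the header when a key is configured; when the var is
    // empty it is omitted entirely.
    headers["X-API-Key"] = MANA_LLM_API_KEY;
  }
  return fetch(`${MANA_LLM_URL}${path}`, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
  });
}
```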
parent 2514831a3b
commit 029c7973ef
5 changed files with 41 additions and 2 deletions
ENVIRONMENT_VARIABLES.md
@@ -153,6 +153,28 @@ curl https://gpu-stt.mana.how/health
If this returns 502, see "GPU Tunnel" in `docs/MAC_MINI_SERVER.md` for the standard
debug ladder.

### LLM gateway (mana-llm)

Used by the unified Mana web app's voice quick-add features to turn transcripts into structured
data: `/api/v1/voice/parse-task` (todo titles + due dates + priorities) and `/api/v1/voice/parse-habit`
(habit picker for voice logging). Both proxies live server-side and degrade gracefully — if
mana-llm is unreachable or unauthorized, the endpoints return a fallback shape and voice quick-add
still works, just without LLM enrichment.
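
A minimal sketch of that graceful-degradation contract (the fallback shape and the upstream `/v1/parse-task` path are assumptions, reusing the hypothetical `forwardToManaLlm` helper from the sketch above):

```ts
// On any upstream failure (unreachable, 401/403, bad JSON) return a
// fallback shape instead of throwing. The field names and the
// /v1/parse-task path are assumptions, not the real endpoint contract.
type ParseTaskResult =
  | { source: "llm"; title: string; dueDate?: string; priority?: string }
  | { source: "fallback"; title: string };

async function parseTask(transcript: string): Promise<ParseTaskResult> {
  try {
    const res = await forwardToManaLlm("/v1/parse-task", { transcript });
    if (!res.ok) throw new Error(`mana-llm returned ${res.status}`);
    return { source: "llm", ...(await res.json()) };
  } catch {
    // Voice quick-add still works: the raw transcript becomes the title,
    // just without LLM enrichment (due dates, priorities).
    return { source: "fallback", title: transcript.trim() };
  }
}
```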

| Variable | Description | Default |
|----------|-------------|---------|
| `MANA_LLM_URL` | mana-llm gateway URL (server-side, never exposed) | `http://localhost:3025` |
| `MANA_LLM_API_KEY` | API key — required when pointing at the GPU LLM proxy. **Never commit a real value.** | _(empty)_ |
| `PUBLIC_MANA_LLM_URL` | Same URL exposed to the browser for direct use (status page, playground) | mirrors `MANA_LLM_URL` |

**Local dev**: leave `MANA_LLM_URL=http://localhost:3025` and run mana-llm in Docker. If your local
mana-llm has no models loaded (`curl http://localhost:3025/v1/models` returns `{"data":[]}`), point
at the public proxy with `MANA_LLM_URL=https://gpu-llm.mana.how` and set `MANA_LLM_API_KEY` to a key
from `services/mana-llm/.env` on the GPU box.

**Endpoints:** `http://localhost:3025` (Docker), `https://llm.mana.how` (Mac Mini, no auth),
`https://gpu-llm.mana.how` (GPU server, X-API-Key required).
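
If you are unsure which gateway you are pointed at, a quick probe along these lines can confirm connectivity and auth (assuming the OpenAI-style `/v1/models` route mentioned under local dev):

```ts
// Probe the configured mana-llm gateway: 200 means reachable (and, for
// gpu-llm.mana.how, that the key is accepted); 401/403 is the auth failure
// that the voice parse proxies used to swallow silently.
const url = process.env.MANA_LLM_URL ?? "http://localhost:3025";
const key = process.env.MANA_LLM_API_KEY;

const res = await fetch(`${url}/v1/models`, {
  headers: key ? { "X-API-Key": key } : {},
});
console.log(res.status, await res.text());
```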

## Adding New Variables

### Step 1: Add to `.env.development`