Mirror of https://github.com/Memo-2023/mana-monorepo.git, synced 2026-05-14 17:41:09 +02:00
chore(env): default MANA_LLM_URL to llm.mana.how
Same convention as STT_URL — nobody runs mana-llm in local Docker for dev work, the shared gateway is always reachable, so the path of least friction is to point at it by default. Devs who want a fully offline stack can still override the var locally. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent b505024f7b
commit 68e8897c9c
1 changed file with 6 additions and 2 deletions
@@ -169,8 +169,12 @@ OLLAMA_URL=http://localhost:11434
 
 # mana-llm (OpenAI-compatible gateway, port 3025 locally / llm.mana.how prod)
 # Used by server-side voice quick-add proxies (parse-task, parse-habit).
-# API key is required when pointing at the GPU LLM proxy (gpu-llm.mana.how).
-MANA_LLM_URL=http://localhost:3025
+# Defaults to the shared dev gateway because nobody runs mana-llm in
+# local Docker — same convention as STT_URL above. If you want a fully
+# offline local stack, override this to http://localhost:3025 and run
+# `docker compose up mana-llm`. API key is required when pointing at
+# the GPU LLM proxy (gpu-llm.mana.how).
+MANA_LLM_URL=https://llm.mana.how
 MANA_LLM_API_KEY=
 
 # ============================================
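The override convention this commit documents can be sketched as a small shell fragment — the `${VAR:-default}` expansion mirrors how the `.env` default interacts with a locally exported value (variable names and URLs are taken from the diff; the consuming code's actual lookup logic is not shown here and may differ):

```shell
# If MANA_LLM_URL is unset or empty, fall back to the shared dev gateway
# (the new default from this commit); an exported local value wins.
MANA_LLM_URL="${MANA_LLM_URL:-https://llm.mana.how}"
echo "$MANA_LLM_URL"
```

For a fully offline stack, exporting `MANA_LLM_URL=http://localhost:3025` before starting the app (and running `docker compose up mana-llm`, per the diff's comment) takes precedence over the default.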