From 68e8897c9c807b7291c7e409355cace092175ece Mon Sep 17 00:00:00 2001
From: Till JS
Date: Wed, 8 Apr 2026 16:55:01 +0200
Subject: [PATCH] chore(env): default MANA_LLM_URL to llm.mana.how
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Same convention as STT_URL — nobody runs mana-llm in local Docker for
dev work, the shared gateway is always reachable, so the path of least
friction is to point at it by default. Devs who want a fully offline
stack can still override the var locally.

Co-Authored-By: Claude Opus 4.6 (1M context)
---
 .env.development | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.env.development b/.env.development
index 8f4e69db1..8c3af6ca2 100644
--- a/.env.development
+++ b/.env.development
@@ -169,8 +169,12 @@ OLLAMA_URL=http://localhost:11434
 # mana-llm (OpenAI-compatible gateway, port 3025 locally / llm.mana.how prod)
 # Used by server-side voice quick-add proxies (parse-task, parse-habit).
-# API key is required when pointing at the GPU LLM proxy (gpu-llm.mana.how).
-MANA_LLM_URL=http://localhost:3025
+# Defaults to the shared dev gateway because nobody runs mana-llm in
+# local Docker — same convention as STT_URL above. If you want a fully
+# offline local stack, override this to http://localhost:3025 and run
+# `docker compose up mana-llm`. API key is required when pointing at
+# the GPU LLM proxy (gpu-llm.mana.how).
+MANA_LLM_URL=https://llm.mana.how
 MANA_LLM_API_KEY=

 # ============================================