fix(research): use /v1/chat/completions for mana-llm (not /api/v1/)

End-to-end testing surfaced a 404 from the synth path. mana-llm
(services/mana-llm/src/main.py) mounts the OpenAI-compatible API at
/v1/* — there's no /api prefix.
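
For reference, a throwaway probe along these lines reproduces the
mismatch (the probe is not in the repo; the base URL and the expected
status codes are assumptions about a locally running mana-llm):

// Hypothetical probe, not part of the repo. Assumes global fetch
// (Node 18+) and that mana-llm listens at LLM_URL; the default below
// is a guess.
const base = process.env.LLM_URL ?? 'http://localhost:8000';

for (const path of ['/api/v1/chat/completions', '/v1/chat/completions']) {
  const res = await fetch(`${base}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'ollama/gemma3:4b',
      messages: [{ role: 'user', content: 'ping' }],
    }),
  });
  console.log(path, '->', res.status); // expected: 404 for /api/v1, 200 for /v1
}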

The first quick-depth e2e run only worked because the planner is
skipped at quick depth (the question itself is used as the plan), so
llmJson never fired. Only llmStream did; the streaming path used the
same wrong prefix, but that run happened to land before this was caught.

The other apps/api modules (chat, guides, context, traces) all use the
wrong /api/v1/ path too; those are separate, pre-existing bugs to be
addressed in their own commits.

Verified end-to-end with a standard-depth research run against mana-llm
pointed at the GPU server's ollama (gemma3:4b and 12b): plan +
retrieve + extract + synth all succeed.
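
For orientation, the patched helpers are called roughly like this (a
sketch: the exact fields of LlmJsonOptions/LlmStreamOptions beyond
model/system/user/onToken are assumptions drawn from the header
comments below, and `question`/`synthesisPrompt` are placeholders):

// Sketch only. llmJson/llmStream are the functions patched below;
// the option shapes are assumptions, not the repo's actual types.
const plan = await llmJson<{ queries: string[] }>({
  model: 'ollama/gemma3:4b',
  system: 'You are a research planner. Reply with JSON.',
  user: question,
});

const answer = await llmStream({
  model: 'ollama/gemma3:12b',
  user: synthesisPrompt,
  onToken: (t) => process.stdout.write(t), // per-delta callback; full text returned
});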

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@@ -9,7 +9,7 @@
* llmStream() streaming, calls onToken() for each delta and returns
* the full concatenated text at the end. Used for synthesis.
*
- * mana-llm exposes an OpenAI-compatible /api/v1/chat/completions endpoint
+ * mana-llm exposes an OpenAI-compatible /v1/chat/completions endpoint
* (see services/mana-llm). Models are namespaced as `provider/model`, e.g.
* `ollama/gemma3:4b`, `openrouter/meta-llama/llama-3.1-70b-instruct`.
*
@@ -66,7 +66,7 @@ function buildMessages(system: string | undefined, user: string): LlmMessage[] {
* Throws LlmError on transport/HTTP failure or if the body isn't valid JSON.
*/
export async function llmJson<T = unknown>(opts: LlmJsonOptions): Promise<T> {
-  const res = await fetch(`${LLM_URL}/api/v1/chat/completions`, {
+  const res = await fetch(`${LLM_URL}/v1/chat/completions`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
@@ -109,7 +109,7 @@ export async function llmJson<T = unknown>(opts: LlmJsonOptions): Promise<T> {
* sentinel `data: [DONE]`.
*/
export async function llmStream(opts: LlmStreamOptions): Promise<string> {
-  const res = await fetch(`${LLM_URL}/api/v1/chat/completions`, {
+  const res = await fetch(`${LLM_URL}/v1/chat/completions`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
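
For completeness, the `data: [DONE]` sentinel mentioned in the header
above is consumed roughly like this (a sketch assuming OpenAI-style SSE
chunks; mana-llm's exact delta shape is an assumption, and drainSse is
a hypothetical helper, not the repo's actual reader):

// Sketch only: drains an SSE body, forwarding each delta and
// returning the concatenated text, as the header comment describes.
async function drainSse(res: Response, onToken?: (t: string) => void): Promise<string> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buf = '';
  let text = '';
  outer: for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    let nl: number;
    while ((nl = buf.indexOf('\n')) >= 0) {
      const line = buf.slice(0, nl).trim();
      buf = buf.slice(nl + 1);
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice('data: '.length);
      if (payload === '[DONE]') break outer; // the sentinel named above
      // Assumed OpenAI-style chunk shape: choices[0].delta.content.
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content ?? '';
      if (delta) {
        onToken?.(delta);
        text += delta;
      }
    }
  }
  return text;
}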