fix(mana-image-gen): align source default port with production reality

The source default was 3026, but Mac Mini production has been overriding it
to 3025 via the launchd plist installed by
scripts/mac-mini/setup-image-gen.sh ever since the service was set up. That
override lived in exactly one place, and not one that is version-controlled
in any obvious way: anyone redeploying without that script would land on
3026, and clients pointing at 3025 would fail to connect.

Source default → 3025 across main.py, setup.sh, README, CLAUDE.md so the
launchd plist is no longer load-bearing. The Mac Mini setup script still
sets PORT=3025 explicitly; that's now belt-and-suspenders rather than the
only thing keeping production alive.
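Mechanically this is just the standard getenv fallback now baked into main.py; a minimal sketch of the resolution order (the `resolve_port` helper is illustrative, not code from the service):

```python
def resolve_port(env: dict, default: int = 3025) -> int:
    """Mirror main.py's PORT lookup, int(os.getenv("PORT", "3025")):
    an explicit PORT env var wins, otherwise the source default applies."""
    return int(env.get("PORT", str(default)))

# The launchd plist still sets PORT=3025 explicitly...
assert resolve_port({"PORT": "3025"}) == 3025
# ...but a redeploy without the plist now lands on 3025 as well.
assert resolve_port({}) == 3025
```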

Also added a note clarifying that this Mac Mini service (flux2.c, MPS,
arm64-only) is *not* the same thing as the "image-gen" running on the
Windows GPU server (PyTorch + diffusers + CUDA, port 3023, code lives at
C:\mana\services\mana-image-gen\ outside this repo). Two different
implementations sharing a name was confusing the port-collision audit.

Updated docs/PORT_SCHEMA.md warning block to retract the previous false
claims of two active port collisions:

  - image-gen ↔ video-gen on 3026 — wrong: image-gen runs on Mac Mini
    on 3025 (now also the source default), video-gen is alone on the
    Windows GPU on 3026
  - voice-bot ↔ sync on 3050 — latent only: mana-voice-bot is not
    deployed anywhere (no launchd, no scheduled task, no cloudflared
    route), so the collision is in source defaults but not in production

The voice-bot 3050 default should still be moved before voice-bot is
ever deployed — flagged in the PORT_SCHEMA warning instead of silently
fixed since voice-bot deployment is its own decision.
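The active-vs-latent distinction the audit now draws can be sketched as a check over source defaults and actual deployments; the service table below is illustrative, not a canonical registry:

```python
# (source default port, deployed port or None if not deployed anywhere)
services = {
    "mana-image-gen (Mac Mini)": (3025, 3025),
    "mana-video-gen (GPU box)":  (3026, 3026),
    "mana-voice-bot":            (3050, None),   # default only, not deployed
    "mana-sync":                 (3050, 3050),
}

def collisions(svcs):
    """Group services by source-default port. A collision is *active* when
    two deployed services share a port, *latent* when only defaults clash."""
    by_port = {}
    for name, (default, deployed) in svcs.items():
        by_port.setdefault(default, []).append((name, deployed is not None))
    active, latent = [], []
    for port, members in sorted(by_port.items()):
        if len(members) < 2:
            continue
        if sum(live for _, live in members) >= 2:
            active.append(port)
        else:
            latent.append(port)
    return active, latent

# After this change: no active collisions; 3050 stays latent until
# voice-bot is either moved or deployed.
active, latent = collisions(services)
```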
Author: Till JS, 2026-04-08 12:30:33 +02:00
parent b0a08ce239
commit 3c91691d26
5 changed files with 58 additions and 30 deletions


@@ -4,11 +4,19 @@
AI image generation microservice using FLUX.2 klein 4B model via flux2.c:
-- **Port**: 3026
+- **Port**: 3025
- **Host**: Mac Mini only — `setup.sh` hard-fails on anything other than macOS arm64
- **Framework**: Python + FastAPI
- **Model**: FLUX.2 klein 4B (Black Forest Labs)
- **Backend**: flux2.c (Pure C, MPS accelerated)
+> ⚠️ **Two image-gen services exist with the same name.** This one is the
+> Mac Mini implementation in the repo (flux2.c, MPS, Apple Silicon only).
+> The Windows GPU server runs a *separate* image-gen on `gpu-img.mana.how`
+> (port 3023, PyTorch + diffusers + CUDA) whose code lives outside the
+> repo at `C:\mana\services\mana-image-gen\` on the GPU box. See
+> `docs/WINDOWS_GPU_SERVER_SETUP.md` for that one.
## Features
- **Sub-second generation** on Apple Silicon (M4)
@@ -26,14 +34,14 @@ AI image generation microservice using FLUX.2 klein 4B model via flux2.c:
# Development
source .venv/bin/activate
FLUX_BINARY=/opt/flux2/flux FLUX_MODEL_DIR=/opt/flux2/model \
-uvicorn app.main:app --host 0.0.0.0 --port 3026 --reload
+uvicorn app.main:app --host 0.0.0.0 --port 3025 --reload
# Production
../../scripts/mac-mini/setup-image-gen.sh
# Test
-curl http://localhost:3026/health
-curl -X POST http://localhost:3026/generate \
+curl http://localhost:3025/health
+curl -X POST http://localhost:3025/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "A cat in space"}' | jq
```
@@ -95,7 +103,7 @@ services/mana-image-gen/
| Variable | Default | Description |
|----------|---------|-------------|
-| `PORT` | `3026` | Service port |
+| `PORT` | `3025` | Service port |
| `FLUX_BINARY` | `/opt/flux2/flux` | Path to flux2.c binary |
| `FLUX_MODEL_DIR` | `/opt/flux2/model` | Path to model weights |
| `DEFAULT_STEPS` | `4` | Default sampling steps |
@@ -128,7 +136,7 @@ The service is designed to be used by:
### Example Integration (TypeScript)
```typescript
-const response = await fetch('http://localhost:3026/generate', {
+const response = await fetch('http://localhost:3025/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
@@ -139,7 +147,7 @@ const response = await fetch('http://localhost:3026/generate', {
});
const result = await response.json();
-const imageUrl = `http://localhost:3026${result.image_url}`;
+const imageUrl = `http://localhost:3025${result.image_url}`;
```
## Dependencies


@@ -25,10 +25,10 @@ Local AI image generation using **FLUX.2 klein 4B** model via flux2.c.
# 2. Start the service
source .venv/bin/activate
FLUX_BINARY=/opt/flux2/flux FLUX_MODEL_DIR=/opt/flux2/model \
-uvicorn app.main:app --host 0.0.0.0 --port 3026
+uvicorn app.main:app --host 0.0.0.0 --port 3025
# 3. Generate an image
-curl -X POST http://localhost:3026/generate \
+curl -X POST http://localhost:3025/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "A cat wearing sunglasses"}' | jq
```
@@ -87,7 +87,7 @@ GET /models
| Variable | Default | Description |
|----------|---------|-------------|
-| `PORT` | `3026` | Service port |
+| `PORT` | `3025` | Service port |
| `FLUX_BINARY` | `/opt/flux2/flux` | flux2.c binary path |
| `FLUX_MODEL_DIR` | `/opt/flux2/model` | Model weights path |
| `DEFAULT_STEPS` | `4` | Sampling steps |


@@ -40,7 +40,7 @@ logging.basicConfig(
logger = logging.getLogger(__name__)
# Configuration from environment
-PORT = int(os.getenv("PORT", "3026"))
+PORT = int(os.getenv("PORT", "3025"))
MAX_PROMPT_LENGTH = int(os.getenv("MAX_PROMPT_LENGTH", "2000"))
MIN_DIMENSION = int(os.getenv("MIN_DIMENSION", "256"))
MAX_DIMENSION = int(os.getenv("MAX_DIMENSION", "2048"))


@@ -212,16 +212,16 @@ echo "To start the service:"
echo ""
echo " cd $SCRIPT_DIR"
echo " source .venv/bin/activate"
-echo "  FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3026"
+echo "  FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3025"
echo ""
echo "Or for development with auto-reload:"
echo ""
-echo "  FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3026 --reload"
+echo "  FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3025 --reload"
echo ""
echo "Test the service:"
echo ""
-echo "  curl http://localhost:3026/health"
-echo "  curl -X POST http://localhost:3026/generate \\"
+echo "  curl http://localhost:3025/health"
+echo "  curl -X POST http://localhost:3025/generate \\"
echo " -H 'Content-Type: application/json' \\"
echo " -d '{\"prompt\": \"A cat wearing sunglasses\"}'"
echo ""