fix(mana-notify): resolve BullMQ circular import issue

Move queue name constants to separate file (queue-names.ts) to avoid
circular dependency between queue.module.ts and processor files.

The @Processor decorator evaluates at module load time, and importing
constants from queue.module.ts created a circular dependency that
resulted in undefined queue names.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Till-JS committed 2026-01-29 22:58:47 +01:00
parent 384244fe50
commit f4c49fe8f2
15 changed files with 1400 additions and 8 deletions

@@ -0,0 +1,192 @@
# CLAUDE.md - Mana Image Generation Service
## Service Overview
AI image generation microservice using FLUX.2 klein 4B model via flux2.c:
- **Port**: 3025
- **Framework**: Python + FastAPI
- **Model**: FLUX.2 klein 4B (Black Forest Labs)
- **Backend**: flux2.c (Pure C, MPS accelerated)
## Features
- **Sub-second generation** on Apple Silicon (M4)
- **Memory efficient**: ~4-5 GB RAM usage (memory-mapped weights)
- **Apache 2.0 license**: Commercially usable
- **4 sampling steps**: Optimized for speed
- **1024x1024 default resolution**
## Commands
```bash
# Setup (installs flux2.c + downloads model)
./setup.sh
# Development
source .venv/bin/activate
FLUX_BINARY=/opt/flux2/flux FLUX_MODEL_DIR=/opt/flux2/model \
uvicorn app.main:app --host 0.0.0.0 --port 3025 --reload
# Production
../../scripts/mac-mini/setup-image-gen.sh
# Test
curl http://localhost:3025/health
curl -X POST http://localhost:3025/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "A cat in space"}' | jq
```
## File Structure
```
services/mana-image-gen/
├── app/
│ ├── __init__.py
│ ├── main.py # FastAPI endpoints
│ └── flux_service.py # flux2.c subprocess wrapper
├── setup.sh # Setup script
├── requirements.txt
├── CLAUDE.md
└── README.md
```
## API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/health` | GET | Health check |
| `/models` | GET | Model info |
| `/generate` | POST | Generate image |
| `/images/{filename}` | GET | Serve generated image |
| `/images/{filename}` | DELETE | Delete image |
| `/cleanup` | POST | Clean old images |
## Generate Request
```json
{
"prompt": "A beautiful sunset over mountains",
"width": 1024,
"height": 1024,
"steps": 4,
"seed": -1,
"output_format": "png"
}
```
## Generate Response
```json
{
"success": true,
"image_url": "/images/abc123.png",
"prompt": "A beautiful sunset over mountains",
"width": 1024,
"height": 1024,
"steps": 4,
"seed": 42,
"generation_time": 0.85
}
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3025` | Service port |
| `FLUX_BINARY` | `/opt/flux2/flux` | Path to flux2.c binary |
| `FLUX_MODEL_DIR` | `/opt/flux2/model` | Path to model weights |
| `DEFAULT_STEPS` | `4` | Default sampling steps |
| `DEFAULT_WIDTH` | `1024` | Default image width |
| `DEFAULT_HEIGHT` | `1024` | Default image height |
| `GENERATION_TIMEOUT` | `300` | Timeout in seconds (first model load takes ~90s) |
| `MAX_PROMPT_LENGTH` | `2000` | Max prompt chars |
| `CORS_ORIGINS` | (production URLs) | CORS config |
## Model Details
### FLUX.2 klein 4B
- **Parameters**: 4 billion
- **License**: Apache 2.0 (commercial use allowed)
- **Download size**: ~16 GB
- **RAM usage**: ~4-5 GB (memory-mapped)
- **Optimal steps**: 4 (distilled model)
- **Release**: January 2026
## Integration with Other Apps
The service is designed to be used by:
- **Picture App** (`apps/picture/`) - AI image generation platform
- **Chat App** (`apps/chat/`) - Inline image generation
- **Matrix Bots** - Image generation via chat commands
- **API Gateway** - Public API access
### Example Integration (TypeScript)
```typescript
const response = await fetch('http://localhost:3025/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
prompt: 'A futuristic city at night',
width: 1024,
height: 1024,
}),
});
const result = await response.json();
const imageUrl = `http://localhost:3025${result.image_url}`;
```
## Dependencies
- `fastapi` - Web framework
- `uvicorn` - ASGI server
- `pillow` - Image processing
- `flux2.c` - Native binary (installed separately)
## Performance
On Mac Mini M4 (16 GB):
| Resolution | Steps | Time |
|------------|-------|------|
| 512x512 | 4 | ~0.3s |
| 1024x1024 | 4 | ~0.8s |
| 1024x1024 | 8 | ~1.5s |
## Troubleshooting
### flux2.c not found
```bash
# Verify installation
ls -la /opt/flux2/flux
# Reinstall
sudo rm -rf /opt/flux2
./setup.sh
```
### Model not found
```bash
# Check model directory
ls -la /opt/flux2/model/
# Re-download
cd /opt/flux2/src
./download-model.sh /opt/flux2/model
```
### Out of memory
- Reduce resolution to 512x512
- Close other applications
- The 16 GB Mac Mini should handle 1024x1024 fine
### Slow generation
- Ensure MPS build was used: `make mps`
- Check Metal GPU is being used
- Reduce steps (4 is optimal for klein)

@@ -0,0 +1,109 @@
# Mana Image Generation Service
Local AI image generation using **FLUX.2 klein 4B** model via flux2.c.
## Features
- **Fast**: Sub-second generation on Apple Silicon
- **Efficient**: ~4-5 GB RAM (memory-mapped weights)
- **Open**: Apache 2.0 license (commercial use)
- **Local**: 100% on-device, no API keys needed
## Requirements
- macOS with Apple Silicon (M1/M2/M3/M4)
- 16 GB RAM minimum
- ~20 GB disk space (model + binary)
- Python 3.11+
## Quick Start
```bash
# 1. Run setup (installs flux2.c + downloads model)
./setup.sh
# 2. Start the service
source .venv/bin/activate
FLUX_BINARY=/opt/flux2/flux FLUX_MODEL_DIR=/opt/flux2/model \
uvicorn app.main:app --host 0.0.0.0 --port 3025
# 3. Generate an image
curl -X POST http://localhost:3025/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "A cat wearing sunglasses"}' | jq
```
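The same flow from a Python client, as a minimal sketch. The `build_generate_payload` and `absolute_image_url` helpers are illustrative, not part of the service; the actual HTTP call is shown in a comment since it needs a running instance:

```python
import json

BASE_URL = "http://localhost:3025"  # assumes the default local port

def build_generate_payload(prompt: str, width: int = 1024, height: int = 1024,
                           steps: int = 4, seed: int = -1) -> str:
    """Serialize a /generate request body matching the schema above."""
    return json.dumps({
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
        "seed": seed,
        "output_format": "png",
    })

def absolute_image_url(response_body: dict) -> str:
    """The service returns a relative image_url; prefix it with the base URL."""
    return f"{BASE_URL}{response_body['image_url']}"

# Sending the request (requires a running service), e.g. with httpx:
#   httpx.post(f"{BASE_URL}/generate", content=build_generate_payload("A cat"),
#              headers={"Content-Type": "application/json"})
```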
## API
### Generate Image
```bash
POST /generate
Content-Type: application/json
{
"prompt": "A beautiful mountain landscape",
"width": 1024,
"height": 1024,
"steps": 4,
"seed": -1,
"output_format": "png"
}
```
Response:
```json
{
"success": true,
"image_url": "/images/abc123.png",
"prompt": "A beautiful mountain landscape",
"width": 1024,
"height": 1024,
"steps": 4,
"seed": 42,
"generation_time": 0.85
}
```
### Get Image
```bash
GET /images/{filename}
```
### Health Check
```bash
GET /health
```
### Model Info
```bash
GET /models
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3025` | Service port |
| `FLUX_BINARY` | `/opt/flux2/flux` | flux2.c binary path |
| `FLUX_MODEL_DIR` | `/opt/flux2/model` | Model weights path |
| `DEFAULT_STEPS` | `4` | Sampling steps |
| `DEFAULT_WIDTH` | `1024` | Default width |
| `DEFAULT_HEIGHT` | `1024` | Default height |
## Model
**FLUX.2 klein 4B** by Black Forest Labs (January 2026)
- 4 billion parameters
- Apache 2.0 license
- Optimized for 4 sampling steps
- Sub-second inference on consumer GPUs
## Credits
- [flux2.c](https://github.com/antirez/flux2.c) - Pure C implementation by antirez
- [Black Forest Labs](https://bfl.ai) - FLUX.2 model

@@ -0,0 +1 @@
"""Mana Image Generation Service - FLUX.2 klein powered image generation."""

@@ -0,0 +1,212 @@
"""
FLUX.2 klein Image Generation Service
Uses flux2.c (Pure C implementation) for image generation.
Optimized for Apple Silicon with MPS acceleration.
"""
import asyncio
import logging
import os
import tempfile
import uuid
from dataclasses import dataclass
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
# Configuration
FLUX_BINARY = os.getenv("FLUX_BINARY", os.path.expanduser("~/flux2/flux"))
FLUX_MODEL_DIR = os.getenv("FLUX_MODEL_DIR", os.path.expanduser("~/flux2/model"))
DEFAULT_STEPS = int(os.getenv("DEFAULT_STEPS", "4"))
DEFAULT_WIDTH = int(os.getenv("DEFAULT_WIDTH", "1024"))
DEFAULT_HEIGHT = int(os.getenv("DEFAULT_HEIGHT", "1024"))
DEFAULT_SEED = int(os.getenv("DEFAULT_SEED", "-1")) # -1 = random
GENERATION_TIMEOUT = int(os.getenv("GENERATION_TIMEOUT", "300")) # seconds (first load takes ~90s)
# Output directory for generated images
OUTPUT_DIR = Path(os.getenv("OUTPUT_DIR", "/tmp/mana-image-gen"))
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
@dataclass
class GenerationResult:
"""Result of image generation."""
image_path: str
prompt: str
width: int
height: int
steps: int
seed: int
generation_time: float
def is_flux_available() -> bool:
"""Check if flux2.c binary and model are available."""
binary_exists = Path(FLUX_BINARY).exists()
model_exists = Path(FLUX_MODEL_DIR).exists()
return binary_exists and model_exists
def get_flux_info() -> dict:
"""Get information about the flux installation."""
return {
"binary": FLUX_BINARY,
"binary_exists": Path(FLUX_BINARY).exists(),
"model_dir": FLUX_MODEL_DIR,
"model_exists": Path(FLUX_MODEL_DIR).exists(),
"model_name": "FLUX.2-klein-4B",
"parameters": "4 billion",
"license": "Apache 2.0",
"default_steps": DEFAULT_STEPS,
"default_resolution": f"{DEFAULT_WIDTH}x{DEFAULT_HEIGHT}",
}
async def generate_image(
prompt: str,
width: int = DEFAULT_WIDTH,
height: int = DEFAULT_HEIGHT,
steps: int = DEFAULT_STEPS,
seed: Optional[int] = None,
output_format: str = "png",
) -> GenerationResult:
"""
Generate an image using FLUX.2 klein via flux2.c.
Args:
prompt: Text prompt for image generation
width: Image width (default 1024)
height: Image height (default 1024)
steps: Number of sampling steps (default 4)
seed: Random seed (-1 for random)
output_format: Output format (png, jpg)
Returns:
GenerationResult with image path and metadata
Raises:
RuntimeError: If flux2.c is not available or generation fails
"""
if not is_flux_available():
raise RuntimeError(
f"flux2.c not available. Binary: {FLUX_BINARY}, Model: {FLUX_MODEL_DIR}"
)
# Generate unique output filename
image_id = str(uuid.uuid4())[:8]
output_path = OUTPUT_DIR / f"{image_id}.{output_format}"
# Use provided seed or generate random
actual_seed = seed if seed is not None and seed >= 0 else -1
# Build flux2.c command
cmd = [
FLUX_BINARY,
"-d", FLUX_MODEL_DIR,
"-p", prompt,
"-o", str(output_path),
"-W", str(width),
"-H", str(height),
"-s", str(steps),
]
if actual_seed >= 0:
cmd.extend(["-S", str(actual_seed)])
logger.info(f"Running flux2.c: {' '.join(cmd[:6])}...")
import time
start_time = time.time()
try:
# Run flux2.c as subprocess
process = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(
process.communicate(),
timeout=GENERATION_TIMEOUT,
)
generation_time = time.time() - start_time
if process.returncode != 0:
error_msg = stderr.decode() if stderr else "Unknown error"
logger.error(f"flux2.c failed: {error_msg}")
raise RuntimeError(f"Image generation failed: {error_msg}")
# Verify output file exists
if not output_path.exists():
raise RuntimeError("Image generation completed but output file not found")
# Parse seed from output if random
parsed_seed = actual_seed
if stdout:
output_text = stdout.decode()
# flux2.c outputs "seed: 12345" when using random seed
for line in output_text.split("\n"):
if line.startswith("seed:"):
try:
parsed_seed = int(line.split(":")[1].strip())
except (ValueError, IndexError):
pass
logger.info(
f"Image generated: {output_path} ({width}x{height}, {steps} steps, {generation_time:.2f}s)"
)
return GenerationResult(
image_path=str(output_path),
prompt=prompt,
width=width,
height=height,
steps=steps,
seed=parsed_seed,
generation_time=generation_time,
)
except asyncio.TimeoutError:
logger.error(f"Image generation timed out after {GENERATION_TIMEOUT}s")
raise RuntimeError(f"Generation timed out after {GENERATION_TIMEOUT} seconds")
except Exception as e:
logger.error(f"Image generation error: {e}")
raise
def cleanup_image(image_path: str) -> bool:
"""Delete a generated image file."""
try:
path = Path(image_path)
if path.exists() and path.parent == OUTPUT_DIR:
path.unlink()
return True
except Exception as e:
logger.warning(f"Failed to cleanup image {image_path}: {e}")
return False
def cleanup_old_images(max_age_hours: int = 24) -> int:
"""Clean up images older than max_age_hours."""
import time
cleaned = 0
cutoff = time.time() - (max_age_hours * 3600)
try:
for file in OUTPUT_DIR.iterdir():
if file.is_file() and file.stat().st_mtime < cutoff:
file.unlink()
cleaned += 1
except Exception as e:
logger.warning(f"Cleanup error: {e}")
if cleaned > 0:
logger.info(f"Cleaned up {cleaned} old images")
return cleaned
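The seed-recovery loop embedded in `generate_image` can be factored out and exercised on its own. A sketch with a hypothetical `parse_seed` helper (not in the file) mirroring the parsing above:

```python
def parse_seed(stdout_text: str, fallback: int = -1) -> int:
    """Recover the seed flux2.c prints ("seed: 12345") when it chose one randomly."""
    for line in stdout_text.split("\n"):
        if line.startswith("seed:"):
            try:
                return int(line.split(":")[1].strip())
            except (ValueError, IndexError):
                pass  # malformed line; keep scanning
    return fallback
```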

@@ -0,0 +1,362 @@
"""
Mana Image Generation - AI Image Generation Microservice
Provides image generation using FLUX.2 klein 4B model via flux2.c.
Optimized for Apple Silicon (MPS acceleration).
API:
- POST /generate - Generate image from text prompt
- GET /health - Health check
- GET /models - Model information
"""
import logging
import os
from contextlib import asynccontextmanager
from pathlib import Path
from typing import Optional
from fastapi import FastAPI, HTTPException, Response, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse
from pydantic import BaseModel, Field
from .flux_service import (
generate_image,
is_flux_available,
get_flux_info,
cleanup_image,
cleanup_old_images,
DEFAULT_STEPS,
DEFAULT_WIDTH,
DEFAULT_HEIGHT,
)
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
# Configuration from environment
PORT = int(os.getenv("PORT", "3025"))
MAX_PROMPT_LENGTH = int(os.getenv("MAX_PROMPT_LENGTH", "2000"))
MIN_DIMENSION = int(os.getenv("MIN_DIMENSION", "256"))
MAX_DIMENSION = int(os.getenv("MAX_DIMENSION", "2048"))
MAX_STEPS = int(os.getenv("MAX_STEPS", "8"))
CORS_ORIGINS = os.getenv(
"CORS_ORIGINS",
"https://mana.how,https://picture.mana.how,https://chat.mana.how,http://localhost:5173",
).split(",")
@asynccontextmanager
async def lifespan(app: FastAPI):
"""Application lifespan manager for startup/shutdown."""
logger.info(f"Starting Mana Image Generation service on port {PORT}")
# Check flux2.c availability
if is_flux_available():
info = get_flux_info()
logger.info(f"flux2.c available: {info['model_name']}")
else:
logger.warning("flux2.c not available - service will return errors until installed")
# Cleanup old images on startup
cleanup_old_images(max_age_hours=24)
yield
logger.info("Shutting down Mana Image Generation service")
# Create FastAPI app
app = FastAPI(
title="Mana Image Generation",
description="AI image generation service using FLUX.2 klein 4B",
version="1.0.0",
lifespan=lifespan,
)
# CORS middleware
app.add_middleware(
CORSMiddleware,
allow_origins=CORS_ORIGINS,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# ============================================================================
# Request/Response Models
# ============================================================================
class GenerateRequest(BaseModel):
"""Request for image generation."""
prompt: str = Field(
...,
description="Text prompt for image generation",
min_length=1,
max_length=2000,
)
width: int = Field(
DEFAULT_WIDTH,
ge=256,
le=2048,
description="Image width in pixels",
)
height: int = Field(
DEFAULT_HEIGHT,
ge=256,
le=2048,
description="Image height in pixels",
)
steps: int = Field(
DEFAULT_STEPS,
ge=1,
le=8,
description="Number of sampling steps (FLUX.2 klein optimized for 4)",
)
seed: Optional[int] = Field(
None,
ge=-1,
description="Random seed (-1 or None for random)",
)
output_format: str = Field(
"png",
description="Output format (png, jpg)",
)
class GenerateResponse(BaseModel):
"""Response for image generation."""
success: bool
image_url: str
prompt: str
width: int
height: int
steps: int
seed: int
generation_time: float
class HealthResponse(BaseModel):
"""Health check response."""
status: str
service: str
flux_available: bool
class ModelsResponse(BaseModel):
"""Available models response."""
flux: dict
class ErrorResponse(BaseModel):
"""Error response."""
error: str
detail: str
# ============================================================================
# Health & Info Endpoints
# ============================================================================
@app.get("/health", response_model=HealthResponse)
async def health_check():
"""Check service health and flux2.c availability."""
return HealthResponse(
status="healthy" if is_flux_available() else "degraded",
service="mana-image-gen",
flux_available=is_flux_available(),
)
@app.get("/models", response_model=ModelsResponse)
async def get_models():
"""Get information about available models."""
return ModelsResponse(flux=get_flux_info())
# ============================================================================
# Image Generation Endpoints
# ============================================================================
@app.post("/generate", response_model=GenerateResponse)
async def generate(request: GenerateRequest, background_tasks: BackgroundTasks):
"""
Generate an image from a text prompt using FLUX.2 klein.
The model is optimized for 4 sampling steps and produces high-quality
images in sub-second time on Apple Silicon.
"""
# Validate prompt
if len(request.prompt) > MAX_PROMPT_LENGTH:
raise HTTPException(
status_code=400,
detail=f"Prompt exceeds maximum length of {MAX_PROMPT_LENGTH} characters",
)
if not request.prompt.strip():
raise HTTPException(status_code=400, detail="Prompt cannot be empty")
# Validate dimensions
if request.width < MIN_DIMENSION or request.width > MAX_DIMENSION:
raise HTTPException(
status_code=400,
detail=f"Width must be between {MIN_DIMENSION} and {MAX_DIMENSION}",
)
if request.height < MIN_DIMENSION or request.height > MAX_DIMENSION:
raise HTTPException(
status_code=400,
detail=f"Height must be between {MIN_DIMENSION} and {MAX_DIMENSION}",
)
# Validate steps
if request.steps > MAX_STEPS:
raise HTTPException(
status_code=400,
detail=f"Steps must be at most {MAX_STEPS} (FLUX.2 klein is optimized for 4)",
)
# Validate output format
output_format = request.output_format.lower()
if output_format not in ("png", "jpg", "jpeg"):
raise HTTPException(
status_code=400,
detail="Output format must be 'png' or 'jpg'",
)
if output_format == "jpeg":
output_format = "jpg"
# Check flux availability
if not is_flux_available():
raise HTTPException(
status_code=503,
detail="Image generation service not available. flux2.c not installed.",
)
try:
# Generate image
result = await generate_image(
prompt=request.prompt,
width=request.width,
height=request.height,
steps=request.steps,
seed=request.seed,
output_format=output_format,
)
# Build image URL (relative path for now)
image_filename = Path(result.image_path).name
image_url = f"/images/{image_filename}"
return GenerateResponse(
success=True,
image_url=image_url,
prompt=result.prompt,
width=result.width,
height=result.height,
steps=result.steps,
seed=result.seed,
generation_time=result.generation_time,
)
except RuntimeError as e:
logger.error(f"Generation error: {e}")
raise HTTPException(status_code=500, detail=str(e))
except Exception as e:
logger.error(f"Unexpected error: {e}")
raise HTTPException(status_code=500, detail=f"Image generation failed: {e}")
@app.get("/images/{filename}")
async def get_image(filename: str):
"""Serve a generated image."""
from .flux_service import OUTPUT_DIR
# Security: only allow specific extensions and no path traversal
if ".." in filename or "/" in filename or "\\" in filename:
raise HTTPException(status_code=400, detail="Invalid filename")
allowed_extensions = {".png", ".jpg", ".jpeg"}
ext = Path(filename).suffix.lower()
if ext not in allowed_extensions:
raise HTTPException(status_code=400, detail="Invalid file type")
image_path = OUTPUT_DIR / filename
if not image_path.exists():
raise HTTPException(status_code=404, detail="Image not found")
media_type = "image/png" if ext == ".png" else "image/jpeg"
return FileResponse(image_path, media_type=media_type)
@app.delete("/images/{filename}")
async def delete_image(filename: str):
"""Delete a generated image."""
from .flux_service import OUTPUT_DIR
# Security: only allow specific extensions and no path traversal
if ".." in filename or "/" in filename or "\\" in filename:
raise HTTPException(status_code=400, detail="Invalid filename")
image_path = OUTPUT_DIR / filename
if not image_path.exists():
raise HTTPException(status_code=404, detail="Image not found")
if cleanup_image(str(image_path)):
        return {"success": True, "message": f"Image {filename} deleted"}
else:
raise HTTPException(status_code=500, detail="Failed to delete image")
# ============================================================================
# Maintenance Endpoints
# ============================================================================
@app.post("/cleanup")
async def cleanup_images(max_age_hours: int = 24):
"""Clean up old generated images."""
cleaned = cleanup_old_images(max_age_hours)
return {"success": True, "cleaned": cleaned}
# ============================================================================
# Error Handler
# ============================================================================
@app.exception_handler(Exception)
async def global_exception_handler(request, exc):
"""Handle uncaught exceptions."""
logger.error(f"Unhandled exception: {exc}")
    # json.dumps keeps the body valid JSON even if the message contains quotes
    import json
    return Response(
        content=json.dumps({"error": "Internal server error", "detail": str(exc)}),
        status_code=500,
        media_type="application/json",
    )
# ============================================================================
# Main
# ============================================================================
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=PORT)
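The filename validation duplicated across the image-serve and image-delete handlers could live in one helper. A sketch with a hypothetical `is_safe_filename` function applying the same rules (the serve handler additionally whitelists extensions):

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def is_safe_filename(filename: str) -> bool:
    """Reject path traversal and separators, then whitelist image extensions."""
    if ".." in filename or "/" in filename or "\\" in filename:
        return False
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```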

@@ -0,0 +1,12 @@
# Web Framework
fastapi>=0.115.0
uvicorn[standard]>=0.34.0
python-multipart>=0.0.20
# Image Processing
pillow>=10.0.0
numpy>=1.26.0
# Utilities
aiofiles>=24.1.0
httpx>=0.27.0

services/mana-image-gen/setup.sh (new executable file)
@@ -0,0 +1,227 @@
#!/bin/bash
# Setup script for Mana Image Generation service
# Installs flux2.c and FLUX.2 klein 4B model
# Optimized for Apple Silicon (MPS)
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_DIR="$SCRIPT_DIR/.venv"
FLUX_DIR="/opt/flux2"
MODEL_DIR="$FLUX_DIR/model"
echo "=========================================="
echo "Mana Image Generation Setup"
echo "=========================================="
echo ""
# Check platform
if [[ "$(uname)" != "Darwin" ]]; then
echo "Error: This service requires macOS with Apple Silicon."
echo "flux2.c uses MPS (Metal Performance Shaders) for acceleration."
exit 1
fi
# Check for Apple Silicon
if [[ "$(uname -m)" != "arm64" ]]; then
echo "Error: This service requires Apple Silicon (arm64)."
echo "flux2.c is optimized for M1/M2/M3/M4 chips."
exit 1
fi
echo "Platform: macOS $(sw_vers -productVersion) on $(uname -m)"
echo ""
# ============================================
# Step 1: Install flux2.c
# ============================================
echo "Step 1: Installing flux2.c"
echo "----------------------------------------"
# Check if flux2.c already exists
if [[ -f "$FLUX_DIR/flux" ]]; then
echo "flux2.c already installed at $FLUX_DIR/flux"
echo "To reinstall, remove the directory first: sudo rm -rf $FLUX_DIR"
else
echo "Creating installation directory..."
sudo mkdir -p "$FLUX_DIR"
sudo chown $(whoami) "$FLUX_DIR"
# Clone flux2.c repository
echo "Cloning flux2.c repository..."
cd "$FLUX_DIR"
git clone https://github.com/antirez/flux2.c.git src
cd src
# Build with MPS support (Apple Silicon optimized)
echo "Building flux2.c with MPS acceleration..."
make mps
# Move binary to parent directory
cp flux "$FLUX_DIR/flux"
chmod +x "$FLUX_DIR/flux"
echo "flux2.c installed successfully!"
fi
# Verify binary
if [[ -x "$FLUX_DIR/flux" ]]; then
echo "Binary: $FLUX_DIR/flux"
else
echo "Error: flux2.c binary not found or not executable"
exit 1
fi
echo ""
# ============================================
# Step 2: Download FLUX.2 klein 4B model
# ============================================
echo "Step 2: Downloading FLUX.2 klein 4B model"
echo "----------------------------------------"
echo "Note: This will download ~16GB of model weights"
echo ""
if [[ -d "$MODEL_DIR" ]] && [[ -f "$MODEL_DIR/flux.safetensors" ]]; then
echo "Model already downloaded at $MODEL_DIR"
else
mkdir -p "$MODEL_DIR"
cd "$FLUX_DIR/src"
# Run the model download script
if [[ -f "./download-model.sh" ]]; then
echo "Running download script..."
./download-model.sh "$MODEL_DIR"
else
echo "Downloading model manually..."
# flux2.c expects the model in a specific format
# The model includes:
# - flux.safetensors (main weights)
# - qwen3-4b.safetensors (text encoder)
# - ae.safetensors (autoencoder)
echo "Please run the following commands manually:"
echo ""
echo " cd $FLUX_DIR/src"
echo " ./download-model.sh $MODEL_DIR"
echo ""
echo "Or download from Hugging Face:"
echo " https://huggingface.co/black-forest-labs/FLUX.2-klein-4B"
echo ""
fi
fi
echo ""
# ============================================
# Step 3: Setup Python environment
# ============================================
echo "Step 3: Setting up Python environment"
echo "----------------------------------------"
# Find Python
if command -v python3.11 &> /dev/null; then
PYTHON_CMD="python3.11"
elif command -v python3 &> /dev/null; then
PYTHON_CMD="python3"
else
echo "Error: Python 3 not found. Please install Python 3.11 or later."
exit 1
fi
echo "Using Python: $PYTHON_CMD"
$PYTHON_CMD --version
echo ""
# Create virtual environment
if [[ -d "$VENV_DIR" ]]; then
echo "Virtual environment exists at $VENV_DIR"
read -p "Recreate it? (y/N) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
rm -rf "$VENV_DIR"
$PYTHON_CMD -m venv "$VENV_DIR"
fi
else
echo "Creating virtual environment..."
$PYTHON_CMD -m venv "$VENV_DIR"
fi
# Activate and install dependencies
source "$VENV_DIR/bin/activate"
pip install --upgrade pip
pip install -r "$SCRIPT_DIR/requirements.txt"
echo ""
# ============================================
# Step 4: Create output directory
# ============================================
echo "Step 4: Creating output directory"
echo "----------------------------------------"
OUTPUT_DIR="/tmp/mana-image-gen"
mkdir -p "$OUTPUT_DIR"
echo "Output directory: $OUTPUT_DIR"
echo ""
# ============================================
# Step 5: Test flux2.c
# ============================================
echo "Step 5: Testing flux2.c"
echo "----------------------------------------"
if [[ -x "$FLUX_DIR/flux" ]] && [[ -d "$MODEL_DIR" ]]; then
echo "Testing image generation..."
TEST_OUTPUT="$OUTPUT_DIR/test_setup.png"
# Quick test with low resolution
"$FLUX_DIR/flux" -d "$MODEL_DIR" -p "A simple test image" -o "$TEST_OUTPUT" -W 256 -H 256 -s 2 2>/dev/null && {
echo "Test successful! Generated: $TEST_OUTPUT"
rm -f "$TEST_OUTPUT"
} || {
echo "Warning: Test generation failed. Model may not be fully downloaded."
echo "Please ensure the model is complete before using the service."
}
else
echo "Skipping test - flux2.c or model not ready"
fi
echo ""
# ============================================
# Done
# ============================================
echo "=========================================="
echo "Setup Complete!"
echo "=========================================="
echo ""
echo "Configuration:"
echo " FLUX_BINARY: $FLUX_DIR/flux"
echo " FLUX_MODEL_DIR: $MODEL_DIR"
echo " OUTPUT_DIR: $OUTPUT_DIR"
echo ""
echo "To start the service:"
echo ""
echo " cd $SCRIPT_DIR"
echo " source .venv/bin/activate"
echo " FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3025"
echo ""
echo "Or for development with auto-reload:"
echo ""
echo " FLUX_BINARY=$FLUX_DIR/flux FLUX_MODEL_DIR=$MODEL_DIR uvicorn app.main:app --host 0.0.0.0 --port 3025 --reload"
echo ""
echo "Test the service:"
echo ""
echo " curl http://localhost:3025/health"
echo " curl -X POST http://localhost:3025/generate \\"
echo " -H 'Content-Type: application/json' \\"
echo " -d '{\"prompt\": \"A cat wearing sunglasses\"}'"
echo ""

@@ -2,7 +2,7 @@ import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Logger, Inject } from '@nestjs/common';
import { Job } from 'bullmq';
import { eq } from 'drizzle-orm';
-import { EMAIL_QUEUE } from '../queue.module';
+import { EMAIL_QUEUE } from '../queue-names';
import { EmailService } from '../../channels/email/email.service';
import { MetricsService } from '../../metrics/metrics.service';
import { DATABASE_CONNECTION } from '../../db/database.module';

@@ -2,7 +2,7 @@ import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Logger, Inject } from '@nestjs/common';
import { Job } from 'bullmq';
import { eq } from 'drizzle-orm';
-import { MATRIX_QUEUE } from '../queue.module';
+import { MATRIX_QUEUE } from '../queue-names';
import { MatrixService } from '../../channels/matrix/matrix.service';
import { MetricsService } from '../../metrics/metrics.service';
import { DATABASE_CONNECTION } from '../../db/database.module';

@@ -2,7 +2,7 @@ import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Logger, Inject } from '@nestjs/common';
import { Job } from 'bullmq';
import { eq } from 'drizzle-orm';
-import { PUSH_QUEUE } from '../queue.module';
+import { PUSH_QUEUE } from '../queue-names';
import { PushService } from '../../channels/push/push.service';
import { MetricsService } from '../../metrics/metrics.service';
import { DATABASE_CONNECTION } from '../../db/database.module';

@@ -2,7 +2,7 @@ import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Logger, Inject } from '@nestjs/common';
import { Job } from 'bullmq';
import { eq } from 'drizzle-orm';
-import { WEBHOOK_QUEUE } from '../queue.module';
+import { WEBHOOK_QUEUE } from '../queue-names';
import { WebhookService } from '../../channels/webhook/webhook.service';
import { MetricsService } from '../../metrics/metrics.service';
import { DATABASE_CONNECTION } from '../../db/database.module';

@@ -0,0 +1,5 @@
// Queue names - separate file to avoid circular imports with processors
export const EMAIL_QUEUE = 'email';
export const PUSH_QUEUE = 'push';
export const MATRIX_QUEUE = 'matrix';
export const WEBHOOK_QUEUE = 'webhook';
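The load-order failure the commit message describes is not TypeScript-specific. A Python sketch of the same shape, using two throwaway modules written to a temp directory (all names here are illustrative): the inner import receives a partially initialized module, so the constant is missing at "decoration" time.

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# queue_module <-> processor import each other at load time, mirroring
# queue.module.ts and a @Processor file.
pkg = Path(tempfile.mkdtemp())
(pkg / "queue_module.py").write_text(textwrap.dedent("""
    import processor          # runs processor's top level first
    EMAIL_QUEUE = "email"     # defined only AFTER processor imported us
"""))
(pkg / "processor.py").write_text(textwrap.dedent("""
    import queue_module
    # Equivalent of @Processor(EMAIL_QUEUE) evaluating at module load time:
    QUEUE_NAME = getattr(queue_module, "EMAIL_QUEUE", None)
"""))
sys.path.insert(0, str(pkg))
import queue_module  # noqa: E402

print(queue_module.processor.QUEUE_NAME)  # -> None: the name was undefined
```

Moving the constants into a leaf module with no imports of its own (as `queue-names.ts` does above) breaks the cycle, because either side can import it fully before its own top level finishes.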

@@ -6,11 +6,10 @@ import { MatrixProcessor } from './processors/matrix.processor';
import { WebhookProcessor } from './processors/webhook.processor';
import { ChannelsModule } from '../channels/channels.module';
import { MetricsModule } from '../metrics/metrics.module';
+import { EMAIL_QUEUE, PUSH_QUEUE, MATRIX_QUEUE, WEBHOOK_QUEUE } from './queue-names';
-export const EMAIL_QUEUE = 'email';
-export const PUSH_QUEUE = 'push';
-export const MATRIX_QUEUE = 'matrix';
-export const WEBHOOK_QUEUE = 'webhook';
+// Re-export for convenience
+export { EMAIL_QUEUE, PUSH_QUEUE, MATRIX_QUEUE, WEBHOOK_QUEUE } from './queue-names';
@Module({
imports: [