feat(picture): migrate from Supabase to NestJS backend API

- Migrate image generation from Supabase Edge Functions to NestJS
- Add profiles and image-likes schemas
- Refactor mobile auth context for backend API
- Update all mobile hooks and services for API integration
- Add Docker configuration for deployment
- Remove Supabase functions and migrations
- Add migration plan documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
Till-JS 2025-11-27 14:47:23 +01:00
parent 14aace00c2
commit c561c4c8d8
70 changed files with 2643 additions and 8552 deletions

View file

@ -0,0 +1,300 @@
# Picture App: Supabase zu Backend API Migration
## Ziel
Komplette Entfernung der direkten Supabase-Nutzung in der Mobile App. Alle Datenbankzugriffe sollen über die Backend API erfolgen (wie in der Chat App).
---
## Aktuelle Situation
### Backend (bereits implementiert)
| Endpoint | Status |
|----------|--------|
| `/api/images/*` | Vorhanden - CRUD, Archive, Batch-Operationen |
| `/api/generate/*` | Vorhanden - Bildgenerierung mit Replicate |
| `/api/tags/*` | Vorhanden - Tag-Management |
| `/api/boards/*` | Vorhanden - Board-Management |
| `/api/board-items/*` | Vorhanden - Board-Items |
| `/api/models/*` | Vorhanden - AI Modelle |
| `/api/explore/*` | Vorhanden - Öffentliche Galerie |
| `/api/upload/*` | Vorhanden - Datei-Upload |
| `/api/profiles/*` | **FEHLT** - User Profile |
| `/api/likes/*` | **FEHLT** - Image Likes |
### Mobile App (direkte Supabase-Nutzung)
Dateien die `supabase.from()` oder Supabase-Client nutzen:
1. `app/(tabs)/profile.tsx` - Profile-Daten laden/aktualisieren
2. `app/image/[id].tsx` - Einzelbild-Details
3. `hooks/useImageFetching.ts` - Bilder laden
4. `hooks/useImageLikes.ts` - Like-Funktionalität
5. `hooks/useImagePrefetch.ts` - Prefetching
6. `hooks/useExploreFetching.ts` - Explore-Daten
7. `hooks/useExplorePrefetch.ts` - Explore-Prefetching
8. `hooks/useArchiveFetching.ts` - Archiv-Daten
9. `store/tagStore.ts` - Tag-Daten
10. `store/batchStore.ts` - Batch-Generierung (nutzt Edge Functions)
11. `components/RateLimitIndicator.tsx` - Rate Limit Check
---
## Migrationsplan
### Phase 1: Backend erweitern
#### 1.1 Profile Endpoints hinzufügen
```
GET /api/profiles/me - Eigenes Profil laden
PATCH /api/profiles/me - Profil aktualisieren
GET /api/profiles/stats - User Stats (Bilder, Favoriten)
```
**Schema erweitern** (falls nicht vorhanden):
```typescript
// profiles table
{
id: uuid (PK, same as auth user id)
username: text
email: text
avatarUrl: text (optional)
createdAt: timestamp
updatedAt: timestamp
}
```
#### 1.2 Like Endpoints hinzufügen
```
POST /api/images/:id/like - Bild liken
DELETE /api/images/:id/like - Like entfernen
GET /api/images/:id/likes - Like-Status & Anzahl
```
**Schema erweitern** (falls nicht vorhanden):
```typescript
// image_likes table
{
id: uuid (PK)
imageId: uuid (FK to images)
userId: uuid
createdAt: timestamp
}
```
#### 1.3 Rate Limit Endpoint
```
GET /api/rate-limit - Aktueller Rate Limit Status
```
---
### Phase 2: Mobile API Client erstellen
#### 2.1 Zentraler API Client
Datei: `services/api/client.ts`
```typescript
import * as SecureStore from 'expo-secure-store';
const APP_TOKEN_KEY = '@manacore/app_token';
const BACKEND_URL = process.env.EXPO_PUBLIC_PICTURE_BACKEND_URL || 'http://localhost:3003';
async function getAuthToken(): Promise<string | null> {
try {
return await SecureStore.getItemAsync(APP_TOKEN_KEY);
} catch {
return null;
}
}
export async function apiRequest<T>(
endpoint: string,
options: RequestInit = {}
): Promise<{ data: T | null; error: string | null }> {
const token = await getAuthToken();
const headers: HeadersInit = {
'Content-Type': 'application/json',
...(options.headers || {}),
};
if (token) {
headers['Authorization'] = `Bearer ${token}`;
}
try {
const response = await fetch(`${BACKEND_URL}/api${endpoint}`, {
...options,
headers,
});
if (!response.ok) {
const error = await response.json().catch(() => ({ message: 'Request failed' }));
return { data: null, error: error.message || `HTTP ${response.status}` };
}
const data = await response.json();
return { data, error: null };
} catch (error) {
return { data: null, error: error instanceof Error ? error.message : 'Network error' };
}
}
```
#### 2.2 Domain-spezifische API Module
**`services/api/profiles.ts`**
```typescript
export const profileApi = {
getMyProfile: () => apiRequest<Profile>('/profiles/me'),
updateProfile: (data: UpdateProfileDto) => apiRequest<Profile>('/profiles/me', {
method: 'PATCH',
body: JSON.stringify(data),
}),
getStats: () => apiRequest<UserStats>('/profiles/stats'),
};
```
**`services/api/images.ts`** (erweitern)
```typescript
export const imageApi = {
// Bestehende Funktionen...
getImages: (params) => apiRequest<Image[]>(`/images?${new URLSearchParams(params)}`),
getImage: (id) => apiRequest<Image>(`/images/${id}`),
likeImage: (id) => apiRequest(`/images/${id}/like`, { method: 'POST' }),
unlikeImage: (id) => apiRequest(`/images/${id}/like`, { method: 'DELETE' }),
// etc.
};
```
**`services/api/explore.ts`**
```typescript
export const exploreApi = {
getPublicImages: (params) => apiRequest<Image[]>(`/explore?${new URLSearchParams(params)}`),
search: (term, params) => apiRequest<Image[]>(`/explore/search?searchTerm=${term}&${new URLSearchParams(params)}`),
};
```
**`services/api/tags.ts`**
```typescript
export const tagApi = {
getTags: () => apiRequest<Tag[]>('/tags'),
createTag: (data) => apiRequest<Tag>('/tags', { method: 'POST', body: JSON.stringify(data) }),
updateTag: (id, data) => apiRequest<Tag>(`/tags/${id}`, { method: 'PATCH', body: JSON.stringify(data) }),
deleteTag: (id) => apiRequest(`/tags/${id}`, { method: 'DELETE' }),
getImageTags: (imageId) => apiRequest<Tag[]>(`/tags/image/${imageId}`),
addTagToImage: (imageId, tagId) => apiRequest(`/tags/image/${imageId}/${tagId}`, { method: 'POST' }),
removeTagFromImage: (imageId, tagId) => apiRequest(`/tags/image/${imageId}/${tagId}`, { method: 'DELETE' }),
};
```
---
### Phase 3: Hooks migrieren
#### 3.1 `useImageFetching.ts`
- Ersetze `supabase.from('images')` durch `imageApi.getImages()`
- Pagination über API Query-Parameter
#### 3.2 `useImageLikes.ts`
- Ersetze `supabase.from('image_likes')` durch `imageApi.likeImage()` / `unlikeImage()`
#### 3.3 `useExploreFetching.ts`
- Ersetze `supabase.from('images').eq('is_public', true)` durch `exploreApi.getPublicImages()`
#### 3.4 `useArchiveFetching.ts`
- Ersetze `supabase.from('images').not('archived_at', 'is', null)` durch `imageApi.getImages({ archived: true })`
---
### Phase 4: Stores migrieren
#### 4.1 `tagStore.ts`
- Ersetze alle `supabase.from('tags')` und `supabase.from('image_tags')` durch `tagApi.*`
#### 4.2 `batchStore.ts`
- Ersetze `supabase.functions.invoke()` durch Backend API Calls
- Ersetze `supabase.channel()` Realtime durch Polling oder WebSocket zum Backend
- **ODER**: Batch-Generierung komplett über Backend API abwickeln
---
### Phase 5: Screens migrieren
#### 5.1 `profile.tsx`
```typescript
// Vorher:
const { data } = await supabase.from('profiles').select('*').eq('id', user.id).single();
// Nachher:
const { data } = await profileApi.getMyProfile();
```
#### 5.2 `image/[id].tsx`
```typescript
// Vorher:
const { data } = await supabase.from('images').select('*').eq('id', id).single();
// Nachher:
const { data } = await imageApi.getImage(id);
```
---
### Phase 6: Aufräumen
#### 6.1 Dateien entfernen
- `utils/supabase.ts` - Supabase Client
- Alle Supabase-Typen die nicht mehr gebraucht werden
#### 6.2 Dependencies entfernen
```json
// package.json - Diese entfernen:
"@supabase/supabase-js": "^2.38.4",
```
#### 6.3 Environment Variables aufräumen
```
# Nicht mehr benötigt:
EXPO_PUBLIC_SUPABASE_URL
EXPO_PUBLIC_SUPABASE_ANON_KEY
# Weiterhin benötigt:
EXPO_PUBLIC_PICTURE_BACKEND_URL
EXPO_PUBLIC_MANA_CORE_AUTH_URL
```
---
## Implementierungsreihenfolge
1. **Backend erweitern** (Profile, Likes, Rate Limit Endpoints)
2. **API Client erstellen** (`services/api/client.ts`)
3. **Domain APIs erstellen** (profiles, images erweitern, explore, tags)
4. **Hooks einzeln migrieren** (mit Tests nach jeder Migration)
5. **Stores migrieren** (tagStore, batchStore)
6. **Screens migrieren** (profile.tsx, image/[id].tsx)
7. **Supabase entfernen** (Dependencies, Environment Variables)
8. **Testen** (Alle Flows durchgehen)
---
## Geschätzter Aufwand
| Phase | Aufwand |
|-------|---------|
| Phase 1: Backend erweitern | ~2-3 Stunden |
| Phase 2: API Client | ~1 Stunde |
| Phase 3: Hooks migrieren | ~2-3 Stunden |
| Phase 4: Stores migrieren | ~1-2 Stunden |
| Phase 5: Screens migrieren | ~1 Stunde |
| Phase 6: Aufräumen & Testen | ~1 Stunde |
| **Gesamt** | **~8-11 Stunden** |
---
## Risiken & Hinweise
1. **Realtime-Subscriptions**: Supabase Realtime wird durch Polling ersetzt (bereits bei Bildgenerierung umgesetzt)
2. **Batch-Generierung**: Der `batchStore` nutzt Edge Functions - diese müssen ins Backend migriert werden
3. **Storage**: Bilder werden weiterhin irgendwo gespeichert - prüfen ob S3/R2 oder weiterhin Supabase Storage
4. **RLS Policies**: Backend übernimmt die Autorisierung - alle Queries müssen `userId` filtern

View file

@ -0,0 +1,63 @@
# Build stage
FROM node:20-alpine AS builder
# Install pnpm
RUN corepack enable && corepack prepare pnpm@9.15.0 --activate
WORKDIR /app
# Copy root workspace files
COPY pnpm-workspace.yaml ./
COPY package.json ./
COPY pnpm-lock.yaml ./
# Copy shared packages
COPY packages/shared-errors ./packages/shared-errors
# Copy picture backend
COPY apps/picture/apps/backend ./apps/picture/apps/backend
# Install dependencies
RUN pnpm install --frozen-lockfile
# Build shared packages first
WORKDIR /app/packages/shared-errors
RUN pnpm build
# Build the backend
WORKDIR /app/apps/picture/apps/backend
RUN pnpm build
# Production stage
FROM node:20-alpine AS production
# Install pnpm and postgresql-client for health checks
RUN corepack enable && corepack prepare pnpm@9.15.0 --activate \
&& apk add --no-cache postgresql-client
WORKDIR /app
# Copy everything from builder (including node_modules)
COPY --from=builder /app/pnpm-workspace.yaml ./
COPY --from=builder /app/package.json ./
COPY --from=builder /app/pnpm-lock.yaml ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/packages ./packages
COPY --from=builder /app/apps/picture/apps/backend ./apps/picture/apps/backend
# Copy entrypoint script
COPY apps/picture/apps/backend/docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
WORKDIR /app/apps/picture/apps/backend
# Expose port
EXPOSE 3003
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3003/api/health || exit 1
# Run entrypoint script
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node", "dist/main.js"]

View file

@ -0,0 +1,72 @@
services:
# PostgreSQL Database
postgres:
image: postgres:16-alpine
container_name: picture-postgres
restart: unless-stopped
environment:
POSTGRES_USER: ${DB_USER:-picture}
POSTGRES_PASSWORD: ${DB_PASSWORD:-picturepassword}
POSTGRES_DB: ${DB_NAME:-picture}
ports:
- "${DB_PORT:-5434}:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init-db:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-picture} -d ${DB_NAME:-picture}"]
interval: 10s
timeout: 5s
retries: 5
# Picture Backend API
backend:
build:
context: ../../../..
dockerfile: apps/picture/apps/backend/Dockerfile
container_name: picture-backend
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
environment:
# Database
DATABASE_URL: postgresql://${DB_USER:-picture}:${DB_PASSWORD:-picturepassword}@postgres:5432/${DB_NAME:-picture}
DB_HOST: postgres
DB_PORT: 5432
DB_USER: ${DB_USER:-picture}
DB_PASSWORD: ${DB_PASSWORD:-picturepassword}
DB_NAME: ${DB_NAME:-picture}
# Replicate API
REPLICATE_API_TOKEN: ${REPLICATE_API_TOKEN}
# Supabase (for storage only)
SUPABASE_URL: ${SUPABASE_URL}
SUPABASE_SERVICE_ROLE_KEY: ${SUPABASE_SERVICE_ROLE_KEY}
# Mana Core Auth
MANA_CORE_AUTH_URL: ${MANA_CORE_AUTH_URL:-http://host.docker.internal:3001}
# Webhook for Replicate callbacks
WEBHOOK_BASE_URL: ${WEBHOOK_BASE_URL}
# Server
PORT: 3003
NODE_ENV: production
ports:
- "${BACKEND_PORT:-3003}:3003"
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3003/api/health"]
interval: 30s
timeout: 10s
start_period: 30s
retries: 3
volumes:
postgres_data:
driver: local
networks:
default:
name: picture-network

View file

@ -0,0 +1,34 @@
#!/bin/sh
set -e
echo "=== Picture Backend Entrypoint ==="
# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL..."
until pg_isready -h ${DB_HOST:-postgres} -p ${DB_PORT:-5432} -U ${DB_USER:-picture} 2>/dev/null; do
echo "PostgreSQL is unavailable - sleeping"
sleep 2
done
echo "PostgreSQL is up!"
cd /app/apps/picture/apps/backend
# Run schema push (for development) or migrations (for production)
if [ "$NODE_ENV" = "production" ] && [ -d "src/db/migrations/meta" ]; then
echo "Running database migrations..."
npx tsx src/db/migrate.ts
echo "Migrations completed!"
else
echo "Pushing database schema (development mode)..."
npx drizzle-kit push --force
echo "Schema push completed!"
fi
# Run seed (only seeds if data doesn't exist)
echo "Running database seed..."
npx tsx src/db/seed.ts
echo "Seed completed!"
# Execute the main command
echo "Starting application..."
exec "$@"

View file

@ -5,7 +5,7 @@ import {
ForbiddenException,
Logger,
} from '@nestjs/common';
import { eq, and, max, inArray, gt, lt } from 'drizzle-orm';
import { eq, and, max, inArray, gt, lt, sql } from 'drizzle-orm';
import { DATABASE_CONNECTION } from '../db/database.module';
import { type Database } from '../db/connection';
import { boards, boardItems, images, type BoardItem } from '../db/schema';
@ -375,7 +375,7 @@ export class BoardItemService {
// Shift all other items up
await this.db
.update(boardItems)
.set({ zIndex: boardItems.zIndex + 1 } as any)
.set({ zIndex: sql`${boardItems.zIndex} + 1` })
.where(eq(boardItems.boardId, item[0].boardId));
} else if (direction === 'up') {
// Find the next item above

View file

@ -0,0 +1,27 @@
import {
pgTable,
uuid,
timestamp,
unique,
} from 'drizzle-orm/pg-core';
import { images } from './images.schema';
export const imageLikes = pgTable(
'image_likes',
{
id: uuid('id').primaryKey().defaultRandom(),
imageId: uuid('image_id')
.notNull()
.references(() => images.id, { onDelete: 'cascade' }),
userId: uuid('user_id').notNull(),
createdAt: timestamp('created_at', { withTimezone: true })
.defaultNow()
.notNull(),
},
(table) => ({
uniqueImageUser: unique('unique_image_user').on(table.imageId, table.userId),
}),
);
export type ImageLike = typeof imageLikes.$inferSelect;
export type NewImageLike = typeof imageLikes.$inferInsert;

View file

@ -0,0 +1,22 @@
import {
pgTable,
uuid,
text,
timestamp,
} from 'drizzle-orm/pg-core';
export const profiles = pgTable('profiles', {
id: uuid('id').primaryKey(), // Same as auth user id
username: text('username'),
email: text('email').notNull(),
avatarUrl: text('avatar_url'),
createdAt: timestamp('created_at', { withTimezone: true })
.defaultNow()
.notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true })
.defaultNow()
.notNull(),
});
export type Profile = typeof profiles.$inferSelect;
export type NewProfile = typeof profiles.$inferInsert;

View file

@ -1,4 +1,4 @@
import { IsString, IsOptional, IsNumber } from 'class-validator';
import { IsString, IsOptional, IsNumber, IsBoolean } from 'class-validator';
export class GenerateImageDto {
@IsString()
@ -7,6 +7,10 @@ export class GenerateImageDto {
@IsString()
modelId: string;
@IsString()
@IsOptional()
modelVersion?: string;
@IsString()
@IsOptional()
negativePrompt?: string;
@ -38,4 +42,12 @@ export class GenerateImageDto {
@IsNumber()
@IsOptional()
generationStrength?: number;
@IsString()
@IsOptional()
style?: string;
@IsBoolean()
@IsOptional()
waitForResult?: boolean;
}

View file

@ -16,10 +16,16 @@ import {
type ImageGeneration,
type Image,
} from '../db/schema';
import { ReplicateService } from './replicate.service';
import { ReplicateService, GenerationParams } from './replicate.service';
import { StorageService } from '../upload/storage.service';
import { GenerateImageDto } from './dto/generate.dto';
export interface GenerateResponse {
generationId: string;
status: string;
image?: Image;
}
@Injectable()
export class GenerateService {
private readonly logger = new Logger(GenerateService.name);
@ -36,10 +42,13 @@ export class GenerateService {
'http://localhost:3003';
}
/**
* Generate an image - supports both async (webhook) and sync (polling) modes
*/
async generateImage(
userId: string,
dto: GenerateImageDto,
): Promise<{ generationId: string; status: string }> {
): Promise<GenerateResponse> {
try {
// Get model info
const modelResult = await this.db
@ -63,6 +72,7 @@ export class GenerateService {
prompt: dto.prompt,
negativePrompt: dto.negativePrompt,
model: model.name,
style: dto.style,
width: dto.width || model.defaultWidth || 1024,
height: dto.height || model.defaultHeight || 1024,
steps: dto.steps || model.defaultSteps || 25,
@ -76,52 +86,29 @@ export class GenerateService {
const generation = generationResult[0];
// Start the prediction
try {
const webhookUrl = `${this.webhookBaseUrl}/api/generate/webhook`;
// Build generation params
const generationParams: GenerationParams = {
prompt: dto.prompt,
negativePrompt: dto.negativePrompt,
modelId: model.replicateId,
modelVersion: dto.modelVersion || model.version,
width: dto.width || model.defaultWidth || 1024,
height: dto.height || model.defaultHeight || 1024,
steps: dto.steps || model.defaultSteps || 25,
guidanceScale: dto.guidanceScale || model.defaultGuidanceScale || 7.5,
seed: dto.seed,
sourceImageUrl: dto.sourceImageUrl,
strength: dto.generationStrength,
style: dto.style,
};
const prediction = await this.replicateService.createPrediction(
model.replicateId,
model.version || '',
{
prompt: dto.prompt,
negative_prompt: dto.negativePrompt,
width: dto.width || model.defaultWidth || 1024,
height: dto.height || model.defaultHeight || 1024,
num_inference_steps: dto.steps || model.defaultSteps || 25,
guidance_scale: dto.guidanceScale || model.defaultGuidanceScale || 7.5,
seed: dto.seed,
image: dto.sourceImageUrl,
prompt_strength: dto.generationStrength,
},
webhookUrl,
);
// Update generation with prediction ID
await this.db
.update(imageGenerations)
.set({
replicatePredictionId: prediction.id,
status: 'processing',
})
.where(eq(imageGenerations.id, generation.id));
return {
generationId: generation.id,
status: 'processing',
};
} catch (error) {
// Update generation as failed
await this.db
.update(imageGenerations)
.set({
status: 'failed',
errorMessage: error instanceof Error ? error.message : 'Unknown error',
})
.where(eq(imageGenerations.id, generation.id));
throw error;
// If waitForResult is true, use synchronous generation with polling
if (dto.waitForResult) {
return this.generateSync(generation, generationParams);
}
// Otherwise use async generation with webhook
return this.generateAsync(generation, model, generationParams);
} catch (error) {
if (error instanceof NotFoundException) {
throw error;
@ -131,6 +118,142 @@ export class GenerateService {
}
}
/**
* Synchronous generation - polls until complete
*/
private async generateSync(
generation: ImageGeneration,
params: GenerationParams,
): Promise<GenerateResponse> {
try {
// Update status to processing
await this.db
.update(imageGenerations)
.set({ status: 'processing' })
.where(eq(imageGenerations.id, generation.id));
// Process generation with polling
const result = await this.replicateService.processGeneration(params);
if (!result.success || !result.outputUrl) {
await this.db
.update(imageGenerations)
.set({
status: 'failed',
errorMessage: result.error || 'Generation failed',
})
.where(eq(imageGenerations.id, generation.id));
return {
generationId: generation.id,
status: 'failed',
};
}
// Download and upload to storage
const { storagePath, publicUrl } = await this.storageService.uploadFromUrl(
result.outputUrl,
generation.userId,
`generated-${generation.id}.${result.format || 'png'}`,
);
// Create image record
const imageResult = await this.db
.insert(images)
.values({
userId: generation.userId,
generationId: generation.id,
prompt: generation.prompt,
negativePrompt: generation.negativePrompt,
model: generation.model,
style: generation.style,
storagePath,
publicUrl,
filename: `generated-${generation.id}.${result.format || 'png'}`,
width: result.width || generation.width,
height: result.height || generation.height,
format: result.format || 'png',
})
.returning();
// Update generation as completed
await this.db
.update(imageGenerations)
.set({
status: 'completed',
generationTimeSeconds: result.generationTimeSeconds,
completedAt: new Date(),
})
.where(eq(imageGenerations.id, generation.id));
return {
generationId: generation.id,
status: 'completed',
image: imageResult[0],
};
} catch (error) {
this.logger.error(`Error in sync generation for ${generation.id}`, error);
await this.db
.update(imageGenerations)
.set({
status: 'failed',
errorMessage: error instanceof Error ? error.message : 'Unknown error',
})
.where(eq(imageGenerations.id, generation.id));
return {
generationId: generation.id,
status: 'failed',
};
}
}
/**
* Async generation - uses webhook for completion
*/
private async generateAsync(
generation: ImageGeneration,
model: any,
params: GenerationParams,
): Promise<GenerateResponse> {
try {
const webhookUrl = `${this.webhookBaseUrl}/api/generate/webhook`;
const prediction = await this.replicateService.createPrediction(
model.replicateId,
params.modelVersion || model.version || '',
params,
webhookUrl,
);
// Update generation with prediction ID
await this.db
.update(imageGenerations)
.set({
replicatePredictionId: prediction.id,
status: 'processing',
})
.where(eq(imageGenerations.id, generation.id));
return {
generationId: generation.id,
status: 'processing',
};
} catch (error) {
// Update generation as failed
await this.db
.update(imageGenerations)
.set({
status: 'failed',
errorMessage: error instanceof Error ? error.message : 'Unknown error',
})
.where(eq(imageGenerations.id, generation.id));
throw error;
}
}
async checkStatus(
generationId: string,
userId: string,
@ -325,20 +448,32 @@ export class GenerateService {
private async processCompletedGeneration(
generation: ImageGeneration,
output: string[] | string,
output: string[] | string | { url?: string },
): Promise<void> {
try {
const imageUrl = Array.isArray(output) ? output[0] : output;
if (!imageUrl) {
// Extract output URL
let imageUrl: string;
if (Array.isArray(output)) {
imageUrl = output[0];
} else if (typeof output === 'string') {
imageUrl = output;
} else if (output && typeof output === 'object' && output.url) {
imageUrl = output.url;
} else {
throw new Error('No output URL from generation');
}
// Determine format from URL
let format = 'png';
if (imageUrl.includes('.webp')) format = 'webp';
else if (imageUrl.includes('.jpg') || imageUrl.includes('.jpeg')) format = 'jpeg';
else if (imageUrl.includes('.svg')) format = 'svg';
// Download and upload to storage
const { storagePath, publicUrl } = await this.storageService.uploadFromUrl(
imageUrl,
generation.userId,
`generated-${generation.id}.png`,
`generated-${generation.id}.${format}`,
);
// Create image record
@ -348,12 +483,13 @@ export class GenerateService {
prompt: generation.prompt,
negativePrompt: generation.negativePrompt,
model: generation.model,
style: generation.style,
storagePath,
publicUrl,
filename: `generated-${generation.id}.png`,
filename: `generated-${generation.id}.${format}`,
width: generation.width,
height: generation.height,
format: 'png',
format,
});
// Update generation as completed

View file

@ -2,22 +2,35 @@ import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import Replicate from 'replicate';
export interface PredictionInput {
export interface GenerationParams {
prompt: string;
negative_prompt?: string;
negativePrompt?: string | null;
modelId: string;
modelVersion?: string | null;
width: number;
height: number;
steps: number;
guidanceScale: number;
seed?: number | null;
sourceImageUrl?: string | null;
strength?: number | null;
style?: string | null;
}
export interface GenerationResult {
success: boolean;
outputUrl?: string;
format?: string;
width?: number;
height?: number;
num_inference_steps?: number;
guidance_scale?: number;
seed?: number;
image?: string; // For img2img
prompt_strength?: number;
error?: string;
generationTimeSeconds?: number;
}
export interface Prediction {
id: string;
status: 'starting' | 'processing' | 'succeeded' | 'failed' | 'canceled';
output?: string[] | string;
output?: string[] | string | { url?: string };
error?: string;
metrics?: {
predict_time?: number;
@ -28,39 +41,545 @@ export interface Prediction {
export class ReplicateService {
private readonly logger = new Logger(ReplicateService.name);
private replicate: Replicate | null = null;
private readonly apiToken: string | undefined;
constructor(private configService: ConfigService) {
const apiToken = this.configService.get<string>('REPLICATE_API_TOKEN');
if (apiToken) {
this.replicate = new Replicate({ auth: apiToken });
this.apiToken = this.configService.get<string>('REPLICATE_API_TOKEN');
if (this.apiToken) {
this.replicate = new Replicate({ auth: this.apiToken });
} else {
this.logger.warn('REPLICATE_API_TOKEN not configured');
}
}
/**
* Calculate greatest common divisor for aspect ratio simplification
*/
private gcd(a: number, b: number): number {
return b === 0 ? a : this.gcd(b, a % b);
}
/**
* Simplify aspect ratio to smallest whole numbers (e.g., 1920:1080 -> 16:9)
*/
private simplifyAspectRatio(width: number, height: number): string {
const divisor = this.gcd(width, height);
const simplifiedWidth = width / divisor;
const simplifiedHeight = height / divisor;
return `${simplifiedWidth}:${simplifiedHeight}`;
}
/**
* Convert image URL to base64 data URI for img2img
*/
private async convertImageToBase64(imageUrl: string): Promise<string> {
this.logger.debug(`Converting image to base64: ${imageUrl}`);
const imageResponse = await fetch(imageUrl);
if (!imageResponse.ok) {
throw new Error('Failed to fetch source image');
}
const imageBuffer = await imageResponse.arrayBuffer();
const base64String = Buffer.from(imageBuffer).toString('base64');
const contentType = imageResponse.headers.get('content-type') || 'image/jpeg';
const dataUri = `data:${contentType};base64,${base64String}`;
this.logger.debug(`Image converted to base64, length: ${dataUri.length}`);
return dataUri;
}
/**
* Build model-specific input parameters for Replicate API
*/
private buildModelInput(
params: GenerationParams,
sourceImageBase64?: string | null,
): { input: any; finalWidth: number; finalHeight: number } {
const {
prompt,
modelId,
width,
height,
steps,
guidanceScale,
seed,
strength,
} = params;
let finalWidth = width;
let finalHeight = height;
const simplifiedRatio = this.simplifyAspectRatio(width, height);
this.logger.debug(`Building input for model: ${modelId}`);
this.logger.debug(`Dimensions: ${finalWidth}x${finalHeight}`);
this.logger.debug(`Aspect ratio: ${simplifiedRatio}`);
let input: any = {};
// FLUX Schnell - Uses aspect_ratio with specific supported ratios
if (modelId.includes('flux-schnell')) {
const supportedRatios = [
'1:1', '16:9', '21:9', '3:2', '2:3', '4:5', '5:4', '3:4', '4:3', '9:16', '9:21',
];
// Find closest supported ratio
let fluxAspectRatio = simplifiedRatio;
if (!supportedRatios.includes(simplifiedRatio)) {
const [w, h] = simplifiedRatio.split(':').map(Number);
const targetRatio = w / h;
let closestRatio = '1:1';
let minDiff = Infinity;
for (const ratio of supportedRatios) {
const [rw, rh] = ratio.split(':').map(Number);
const r = rw / rh;
const diff = Math.abs(r - targetRatio);
if (diff < minDiff) {
minDiff = diff;
closestRatio = ratio;
}
}
fluxAspectRatio = closestRatio;
this.logger.debug(`Mapped ${simplifiedRatio} to closest supported ratio: ${fluxAspectRatio}`);
}
// Calculate actual dimensions (Flux Schnell uses 1024px on shorter side)
const [aspectW, aspectH] = fluxAspectRatio.split(':').map(Number);
if (aspectW > aspectH) {
finalHeight = 1024;
finalWidth = Math.round((finalHeight * aspectW) / aspectH);
} else if (aspectW < aspectH) {
finalWidth = 1024;
finalHeight = Math.round((finalWidth * aspectH) / aspectW);
} else {
finalWidth = 1024;
finalHeight = 1024;
}
input = {
prompt,
num_inference_steps: steps,
guidance: guidanceScale,
num_outputs: 1,
aspect_ratio: fluxAspectRatio,
output_format: 'webp',
output_quality: 90,
};
}
// FLUX Dev / FLUX Krea Dev - Supports dimensions and img2img
else if (modelId.includes('flux-krea-dev') || modelId.includes('flux-dev')) {
input = {
prompt,
num_inference_steps: steps,
guidance_scale: guidanceScale,
num_outputs: 1,
width: finalWidth,
height: finalHeight,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null && strength !== undefined) {
input.image = sourceImageBase64;
input.prompt_strength = 1 - strength; // Flux uses inverse
this.logger.debug(`Added img2img params for Flux Dev, prompt_strength: ${input.prompt_strength}`);
}
}
// Ideogram V3 Turbo - Uses aspect_ratio
else if (modelId.includes('ideogram-v3-turbo') || modelId.includes('ideogram')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
model: 'turbo',
style_type: 'auto',
};
if (seed) input.seed = seed;
}
// Imagen 4 Fast - Uses aspect_ratio
else if (modelId.includes('imagen-4-fast') || modelId.includes('imagen')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
safety_tolerance: 2,
output_format: 'png',
};
}
// SDXL Lightning - 4 steps, no guidance, supports img2img
else if (modelId.includes('sdxl-lightning')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: 4, // Always 4 for Lightning
guidance_scale: 0, // No guidance for Lightning
disable_safety_checker: false,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null && strength !== undefined) {
input.image = sourceImageBase64;
input.strength = strength;
this.logger.debug(`Added img2img params for SDXL Lightning, strength: ${input.strength}`);
}
if (seed) input.seed = seed;
}
// Regular SDXL - Full parameters, supports img2img
else if (modelId.includes('sdxl')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: steps,
guidance_scale: guidanceScale,
refine: 'expert_ensemble_refiner',
high_noise_frac: 0.8,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null && strength !== undefined) {
input.image = sourceImageBase64;
input.prompt_strength = strength;
this.logger.debug(`Added img2img params for SDXL, prompt_strength: ${input.prompt_strength}`);
}
if (seed) input.seed = seed;
}
// SeeDream 4 - Uses size preset and aspect_ratio
else if (modelId.includes('seedream-4')) {
let sizePreset = '2K';
if (finalWidth >= 4096 || finalHeight >= 4096) {
sizePreset = '4K';
} else if (finalWidth <= 1024 && finalHeight <= 1024) {
sizePreset = '1K';
}
input = {
prompt,
size: sizePreset,
width: finalWidth,
height: finalHeight,
max_images: 1,
aspect_ratio: simplifiedRatio,
};
if (sourceImageBase64 && strength !== null && strength !== undefined) {
input.image_input = [sourceImageBase64];
this.logger.debug('Added img2img params for SeeDream 4');
}
}
// SeeDream 3 - Standard dimensions
else if (modelId.includes('seedream-3') || modelId.includes('seedream')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: steps,
guidance_scale: guidanceScale,
};
if (seed) input.seed = seed;
}
// FLUX 1.1 Pro - Uses aspect_ratio
else if (modelId.includes('flux-1.1-pro')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
output_format: 'webp',
output_quality: 90,
safety_tolerance: 2,
};
if (seed) input.seed = seed;
}
// Recraft V3 SVG - Vector output
else if (modelId.includes('recraft-v3-svg')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
output_format: 'svg',
style: 'vector_illustration',
};
if (seed) input.seed = seed;
}
// Recraft V3 - Uses size parameter
else if (modelId.includes('recraft-v3') || modelId.includes('recraft')) {
input = {
prompt,
size: `${finalWidth}x${finalHeight}`,
style: 'realistic_image',
};
}
// Stable Diffusion 3.5 Large
else if (modelId.includes('stable-diffusion-3.5') || modelId.includes('sd-3-5')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
cfg: guidanceScale,
steps: steps,
output_format: 'webp',
output_quality: 90,
};
if (seed) input.seed = seed;
}
// Qwen Image - Specific parameter requirements
else if (modelId.includes('qwen-image') || modelId.includes('qwen')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
num_inference_steps: steps,
guidance: guidanceScale,
go_fast: true,
image_size: 'optimize_for_quality',
output_format: 'webp',
output_quality: 90,
enhance_prompt: false,
disable_safety_checker: false,
};
if (seed) input.seed = seed;
}
// Default/fallback for unknown models
else {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: steps,
guidance_scale: guidanceScale,
};
if (seed) input.seed = seed;
}
return { input, finalWidth, finalHeight };
}
/**
* Determine output format from model ID and output URL
*/
private determineOutputFormat(
modelId: string,
outputUrl: string,
): { format: string; contentType: string } {
if (modelId.includes('recraft-v3-svg')) {
return { format: 'svg', contentType: 'image/svg+xml' };
}
if (modelId.includes('imagen-4')) {
return { format: 'png', contentType: 'image/png' };
}
if (outputUrl.includes('.png')) {
return { format: 'png', contentType: 'image/png' };
}
if (outputUrl.includes('.jpg') || outputUrl.includes('.jpeg')) {
return { format: 'jpeg', contentType: 'image/jpeg' };
}
// Default to webp
return { format: 'webp', contentType: 'image/webp' };
}
/**
* Extract output URL from various response formats
*/
private extractOutputUrl(output: string[] | string | { url?: string } | any): string {
if (Array.isArray(output)) {
return output[0];
}
if (typeof output === 'string') {
return output;
}
if (output && typeof output === 'object' && output.url) {
return output.url;
}
throw new Error('Unexpected output format from model');
}
/**
* Main function: Process image generation via Replicate API
* Handles all model-specific parameter mapping and polling
*/
async processGeneration(params: GenerationParams): Promise<GenerationResult> {
const startTime = Date.now();
if (!this.apiToken) {
return {
success: false,
error: 'Replicate not configured',
};
}
try {
this.logger.log('=== PROCESS GENERATION START ===');
this.logger.log(`Model: ${params.modelId}`);
this.logger.debug(`Prompt: ${params.prompt.substring(0, 100)}...`);
// Handle image-to-image conversion if needed
let sourceImageBase64: string | null = null;
if (params.sourceImageUrl && params.strength !== null && params.strength !== undefined) {
this.logger.log('Image-to-image mode detected');
sourceImageBase64 = await this.convertImageToBase64(params.sourceImageUrl);
}
// Build model-specific input
const { input, finalWidth, finalHeight } = this.buildModelInput(params, sourceImageBase64);
this.logger.debug(`Replicate API input: ${JSON.stringify(input, null, 2)}`);
// Prepare Replicate API request
const requestBody: any = { input };
if (params.modelVersion) {
requestBody.version = params.modelVersion;
this.logger.debug(`Using version hash: ${params.modelVersion}`);
} else {
requestBody.model = params.modelId;
this.logger.debug(`Using model ID (official model): ${params.modelId}`);
}
// Call Replicate API to start prediction
this.logger.log('Calling Replicate API...');
const replicateResponse = await fetch('https://api.replicate.com/v1/predictions', {
method: 'POST',
headers: {
Authorization: `Token ${this.apiToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody),
});
if (!replicateResponse.ok) {
const errorText = await replicateResponse.text();
this.logger.error(`Replicate API error: ${errorText}`);
throw new Error(`Replicate API error (${replicateResponse.status}): ${errorText}`);
}
const prediction = await replicateResponse.json();
this.logger.log(`Prediction created: ${prediction.id}, Status: ${prediction.status}`);
// Poll for completion
const maxAttempts = 120; // 10 minutes max (5 second intervals)
let attempts = 0;
while (attempts < maxAttempts) {
await new Promise((resolve) => setTimeout(resolve, 5000)); // Poll every 5 seconds
attempts++;
const statusResponse = await fetch(
`https://api.replicate.com/v1/predictions/${prediction.id}`,
{
headers: {
Authorization: `Token ${this.apiToken}`,
},
},
);
if (!statusResponse.ok) {
this.logger.warn('Failed to get prediction status');
continue; // Retry
}
const result = await statusResponse.json();
this.logger.debug(`Poll ${attempts}: ${result.status}`);
// Success - Extract output URL
if (result.status === 'succeeded' && result.output) {
const outputUrl = this.extractOutputUrl(result.output);
this.logger.log(`Generation succeeded! Output URL: ${outputUrl}`);
const { format } = this.determineOutputFormat(params.modelId, outputUrl);
const generationTime = Math.floor((Date.now() - startTime) / 1000);
this.logger.log('=== PROCESS GENERATION COMPLETE ===');
this.logger.log(`Time taken: ${generationTime} seconds`);
return {
success: true,
outputUrl,
format,
width: finalWidth,
height: finalHeight,
generationTimeSeconds: generationTime,
};
}
// Failed or canceled
if (result.status === 'failed' || result.status === 'canceled') {
const errorMsg = result.error || `Generation ${result.status}`;
this.logger.error(`Generation failed: ${errorMsg}`);
throw new Error(errorMsg);
}
}
// Timeout after max attempts
throw new Error('Generation timeout after 10 minutes');
} catch (error: any) {
this.logger.error(`Error in processGeneration: ${error.message}`);
return {
success: false,
error: error.message || 'Unknown error during generation',
};
}
}
/**
* Create a prediction and return immediately (for webhook-based flow)
*/
async createPrediction(
modelId: string,
version: string,
input: PredictionInput,
params: GenerationParams,
webhookUrl?: string,
): Promise<Prediction> {
if (!this.replicate) {
if (!this.apiToken) {
throw new Error('Replicate not configured');
}
try {
const prediction = await this.replicate.predictions.create({
version,
// Handle image-to-image conversion if needed
let sourceImageBase64: string | null = null;
if (params.sourceImageUrl && params.strength !== null && params.strength !== undefined) {
sourceImageBase64 = await this.convertImageToBase64(params.sourceImageUrl);
}
// Build model-specific input
const { input } = this.buildModelInput(params, sourceImageBase64);
const requestBody: any = {
input,
webhook: webhookUrl,
webhook_events_filter: ['completed'],
};
if (version) {
requestBody.version = version;
} else {
requestBody.model = modelId;
}
const response = await fetch('https://api.replicate.com/v1/predictions', {
method: 'POST',
headers: {
Authorization: `Token ${this.apiToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody),
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(`Replicate API error: ${errorText}`);
}
const prediction = await response.json();
return {
id: prediction.id,
status: prediction.status as Prediction['status'],
output: prediction.output as string[] | string | undefined,
error: prediction.error as string | undefined,
output: prediction.output,
error: prediction.error,
};
} catch (error) {
this.logger.error('Error creating prediction', error);
@ -69,19 +588,32 @@ export class ReplicateService {
}
async getPrediction(predictionId: string): Promise<Prediction> {
if (!this.replicate) {
if (!this.apiToken) {
throw new Error('Replicate not configured');
}
try {
const prediction = await this.replicate.predictions.get(predictionId);
const response = await fetch(
`https://api.replicate.com/v1/predictions/${predictionId}`,
{
headers: {
Authorization: `Token ${this.apiToken}`,
},
},
);
if (!response.ok) {
throw new Error(`Failed to get prediction: ${response.status}`);
}
const prediction = await response.json();
return {
id: prediction.id,
status: prediction.status as Prediction['status'],
output: prediction.output as string[] | string | undefined,
error: prediction.error as string | undefined,
metrics: prediction.metrics as Prediction['metrics'],
output: prediction.output,
error: prediction.error,
metrics: prediction.metrics,
};
} catch (error) {
this.logger.error(`Error getting prediction ${predictionId}`, error);
@ -90,35 +622,20 @@ export class ReplicateService {
}
async cancelPrediction(predictionId: string): Promise<void> {
if (!this.replicate) {
if (!this.apiToken) {
throw new Error('Replicate not configured');
}
try {
await this.replicate.predictions.cancel(predictionId);
await fetch(`https://api.replicate.com/v1/predictions/${predictionId}/cancel`, {
method: 'POST',
headers: {
Authorization: `Token ${this.apiToken}`,
},
});
} catch (error) {
this.logger.error(`Error canceling prediction ${predictionId}`, error);
throw error;
}
}
async waitForPrediction(
predictionId: string,
timeoutMs: number = 300000, // 5 minutes
pollIntervalMs: number = 2000,
): Promise<Prediction> {
const startTime = Date.now();
while (Date.now() - startTime < timeoutMs) {
const prediction = await this.getPrediction(predictionId);
if (prediction.status === 'succeeded' || prediction.status === 'failed' || prediction.status === 'canceled') {
return prediction;
}
await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
}
throw new Error('Prediction timed out');
}
}

View file

@ -36,3 +36,9 @@ export class ToggleFavoriteDto {
@IsBoolean()
isFavorite: boolean;
}
export class BatchImageIdsDto {
@IsArray()
@IsString({ each: true })
imageIds: string[];
}

View file

@ -1,6 +1,7 @@
import {
Controller,
Get,
Post,
Patch,
Delete,
Param,
@ -14,7 +15,7 @@ import {
CurrentUser,
CurrentUserData,
} from '../common/decorators/current-user.decorator';
import { GetImagesQueryDto, ToggleFavoriteDto } from './dto/image.dto';
import { GetImagesQueryDto, ToggleFavoriteDto, BatchImageIdsDto } from './dto/image.dto';
@Controller('images')
@UseGuards(JwtAuthGuard)
@ -85,4 +86,69 @@ export class ImageController {
) {
return this.imageService.toggleFavorite(id, user.userId, dto.isFavorite);
}
@Get('archived/count')
async getArchivedCount(@CurrentUser() user: CurrentUserData) {
return this.imageService.getArchivedCount(user.userId);
}
@Post('batch/archive')
async batchArchive(
@CurrentUser() user: CurrentUserData,
@Body() dto: BatchImageIdsDto,
) {
return this.imageService.batchArchiveImages(dto.imageIds, user.userId);
}
@Post('batch/restore')
async batchRestore(
@CurrentUser() user: CurrentUserData,
@Body() dto: BatchImageIdsDto,
) {
return this.imageService.batchRestoreImages(dto.imageIds, user.userId);
}
@Post('batch/delete')
async batchDelete(
@CurrentUser() user: CurrentUserData,
@Body() dto: BatchImageIdsDto,
) {
return this.imageService.batchDeleteImages(dto.imageIds, user.userId);
}
// ==================== LIKES ====================
@Post(':id/like')
async likeImage(
@CurrentUser() user: CurrentUserData,
@Param('id') id: string,
) {
return this.imageService.likeImage(id, user.userId);
}
@Delete(':id/like')
async unlikeImage(
@CurrentUser() user: CurrentUserData,
@Param('id') id: string,
) {
return this.imageService.unlikeImage(id, user.userId);
}
@Get(':id/likes')
async getLikeStatus(
@CurrentUser() user: CurrentUserData,
@Param('id') id: string,
) {
return this.imageService.getLikeStatus(id, user.userId);
}
// ==================== GENERATION DETAILS ====================
@Get('generation/:generationId')
async getGenerationDetails(
@CurrentUser() user: CurrentUserData,
@Param('generationId') generationId: string,
) {
return this.imageService.getGenerationDetails(generationId, user.userId);
}
}

View file

@ -8,7 +8,14 @@ import {
import { eq, and, isNull, isNotNull, desc, inArray, sql } from 'drizzle-orm';
import { DATABASE_CONNECTION } from '../db/database.module';
import { type Database } from '../db/connection';
import { images, imageTags, type Image } from '../db/schema';
import {
images,
imageTags,
imageLikes,
imageGenerations,
type Image,
type ImageGeneration,
} from '../db/schema';
import { GetImagesQueryDto } from './dto/image.dto';
@Injectable()
@ -268,6 +275,263 @@ export class ImageService {
}
}
async getArchivedCount(userId: string): Promise<{ count: number }> {
try {
const result = await this.db
.select({ count: sql<number>`count(*)` })
.from(images)
.where(and(eq(images.userId, userId), isNotNull(images.archivedAt)));
return { count: Number(result[0]?.count || 0) };
} catch (error) {
this.logger.error('Error getting archived count', error);
throw error;
}
}
async batchArchiveImages(
imageIds: string[],
userId: string,
): Promise<{ affected: number }> {
try {
const result = await this.db
.update(images)
.set({
archivedAt: new Date(),
updatedAt: new Date(),
})
.where(and(inArray(images.id, imageIds), eq(images.userId, userId)))
.returning();
return { affected: result.length };
} catch (error) {
this.logger.error('Error batch archiving images', error);
throw error;
}
}
async batchRestoreImages(
imageIds: string[],
userId: string,
): Promise<{ affected: number }> {
try {
const result = await this.db
.update(images)
.set({
archivedAt: null,
updatedAt: new Date(),
})
.where(and(inArray(images.id, imageIds), eq(images.userId, userId)))
.returning();
return { affected: result.length };
} catch (error) {
this.logger.error('Error batch restoring images', error);
throw error;
}
}
async batchDeleteImages(
imageIds: string[],
userId: string,
): Promise<{ affected: number }> {
try {
// Delete image-tag relations first
await this.db.delete(imageTags).where(inArray(imageTags.imageId, imageIds));
// Delete the images (only those owned by user)
const result = await this.db
.delete(images)
.where(and(inArray(images.id, imageIds), eq(images.userId, userId)))
.returning();
return { affected: result.length };
} catch (error) {
this.logger.error('Error batch deleting images', error);
throw error;
}
}
// ==================== LIKES ====================
async likeImage(
imageId: string,
userId: string,
): Promise<{ liked: boolean; likeCount: number }> {
try {
// Check if image exists and is public (or owned by user)
const image = await this.db
.select()
.from(images)
.where(eq(images.id, imageId))
.limit(1);
if (image.length === 0) {
throw new NotFoundException(`Image with id ${imageId} not found`);
}
// Only allow liking public images (or own images)
if (!image[0].isPublic && image[0].userId !== userId) {
throw new ForbiddenException('Cannot like a private image');
}
// Check if already liked
const existingLike = await this.db
.select()
.from(imageLikes)
.where(
and(eq(imageLikes.imageId, imageId), eq(imageLikes.userId, userId)),
)
.limit(1);
if (existingLike.length > 0) {
// Already liked, return current state
const count = await this.getLikeCount(imageId);
return { liked: true, likeCount: count };
}
// Add like
await this.db.insert(imageLikes).values({
imageId,
userId,
});
const count = await this.getLikeCount(imageId);
return { liked: true, likeCount: count };
} catch (error) {
if (
error instanceof NotFoundException ||
error instanceof ForbiddenException
) {
throw error;
}
this.logger.error(`Error liking image ${imageId}`, error);
throw error;
}
}
async unlikeImage(
imageId: string,
userId: string,
): Promise<{ liked: boolean; likeCount: number }> {
try {
// Check if image exists
const image = await this.db
.select()
.from(images)
.where(eq(images.id, imageId))
.limit(1);
if (image.length === 0) {
throw new NotFoundException(`Image with id ${imageId} not found`);
}
// Delete like
await this.db
.delete(imageLikes)
.where(
and(eq(imageLikes.imageId, imageId), eq(imageLikes.userId, userId)),
);
const count = await this.getLikeCount(imageId);
return { liked: false, likeCount: count };
} catch (error) {
if (error instanceof NotFoundException) {
throw error;
}
this.logger.error(`Error unliking image ${imageId}`, error);
throw error;
}
}
async getLikeStatus(
imageId: string,
userId: string,
): Promise<{ liked: boolean; likeCount: number }> {
try {
// Check if image exists
const image = await this.db
.select()
.from(images)
.where(eq(images.id, imageId))
.limit(1);
if (image.length === 0) {
throw new NotFoundException(`Image with id ${imageId} not found`);
}
// Check if liked by user
const existingLike = await this.db
.select()
.from(imageLikes)
.where(
and(eq(imageLikes.imageId, imageId), eq(imageLikes.userId, userId)),
)
.limit(1);
const count = await this.getLikeCount(imageId);
return {
liked: existingLike.length > 0,
likeCount: count,
};
} catch (error) {
if (error instanceof NotFoundException) {
throw error;
}
this.logger.error(`Error getting like status for image ${imageId}`, error);
throw error;
}
}
private async getLikeCount(imageId: string): Promise<number> {
const result = await this.db
.select({ count: sql<number>`count(*)` })
.from(imageLikes)
.where(eq(imageLikes.imageId, imageId));
return Number(result[0]?.count || 0);
}
// ==================== GENERATION DETAILS ====================
async getGenerationDetails(
generationId: string,
userId: string,
): Promise<Partial<ImageGeneration> | null> {
try {
const result = await this.db
.select({
steps: imageGenerations.steps,
guidanceScale: imageGenerations.guidanceScale,
generationTimeSeconds: imageGenerations.generationTimeSeconds,
status: imageGenerations.status,
})
.from(imageGenerations)
.where(
and(
eq(imageGenerations.id, generationId),
eq(imageGenerations.userId, userId),
),
)
.limit(1);
if (result.length === 0) {
return null;
}
return result[0];
} catch (error) {
this.logger.error(
`Error fetching generation details ${generationId}`,
error,
);
throw error;
}
}
// ==================== PRIVATE HELPERS ====================
private async verifyOwnership(id: string, userId: string): Promise<void> {
const result = await this.db
.select({ userId: images.userId })

View file

@ -0,0 +1,30 @@
import { IsString, IsOptional, MaxLength, MinLength } from 'class-validator';
export class UpdateProfileDto {
@IsOptional()
@IsString()
@MinLength(2)
@MaxLength(50)
username?: string;
@IsOptional()
@IsString()
@MaxLength(500)
avatarUrl?: string;
}
export interface ProfileResponse {
id: string;
username: string | null;
email: string;
avatarUrl: string | null;
createdAt: Date;
updatedAt: Date;
}
export interface UserStatsResponse {
totalImages: number;
favoriteImages: number;
archivedImages: number;
publicImages: number;
}

View file

@ -0,0 +1,43 @@
import {
Controller,
Get,
Patch,
Body,
UseGuards,
} from '@nestjs/common';
import { ProfileService } from './profile.service';
import { JwtAuthGuard } from '../common/guards/jwt-auth.guard';
import {
CurrentUser,
CurrentUserData,
} from '../common/decorators/current-user.decorator';
import { UpdateProfileDto, ProfileResponse, UserStatsResponse } from './dto/profile.dto';
@Controller('profiles')
@UseGuards(JwtAuthGuard)
export class ProfileController {
constructor(private readonly profileService: ProfileService) {}
@Get('me')
async getMyProfile(
@CurrentUser() user: CurrentUserData,
): Promise<ProfileResponse> {
// Get or create profile (ensures profile exists)
return this.profileService.getOrCreateProfile(user.userId, user.email);
}
@Patch('me')
async updateMyProfile(
@CurrentUser() user: CurrentUserData,
@Body() dto: UpdateProfileDto,
): Promise<ProfileResponse> {
return this.profileService.updateProfile(user.userId, dto);
}
@Get('stats')
async getMyStats(
@CurrentUser() user: CurrentUserData,
): Promise<UserStatsResponse> {
return this.profileService.getUserStats(user.userId);
}
}

View file

@ -0,0 +1,10 @@
import { Module } from '@nestjs/common';
import { ProfileController } from './profile.controller';
import { ProfileService } from './profile.service';
@Module({
controllers: [ProfileController],
providers: [ProfileService],
exports: [ProfileService],
})
export class ProfileModule {}

View file

@ -0,0 +1,155 @@
import {
Injectable,
Inject,
NotFoundException,
Logger,
} from '@nestjs/common';
import { eq, and, isNull, isNotNull, sql } from 'drizzle-orm';
import { DATABASE_CONNECTION } from '../db/database.module';
import { type Database } from '../db/connection';
import { profiles, images, type Profile } from '../db/schema';
import { UpdateProfileDto, ProfileResponse, UserStatsResponse } from './dto/profile.dto';
@Injectable()
export class ProfileService {
private readonly logger = new Logger(ProfileService.name);
constructor(@Inject(DATABASE_CONNECTION) private readonly db: Database) {}
async getProfile(userId: string): Promise<ProfileResponse> {
try {
const result = await this.db
.select()
.from(profiles)
.where(eq(profiles.id, userId))
.limit(1);
if (result.length === 0) {
throw new NotFoundException('Profile not found');
}
return result[0];
} catch (error) {
if (error instanceof NotFoundException) {
throw error;
}
this.logger.error(`Error fetching profile for user ${userId}`, error);
throw error;
}
}
async getOrCreateProfile(userId: string, email: string): Promise<ProfileResponse> {
try {
// Try to get existing profile
const existing = await this.db
.select()
.from(profiles)
.where(eq(profiles.id, userId))
.limit(1);
if (existing.length > 0) {
return existing[0];
}
// Create new profile
const newProfile = await this.db
.insert(profiles)
.values({
id: userId,
email,
username: null,
})
.returning();
return newProfile[0];
} catch (error) {
this.logger.error(`Error getting/creating profile for user ${userId}`, error);
throw error;
}
}
async updateProfile(
userId: string,
dto: UpdateProfileDto,
): Promise<ProfileResponse> {
try {
// Check if profile exists
const existing = await this.db
.select()
.from(profiles)
.where(eq(profiles.id, userId))
.limit(1);
if (existing.length === 0) {
throw new NotFoundException('Profile not found');
}
const result = await this.db
.update(profiles)
.set({
...dto,
updatedAt: new Date(),
})
.where(eq(profiles.id, userId))
.returning();
return result[0];
} catch (error) {
if (error instanceof NotFoundException) {
throw error;
}
this.logger.error(`Error updating profile for user ${userId}`, error);
throw error;
}
}
async getUserStats(userId: string): Promise<UserStatsResponse> {
try {
// Get total images (non-archived)
const totalResult = await this.db
.select({ count: sql<number>`count(*)` })
.from(images)
.where(and(eq(images.userId, userId), isNull(images.archivedAt)));
// Get favorite images
const favoriteResult = await this.db
.select({ count: sql<number>`count(*)` })
.from(images)
.where(
and(
eq(images.userId, userId),
eq(images.isFavorite, true),
isNull(images.archivedAt),
),
);
// Get archived images
const archivedResult = await this.db
.select({ count: sql<number>`count(*)` })
.from(images)
.where(and(eq(images.userId, userId), isNotNull(images.archivedAt)));
// Get public images
const publicResult = await this.db
.select({ count: sql<number>`count(*)` })
.from(images)
.where(
and(
eq(images.userId, userId),
eq(images.isPublic, true),
isNull(images.archivedAt),
),
);
return {
totalImages: Number(totalResult[0]?.count || 0),
favoriteImages: Number(favoriteResult[0]?.count || 0),
archivedImages: Number(archivedResult[0]?.count || 0),
publicImages: Number(publicResult[0]?.count || 0),
};
} catch (error) {
this.logger.error(`Error fetching stats for user ${userId}`, error);
throw error;
}
}
}

View file

@ -1,8 +1,8 @@
import { useState, useEffect } from 'react';
import { useState } from 'react';
import { Alert, KeyboardAvoidingView, Platform, TextInput, View } from 'react-native';
import { Link, router } from 'expo-router';
import { SafeAreaView } from 'react-native-safe-area-context';
import { supabase } from '~/utils/supabase';
import { useAuth } from '~/contexts/AuthContext';
import { useTheme } from '~/contexts/ThemeContext';
import { Button } from '~/components/Button';
import { Text } from '~/components/Text';
@ -10,43 +10,14 @@ import { Container } from '~/components/Container';
export default function LoginScreen() {
const { theme } = useTheme();
const { signIn } = useAuth();
const [email, setEmail] = useState('');
const [password, setPassword] = useState('');
const [loading, setLoading] = useState(false);
// Test if JavaScript is running
useEffect(() => {
console.log('LoginScreen mounted - JavaScript is running');
console.log('Platform:', Platform.OS);
if (Platform.OS === 'web') {
// Add click handler directly to window to test
const testHandler = (e: any) => {
console.log('Window click detected at:', e.clientX, e.clientY);
};
window.addEventListener('click', testHandler);
// Add visible debug element to web page
const debugDiv = document.createElement('div');
debugDiv.style.position = 'fixed';
debugDiv.style.top = '10px';
debugDiv.style.left = '10px';
debugDiv.style.backgroundColor = 'red';
debugDiv.style.color = 'white';
debugDiv.style.padding = '10px';
debugDiv.style.zIndex = '9999';
debugDiv.textContent = 'React Native Web is running!';
document.body.appendChild(debugDiv);
return () => {
window.removeEventListener('click', testHandler);
debugDiv.remove();
};
}
}, []);
async function signInWithEmail() {
console.log('signInWithEmail called', { email, password: '***' });
if (!email || !password) {
if (Platform.OS === 'web') {
alert('Bitte E-Mail und Passwort eingeben');
@ -57,17 +28,15 @@ export default function LoginScreen() {
}
setLoading(true);
const { error } = await supabase.auth.signInWithPassword({
email: email.trim(),
password: password,
});
const { error } = await signIn(email.trim(), password);
if (error) {
console.error('Login error:', error);
const errorMessage = error.message || 'Anmeldung fehlgeschlagen';
if (Platform.OS === 'web') {
alert(`Login fehlgeschlagen: ${error.message}`);
alert(`Login fehlgeschlagen: ${errorMessage}`);
} else {
Alert.alert('Login fehlgeschlagen', error.message);
Alert.alert('Login fehlgeschlagen', errorMessage);
}
} else {
console.log('Login successful, redirecting...');
@ -89,7 +58,7 @@ export default function LoginScreen() {
return (
<SafeAreaView style={{ flex: 1, backgroundColor: theme.colors.background }}>
<Container>
<KeyboardAvoidingView
<KeyboardAvoidingView
behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
className="flex-1"
>
@ -128,7 +97,7 @@ export default function LoginScreen() {
/>
</View>
<Button
<Button
title={loading ? "Anmelden..." : "Anmelden"}
onPress={signInWithEmail}
disabled={loading}
@ -153,4 +122,4 @@ export default function LoginScreen() {
</Container>
</SafeAreaView>
);
}
}

View file

@ -2,7 +2,7 @@ import { useState } from 'react';
import { Alert, KeyboardAvoidingView, Platform, TextInput, View } from 'react-native';
import { Link, router } from 'expo-router';
import { SafeAreaView } from 'react-native-safe-area-context';
import { supabase } from '~/utils/supabase';
import { useAuth } from '~/contexts/AuthContext';
import { useTheme } from '~/contexts/ThemeContext';
import { Button } from '~/components/Button';
import { Text } from '~/components/Text';
@ -10,6 +10,7 @@ import { Container } from '~/components/Container';
export default function RegisterScreen() {
const { theme } = useTheme();
const { signUp } = useAuth();
const [email, setEmail] = useState('');
const [password, setPassword] = useState('');
const [username, setUsername] = useState('');
@ -27,39 +28,32 @@ export default function RegisterScreen() {
}
setLoading(true);
const { data, error } = await supabase.auth.signUp({
email: email.trim(),
password: password,
options: {
data: {
username: username.trim(),
},
},
});
const { data, error } = await signUp(email.trim(), password, username.trim());
if (error) {
Alert.alert('Registrierung fehlgeschlagen', error.message);
const errorMessage = error.message || 'Registrierung fehlgeschlagen';
Alert.alert('Registrierung fehlgeschlagen', errorMessage);
setLoading(false);
return;
}
if (data?.user) {
// Profile will be created automatically by database trigger
if (data) {
// User is now signed in after successful signup
Alert.alert(
'Registrierung erfolgreich!',
'Bitte überprüfe deine E-Mail, um dein Konto zu bestätigen.',
[{ text: 'OK', onPress: () => router.replace('/(auth)/login') }]
'Du bist jetzt angemeldet.',
[{ text: 'OK', onPress: () => router.replace('/(tabs)/generate') }]
);
}
setLoading(false);
}
return (
<SafeAreaView style={{ flex: 1, backgroundColor: theme.colors.background }}>
<Container>
<KeyboardAvoidingView
<KeyboardAvoidingView
behavior={Platform.OS === 'ios' ? 'padding' : 'height'}
className="flex-1"
>
@ -111,7 +105,7 @@ export default function RegisterScreen() {
/>
</View>
<Button
<Button
title={loading ? "Registrieren..." : "Registrieren"}
onPress={signUpWithEmail}
disabled={loading}
@ -130,4 +124,4 @@ export default function RegisterScreen() {
</Container>
</SafeAreaView>
);
}
}

View file

@ -2,7 +2,7 @@ import { useState } from 'react';
import { Alert, KeyboardAvoidingView, Platform, TextInput, View } from 'react-native';
import { Link, router } from 'expo-router';
import { SafeAreaView } from 'react-native-safe-area-context';
import { supabase } from '~/utils/supabase';
import { useAuth } from '~/contexts/AuthContext';
import { useTheme } from '~/contexts/ThemeContext';
import { Button } from '~/components/Button';
import { Text } from '~/components/Text';
@ -10,23 +10,23 @@ import { Container } from '~/components/Container';
export default function ResetPasswordScreen() {
const { theme } = useTheme();
const { resetPassword: resetPasswordAuth } = useAuth();
const [email, setEmail] = useState('');
const [loading, setLoading] = useState(false);
async function resetPassword() {
async function handleResetPassword() {
if (!email) {
Alert.alert('Fehler', 'Bitte E-Mail-Adresse eingeben');
return;
}
setLoading(true);
const { error } = await supabase.auth.resetPasswordForEmail(email.trim(), {
redirectTo: 'picture://reset-password',
});
const { error } = await resetPasswordAuth(email.trim());
if (error) {
Alert.alert('Fehler', error.message);
const errorMessage = error.message || 'Passwort-Zurücksetzung fehlgeschlagen';
Alert.alert('Fehler', errorMessage);
} else {
Alert.alert(
'E-Mail gesendet!',
@ -34,7 +34,7 @@ export default function ResetPasswordScreen() {
[{ text: 'OK', onPress: () => router.back() }]
);
}
setLoading(false);
}
@ -68,9 +68,9 @@ export default function ResetPasswordScreen() {
/>
</View>
<Button
<Button
title={loading ? "Senden..." : "Link senden"}
onPress={resetPassword}
onPress={handleResetPassword}
disabled={loading}
className="mt-4"
/>

View file

@ -91,17 +91,17 @@ export default function ExploreScreen() {
const renderImage = useCallback(({ item }: { item: ExploreImageItem }) => (
<ImageCard
id={item.id}
publicUrl={item.public_url}
publicUrl={item.publicUrl}
prompt={item.prompt}
createdAt={item.created_at}
createdAt={item.createdAt}
model={item.model}
tags={item.tags}
viewMode={exploreViewMode}
blurhash={item.blurhash}
creatorUsername={item.creator?.username || undefined}
likesCount={item.likes_count}
userHasLiked={item.user_has_liked}
onToggleLike={() => toggleLike(item.id, item.user_has_liked || false)}
likesCount={item.likesCount}
userHasLiked={item.userHasLiked}
onToggleLike={() => toggleLike(item.id, item.userHasLiked || false)}
/>
), [exploreViewMode, toggleLike]);

View file

@ -96,16 +96,13 @@ export default function GalleryScreen() {
.filter(img => img.status === 'generating' || img.status === 'completed')
.map(img => ({
id: img.status === 'completed' && img.realImageId ? img.realImageId : img.tempId,
user_id: user?.id || '',
prompt: img.prompt,
image_url: img.status === 'completed' && img.imageUrl ? img.imageUrl : '',
public_url: img.status === 'completed' && img.imageUrl ? img.imageUrl : null,
width: img.width,
height: img.height,
created_at: new Date(img.startTime).toISOString(),
is_favorite: false,
is_public: false,
publicUrl: img.status === 'completed' && img.imageUrl ? img.imageUrl : null,
createdAt: new Date(img.startTime).toISOString(),
isFavorite: false,
model: img.model,
tags: [],
blurhash: null,
// Mark as generating for special rendering (only while generating)
_isGenerating: img.status === 'generating',
} as any));
@ -152,16 +149,16 @@ export default function GalleryScreen() {
const renderImage = useCallback(({ item }: { item: ImageItem }) => (
<ImageCard
id={item.id}
publicUrl={item.public_url}
publicUrl={item.publicUrl}
prompt={item.prompt}
createdAt={item.created_at}
isFavorite={item.is_favorite}
createdAt={item.createdAt}
isFavorite={item.isFavorite}
model={item.model}
tags={item.tags}
viewMode={galleryViewMode}
blurhash={item.blurhash}
isGenerating={(item as any)._isGenerating}
onToggleFavorite={() => toggleFavorite(item.id, item.is_favorite)}
onToggleFavorite={() => toggleFavorite(item.id, item.isFavorite)}
/>
), [galleryViewMode, toggleFavorite]);

View file

@ -11,22 +11,20 @@ import { useRouter } from 'expo-router';
import { Ionicons } from '@expo/vector-icons';
import { Button } from '~/components/Button';
import { Text } from '~/components/Text';
import { PageHeader } from '~/components/PageHeader';
import { useAuth } from '~/contexts/AuthContext';
import { supabase } from '~/utils/supabase';
import { useModelSelection } from '~/store/modelStore';
import { ThemePicker } from '~/components/ThemePicker';
import { useTheme } from '~/contexts/ThemeContext';
import { useViewStore } from '~/store/viewStore';
import { ViewToggle } from '~/components/ViewToggle';
import { GenerationSettings } from '~/components/settings/GenerationSettings';
import { getArchivedCount } from '~/services/archiveService';
type Profile = {
username: string;
email: string;
avatar_url: string | null;
};
import {
getMyProfile,
updateProfile as apiUpdateProfile,
getUserStats,
type Profile,
type UserStats,
} from '~/services/api/profiles';
export default function ProfileScreen() {
const { user, signOut } = useAuth();
@ -36,7 +34,6 @@ export default function ProfileScreen() {
const [profile, setProfile] = useState<Profile | null>(null);
const [updating, setUpdating] = useState(false);
const [username, setUsername] = useState('');
const [archivedCount, setArchivedCount] = useState(0);
const { resetLoadingState, isLoading: modelsLoading } = useModelSelection();
const { galleryViewMode, setGalleryViewMode, exploreViewMode, setExploreViewMode } = useViewStore();
@ -61,17 +58,7 @@ export default function ProfileScreen() {
console.log('🔍 Fetching profile for user:', user.id);
try {
const { data, error } = await supabase
.from('profiles')
.select('username, email, avatar_url')
.eq('id', user.id)
.single();
if (error) {
console.error('❌ Profile fetch error:', error);
throw error;
}
const data = await getMyProfile();
console.log('✅ Profile fetched:', data);
setProfile(data);
setUsername(data.username || '');
@ -82,17 +69,11 @@ export default function ProfileScreen() {
const updateProfile = async () => {
if (!user) return;
setUpdating(true);
try {
const { error } = await supabase
.from('profiles')
.update({ username: username.trim() })
.eq('id', user.id);
if (error) throw error;
await apiUpdateProfile({ username: username.trim() });
Alert.alert('Erfolg', 'Profil wurde aktualisiert');
fetchProfile();
} catch (error: any) {
@ -127,27 +108,16 @@ export default function ProfileScreen() {
);
};
const getImageStats = async () => {
if (!user) return { total: 0, favorites: 0 };
const { data: images } = await supabase
.from('images')
.select('id, is_favorite, archived_at')
.eq('user_id', user.id)
.is('archived_at', null); // Only count non-archived images
return {
total: images?.length || 0,
favorites: images?.filter(img => img.is_favorite).length || 0
};
};
const [stats, setStats] = useState({ total: 0, favorites: 0 });
const [stats, setStats] = useState<UserStats>({
totalImages: 0,
favoriteImages: 0,
archivedImages: 0,
publicImages: 0,
});
useEffect(() => {
if (user) {
getImageStats().then(setStats);
getArchivedCount(user.id).then(setArchivedCount);
getUserStats().then(setStats).catch(console.error);
}
}, [user]);
@ -185,12 +155,12 @@ export default function ProfileScreen() {
{/* Stats */}
<View style={{ flexDirection: 'row', justifyContent: 'space-around', paddingTop: 16, borderTopWidth: 1, borderTopColor: theme.colors.border }}>
<View style={{ alignItems: 'center' }}>
<Text variant="h2" weight="bold" style={{ color: theme.colors.primary.default }}>{stats.total}</Text>
<Text variant="h2" weight="bold" style={{ color: theme.colors.primary.default }}>{stats.totalImages}</Text>
<Text variant="bodySmall" color="secondary">Bilder</Text>
</View>
<View style={{ width: 1, backgroundColor: theme.colors.border }} />
<View style={{ alignItems: 'center' }}>
<Text variant="h2" weight="bold" style={{ color: theme.colors.primary.default }}>{stats.favorites}</Text>
<Text variant="h2" weight="bold" style={{ color: theme.colors.primary.default }}>{stats.favoriteImages}</Text>
<Text variant="bodySmall" color="secondary">Favoriten</Text>
</View>
</View>
@ -226,16 +196,16 @@ export default function ProfileScreen() {
<View>
<Text variant="body" weight="semibold">Archiv</Text>
<Text variant="bodySmall" color="secondary">
{archivedCount === 0
{stats.archivedImages === 0
? 'Keine archivierten Bilder'
: archivedCount === 1
: stats.archivedImages === 1
? '1 archiviertes Bild'
: `${archivedCount} archivierte Bilder`}
: `${stats.archivedImages} archivierte Bilder`}
</Text>
</View>
</View>
<View style={{ flexDirection: 'row', alignItems: 'center', gap: 8 }}>
{archivedCount > 0 && (
{stats.archivedImages > 0 && (
<View
style={{
backgroundColor: theme.colors.primary.default,
@ -245,7 +215,7 @@ export default function ProfileScreen() {
}}
>
<Text variant="bodySmall" weight="semibold" style={{ color: '#fff' }}>
{archivedCount}
{stats.archivedImages}
</Text>
</View>
)}

View file

@ -137,10 +137,10 @@ export default function ArchiveScreen() {
<View style={{ position: 'relative' }}>
<ImageCard
id={item.id}
publicUrl={item.public_url}
publicUrl={item.publicUrl}
prompt={item.prompt}
createdAt={item.created_at}
isFavorite={item.is_favorite}
createdAt={item.createdAt}
isFavorite={item.isFavorite}
model={item.model}
tags={item.tags}
viewMode="grid3"

View file

@ -11,7 +11,6 @@ import {
} from 'react-native';
import { Image } from 'expo-image';
import { Stack, useLocalSearchParams, useRouter } from 'expo-router';
import { supabase } from '~/utils/supabase';
import { useAuth } from '~/contexts/AuthContext';
import { useTheme } from '~/contexts/ThemeContext';
import * as FileSystem from 'expo-file-system/legacy';
@ -34,33 +33,36 @@ import { Text } from '~/components/Text';
import { RemixBottomSheet } from '~/components/remix/RemixBottomSheet';
import { getThumbnailUrl } from '~/utils/image';
import { archiveImage, restoreImage } from '~/services/archiveService';
import {
getImages,
toggleFavorite as apiToggleFavorite,
publishImage,
unpublishImage,
deleteImage as apiDeleteImage,
getGenerationDetails as apiGetGenerationDetails,
type Image as ApiImage,
type GenerationDetails,
} from '~/services/api/images';
const { width: screenWidth, height: screenHeight } = Dimensions.get('window');
type ImageDetails = {
id: string;
public_url: string;
publicUrl: string | null;
prompt: string;
negative_prompt?: string;
model: string;
width: number;
height: number;
created_at: string;
is_favorite: boolean;
is_public: boolean;
generation_id: string;
user_id: string;
file_size: number;
format: string;
storage_path?: string;
archived_at?: string | null;
};
type GenerationDetails = {
steps: number;
guidance_scale: number;
generation_time_seconds?: number;
status: string;
negativePrompt?: string;
model: string | null;
width: number | null;
height: number | null;
createdAt: string;
isFavorite: boolean;
isPublic: boolean;
generationId?: string;
userId: string;
fileSize: number | null;
format: string | null;
storagePath: string;
archivedAt?: string | null;
};
// Separate component for zoomable image to use hooks properly
@ -96,7 +98,7 @@ function ZoomableImage({
// Download image to cache directory
const fileUri = `${FileSystem.cacheDirectory}picture_share_${item.id}.jpg`;
const downloadResult = await FileSystem.downloadAsync(item.public_url, fileUri);
const downloadResult = await FileSystem.downloadAsync(item.publicUrl!, fileUri);
if (downloadResult.status !== 200) {
throw new Error('Download failed');
@ -207,12 +209,12 @@ function ZoomableImage({
<GestureDetector gesture={composed}>
<Animated.View style={[{ flex: 1 }, animatedStyle]}>
<Image
source={{ uri: getThumbnailUrl(item.public_url, 'full') }}
source={{ uri: getThumbnailUrl(item.publicUrl, 'full') || undefined }}
style={{ width: screenWidth, height: screenHeight }}
contentFit="contain"
transition={300}
cachePolicy="memory-disk"
placeholder={{ uri: getThumbnailUrl(item.public_url, 'medium') }}
placeholder={{ uri: getThumbnailUrl(item.publicUrl, 'medium') || undefined }}
/>
</Animated.View>
</GestureDetector>
@ -234,6 +236,7 @@ export default function ImageDetailScreen() {
const [image, setImage] = useState<ImageDetails | null>(null);
const [generation, setGeneration] = useState<GenerationDetails | null>(null);
// GenerationDetails type is now imported from API
const [loading, setLoading] = useState(true);
const [detailsLoading, setDetailsLoading] = useState(false);
const [downloading, setDownloading] = useState(false);
@ -265,7 +268,7 @@ export default function ImageDetailScreen() {
// Load generation details and tags in parallel
await Promise.all([
fetchGenerationDetails(currentImage.generation_id),
currentImage.generationId ? fetchGenerationDetails(currentImage.generationId) : Promise.resolve(),
fetchImageTags(currentImage.id).then(() => {
const tags = getImageTags(currentImage.id);
setImageTags(tags);
@ -286,38 +289,39 @@ export default function ImageDetailScreen() {
if (!user) return;
try {
const { data: imageData, error } = await supabase
.from('images')
.select('*')
.eq('user_id', user.id)
.order('created_at', { ascending: false });
// Fetch all images (non-archived) via API
// Using a large limit to get all images for gallery navigation
const imageData = await getImages({
page: 1,
limit: 1000, // Large limit to get all images
archived: false,
});
if (error) throw error;
if (imageData && imageData.length > 0) {
// Map API response to ImageDetails type
const imagesWithDetails: ImageDetails[] = imageData.map(img => ({
id: img.id,
publicUrl: img.publicUrl || null,
prompt: img.prompt,
negativePrompt: img.negativePrompt,
model: img.model || null,
width: img.width || null,
height: img.height || null,
createdAt: img.createdAt,
isFavorite: img.isFavorite,
isPublic: img.isPublic,
generationId: img.generationId,
userId: img.userId,
fileSize: img.fileSize || null,
format: img.format || null,
storagePath: img.storagePath,
archivedAt: img.archivedAt,
}));
if (imageData) {
// Fix public URLs if needed
const imagesWithUrls = await Promise.all(
imageData.map(async (img) => {
if (!img.public_url && img.storage_path) {
const { data: urlData } = supabase.storage
.from('generated-images')
.getPublicUrl(img.storage_path);
img.public_url = urlData.publicUrl;
await supabase
.from('images')
.update({ public_url: urlData.publicUrl })
.eq('id', img.id);
}
return img;
})
);
setAllImages(imagesWithUrls);
setAllImages(imagesWithDetails);
// Find initial index based on the id param
const initialIndex = imagesWithUrls.findIndex(img => img.id === id);
const initialIndex = imagesWithDetails.findIndex(img => img.id === id);
if (initialIndex !== -1) {
setCurrentIndex(initialIndex);
}
@ -333,13 +337,8 @@ export default function ImageDetailScreen() {
if (!generationId) return;
try {
const { data: genData, error } = await supabase
.from('image_generations')
.select('steps, guidance_scale, generation_time_seconds, status')
.eq('id', generationId)
.single();
if (!error && genData) {
const genData = await apiGetGenerationDetails(generationId);
if (genData) {
setGeneration(genData);
}
} catch (error) {
@ -357,14 +356,9 @@ export default function ImageDetailScreen() {
if (!image) return;
try {
const { error } = await supabase
.from('images')
.update({ is_favorite: !image.is_favorite })
.eq('id', image.id);
if (!error) {
setImage({ ...image, is_favorite: !image.is_favorite });
}
const newFavoriteStatus = !image.isFavorite;
await apiToggleFavorite(image.id, newFavoriteStatus);
setImage({ ...image, isFavorite: newFavoriteStatus });
} catch (error) {
console.error('Error toggling favorite:', error);
}
@ -374,13 +368,12 @@ export default function ImageDetailScreen() {
if (!image) return;
try {
const { error } = await supabase
.from('images')
.update({ is_public: !image.is_public })
.eq('id', image.id);
if (!error) {
setImage({ ...image, is_public: !image.is_public });
if (image.isPublic) {
await unpublishImage(image.id);
setImage({ ...image, isPublic: false });
} else {
await publishImage(image.id);
setImage({ ...image, isPublic: true });
}
} catch (error) {
console.error('Error toggling public:', error);
@ -388,7 +381,7 @@ export default function ImageDetailScreen() {
};
const handleDownload = async () => {
if (!image?.public_url) return;
if (!image?.publicUrl) return;
setDownloading(true);
try {
@ -400,7 +393,7 @@ export default function ImageDetailScreen() {
// Use cache directory for temporary download
const fileUri = `${FileSystem.cacheDirectory}picture_${image.id}.${image.format || 'webp'}`;
const downloadResult = await FileSystem.downloadAsync(image.public_url, fileUri);
const downloadResult = await FileSystem.downloadAsync(image.publicUrl, fileUri);
if (downloadResult.status !== 200) throw new Error('Download fehlgeschlagen');
@ -417,7 +410,7 @@ export default function ImageDetailScreen() {
};
const handleShare = async () => {
if (!image?.public_url) return;
if (!image?.publicUrl) return;
try {
// Check if sharing is available
@ -430,7 +423,7 @@ export default function ImageDetailScreen() {
// Download image to cache directory
const fileUri = `${FileSystem.cacheDirectory}picture_share_${image.id}.jpg`;
const downloadResult = await FileSystem.downloadAsync(image.public_url, fileUri);
const downloadResult = await FileSystem.downloadAsync(image.publicUrl, fileUri);
if (downloadResult.status !== 200) {
throw new Error('Download failed');
@ -458,7 +451,7 @@ export default function ImageDetailScreen() {
if (!image) return;
try {
if (image.archived_at) {
if (image.archivedAt) {
// Restore from archive
await restoreImage(image.id);
Alert.alert('✓', 'Bild wurde wiederhergestellt', [
@ -489,18 +482,7 @@ export default function ImageDetailScreen() {
onPress: async () => {
setDeleting(true);
try {
if (image?.storage_path) {
await supabase.storage
.from('generated-images')
.remove([image.storage_path]);
}
const { error: dbError } = await supabase
.from('images')
.delete()
.eq('id', id);
if (dbError) throw dbError;
await apiDeleteImage(id!);
Alert.alert('Erfolg', 'Bild wurde gelöscht', [
{ text: 'OK', onPress: () => router.back() }
@ -517,7 +499,8 @@ export default function ImageDetailScreen() {
);
};
const formatFileSize = (bytes: number) => {
const formatFileSize = (bytes: number | null) => {
if (!bytes) return '-';
if (bytes < 1024) return bytes + ' B';
if (bytes < 1048576) return Math.round(bytes / 1024) + ' KB';
return (bytes / 1048576).toFixed(2) + ' MB';
@ -681,9 +664,9 @@ export default function ImageDetailScreen() {
}}
>
<Ionicons
name={image.is_favorite ? 'heart' : 'heart-outline'}
name={image.isFavorite ? 'heart' : 'heart-outline'}
size={24}
color={image.is_favorite ? '#ef4444' : '#fff'}
color={image.isFavorite ? '#ef4444' : '#fff'}
/>
</Pressable>
</View>
@ -713,7 +696,7 @@ export default function ImageDetailScreen() {
style={styles.actionButton}
>
<Ionicons
name={image.is_public ? 'globe-outline' : 'lock-closed-outline'}
name={image.isPublic ? 'globe-outline' : 'lock-closed-outline'}
size={24}
color="#fff"
/>
@ -751,7 +734,7 @@ export default function ImageDetailScreen() {
onPress: handleCopyPrompt,
},
{
text: image.archived_at ? 'Wiederherstellen' : 'Archivieren',
text: image.archivedAt ? 'Wiederherstellen' : 'Archivieren',
onPress: handleArchiveToggle,
},
{
@ -824,12 +807,12 @@ export default function ImageDetailScreen() {
</View>
{/* Negative Prompt */}
{image.negative_prompt && (
{image.negativePrompt && (
<View style={{ marginBottom: 16 }}>
<Text variant="bodySmall" color="secondary" style={{ marginBottom: 4 }}>
Negativer Prompt
</Text>
<Text variant="body">{image.negative_prompt}</Text>
<Text variant="body">{image.negativePrompt}</Text>
</View>
)}
@ -852,10 +835,10 @@ export default function ImageDetailScreen() {
paddingHorizontal: 12,
paddingVertical: 6,
borderRadius: 12,
backgroundColor: `${tag.color}20`,
backgroundColor: `${tag.color || '#888888'}20`,
}}
>
<Text style={{ color: tag.color, fontSize: 12 }}>
<Text style={{ color: tag.color || '#888888', fontSize: 12 }}>
#{tag.name}
</Text>
</View>
@ -905,7 +888,7 @@ export default function ImageDetailScreen() {
{detailsLoading ? (
<View style={{ width: 30, height: 12, backgroundColor: theme.colors.border, borderRadius: 4 }} />
) : (
<Text variant="bodySmall">{generation?.guidance_scale || '-'}</Text>
<Text variant="bodySmall">{generation?.guidanceScale || '-'}</Text>
)}
</View>
@ -914,18 +897,18 @@ export default function ImageDetailScreen() {
{detailsLoading ? (
<View style={{ width: 40, height: 12, backgroundColor: theme.colors.border, borderRadius: 4 }} />
) : (
<Text variant="bodySmall">{generation?.generation_time_seconds ? `${generation.generation_time_seconds}s` : '-'}</Text>
<Text variant="bodySmall">{generation?.generationTimeSeconds ? `${generation.generationTimeSeconds}s` : '-'}</Text>
)}
</View>
<View style={{ flexDirection: 'row', justifyContent: 'space-between' }}>
<Text variant="bodySmall" color="secondary">Dateigröße</Text>
<Text variant="bodySmall">{formatFileSize(image.file_size)}</Text>
<Text variant="bodySmall">{formatFileSize(image.fileSize)}</Text>
</View>
<View style={{ flexDirection: 'row', justifyContent: 'space-between' }}>
<Text variant="bodySmall" color="secondary">Erstellt</Text>
<Text variant="bodySmall">{formatDate(image.created_at)}</Text>
<Text variant="bodySmall">{formatDate(image.createdAt)}</Text>
</View>
</View>
</View>
@ -947,9 +930,9 @@ export default function ImageDetailScreen() {
)}
{/* Remix Bottom Sheet */}
{image && (
{image && image.publicUrl && (
<RemixBottomSheet
imageUrl={image.public_url}
imageUrl={image.publicUrl}
imageId={image.id}
originalPrompt={image.prompt}
isOpen={showRemixSheet}

View file

@ -17,9 +17,8 @@ import { Ionicons } from '@expo/vector-icons';
import { useSafeAreaInsets } from 'react-native-safe-area-context';
import { Slider } from '~/components/ui/Slider';
import { useAuth } from '~/contexts/AuthContext';
import { supabase } from '~/utils/supabase';
import { useModelSelection } from '~/store/modelStore';
import { generateImage } from '~/services/imageGeneration';
import { generateAndWait } from '~/services/api/generate';
import { useTheme } from '~/contexts/ThemeContext';
const { width: screenWidth } = Dimensions.get('window');
@ -92,65 +91,31 @@ export function RemixBottomSheet({
setIsGenerating(true);
try {
// Create generation record with source image
const { data: generation, error: genError } = await supabase
.from('image_generations')
.insert({
user_id: user.id,
prompt: prompt.trim(),
model: selectedModel.name,
width: 1024,
height: 1024,
steps: selectedModel.default_steps,
guidance_scale: selectedModel.default_guidance_scale,
source_image_url: imageUrl,
generation_strength: strength,
status: 'pending'
})
.select()
.single();
if (genError) throw genError;
// Get session for auth
const { data: { session } } = await supabase.auth.getSession();
if (!session) throw new Error('No active session');
// Call edge function with optional img2img parameters
const requestBody: any = {
// Use Backend API for remix generation
const result = await generateAndWait({
prompt: prompt.trim(),
model_id: selectedModel.replicate_id,
model_version: selectedModel.version,
modelId: selectedModel.id,
width: 1024,
height: 1024,
num_inference_steps: selectedModel.default_steps,
guidance_scale: selectedModel.default_guidance_scale,
generation_id: generation.id,
};
// Add img2img parameters only if model supports it
if (selectedModel.supports_img2img) {
requestBody.source_image_url = imageUrl;
requestBody.strength = strength;
}
const { data, error } = await supabase.functions.invoke('generate-image', {
body: requestBody,
headers: {
'Authorization': `Bearer ${session.access_token}`,
}
steps: selectedModel.defaultSteps || selectedModel.default_steps,
guidanceScale: selectedModel.defaultGuidanceScale || selectedModel.default_guidance_scale,
// Add img2img parameters only if model supports it
sourceImageUrl: selectedModel.supportsImg2Img || selectedModel.supports_img2img ? imageUrl : undefined,
generationStrength: selectedModel.supportsImg2Img || selectedModel.supports_img2img ? strength : undefined,
});
if (error) throw error;
if (result.status === 'failed') {
throw new Error(result.errorMessage || 'Generation failed');
}
Alert.alert(
'Erfolgreich!',
`Remix wurde in ${data.generation_time} Sekunden generiert.`,
'Erfolgreich!',
`Remix wurde in ${result.generationTimeSeconds || 0} Sekunden generiert.`,
[
{
text: 'Anzeigen',
{
text: 'Anzeigen',
onPress: () => {
onSuccess?.(data.image.id);
onSuccess?.(result.image?.id || '');
onClose();
}
},
@ -161,7 +126,7 @@ export function RemixBottomSheet({
} catch (error: any) {
console.error('Remix error:', error);
Alert.alert(
'Fehler',
'Fehler',
error.message || 'Remix konnte nicht erstellt werden'
);
} finally {

View file

@ -1,169 +1,267 @@
import { createContext, useContext, useEffect, useState } from 'react';
import { Session, User } from '@supabase/supabase-js';
import { supabase } from '~/utils/supabase';
import React, { createContext, useContext, useEffect, useState } from 'react';
import { ActivityIndicator, View, Text } from 'react-native';
import * as SecureStore from 'expo-secure-store';
import {
createAuthService,
createTokenManager,
setStorageAdapter,
setDeviceAdapter,
setNetworkAdapter,
type UserData,
} from '@manacore/shared-auth';
import { logger } from '~/utils/logger';
type AuthContextType = {
session: Session | null;
user: User | null;
loading: boolean;
signOut: () => Promise<void>;
// Mana Core Auth URL from environment
const MANA_AUTH_URL = process.env.EXPO_PUBLIC_MANA_CORE_AUTH_URL || 'http://localhost:3001';
// Create SecureStore adapter for React Native
const createSecureStoreAdapter = () => ({
async getItem<T>(key: string): Promise<T | null> {
try {
const value = await SecureStore.getItemAsync(key);
return value ? JSON.parse(value) : null;
} catch {
return null;
}
},
async setItem(key: string, value: unknown): Promise<void> {
await SecureStore.setItemAsync(key, JSON.stringify(value));
},
async removeItem(key: string): Promise<void> {
await SecureStore.deleteItemAsync(key);
},
});
// Create device adapter for React Native
const createReactNativeDeviceAdapter = () => {
let deviceId: string | null = null;
return {
async getDeviceInfo() {
if (!deviceId) {
// Try to get stored device ID
deviceId = await SecureStore.getItemAsync('@device/id');
if (!deviceId) {
// Generate new device ID
deviceId = `rn-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
await SecureStore.setItemAsync('@device/id', deviceId);
}
}
return {
deviceId,
deviceName: 'React Native Device',
deviceType: 'mobile',
platform: 'react-native',
};
},
async getStoredDeviceId() {
return deviceId || (await SecureStore.getItemAsync('@device/id'));
},
};
};
// Create network adapter (basic implementation)
const createReactNativeNetworkAdapter = () => ({
async isDeviceConnected() {
return true; // Always assume connected for now
},
async hasStableConnection() {
return true;
},
});
// Initialize adapters
setStorageAdapter(createSecureStoreAdapter());
setDeviceAdapter(createReactNativeDeviceAdapter());
setNetworkAdapter(createReactNativeNetworkAdapter());
// Create auth service
const authService = createAuthService({ baseUrl: MANA_AUTH_URL });
const tokenManager = createTokenManager(authService);
// Export for use in API client
export { authService, tokenManager };
// Auth context type
type AuthContextType = {
user: UserData | null;
loading: boolean;
signIn: (email: string, password: string) => Promise<{ error: any | null }>;
signUp: (email: string, password: string, username?: string) => Promise<{ error: any | null; data: any | null }>;
signOut: () => Promise<void>;
resetPassword: (email: string) => Promise<{ error: any | null }>;
};
// Create auth context
const AuthContext = createContext<AuthContextType | undefined>(undefined);
// Hook to access auth context
export const useAuth = () => {
const context = useContext(AuthContext);
if (context === undefined) {
// Return safe defaults during initial render before provider is ready
return {
user: null,
loading: true,
signIn: async () => ({ error: null }),
signUp: async () => ({ error: null, data: null }),
signOut: async () => {},
resetPassword: async () => ({ error: null }),
};
}
return context;
};
// AuthProvider component
export function AuthProvider({ children }: { children: React.ReactNode }) {
const [session, setSession] = useState<Session | null>(null);
const [user, setUser] = useState<UserData | null>(null);
const [loading, setLoading] = useState(true);
const ensureProfile = async (user: User) => {
console.log('🔍 Checking if profile exists for:', user.id);
// Test basic Supabase connectivity first
try {
console.log('🧪 Testing Supabase connection...');
const testStart = Date.now();
const { data: testData, error: testError } = await supabase
.from('profiles')
.select('id')
.limit(1);
const testDuration = Date.now() - testStart;
console.log('🧪 Test query completed in', testDuration, 'ms', { hasData: !!testData, error: testError?.message });
} catch (e) {
console.error('🧪 Test query failed:', e);
}
// Check if profile exists
console.log('🔍 Fetching user profile...');
const { data: profile, error } = await supabase
.from('profiles')
.select('id')
.eq('id', user.id)
.single();
console.log('🔍 Profile query done:', { hasProfile: !!profile, error: error?.message });
if (error) {
console.log('⚠️ Profile check error:', error.message, '- Will try to create');
}
// If profile doesn't exist, create it
if (error || !profile) {
console.log(' Creating profile for user:', user.id);
const { error: insertError } = await supabase
.from('profiles')
.insert({
id: user.id,
email: user.email,
username: user.user_metadata?.username || user.email?.split('@')[0] || 'user',
});
if (insertError) {
console.error('❌ Error creating profile:', insertError);
logger.error('Error creating profile:', insertError);
} else {
console.log('✅ Profile created successfully');
}
} else {
console.log('✅ Profile already exists');
}
};
// Initialize auth state
useEffect(() => {
// Get initial session
supabase.auth.getSession().then(async ({ data: { session }, error }) => {
console.log('🔐 Initial session check:', {
hasSession: !!session,
userId: session?.user?.id,
email: session?.user?.email,
error: error?.message,
});
logger.debug('Initial session check:', { hasSession: !!session, error });
setSession(session);
setLoading(false); // Set loading to false IMMEDIATELY - don't wait for ensureProfile
const initialize = async () => {
try {
setLoading(true);
logger.debug('Initializing auth session...');
// Ensure profile in background (don't block)
if (session?.user) {
ensureProfile(session.user).catch(err => {
console.error('Error ensuring profile:', err);
});
}
});
// Check if user is authenticated
const authenticated = await authService.isAuthenticated();
// Listen for auth changes
const { data: { subscription } } = supabase.auth.onAuthStateChange(async (event, session) => {
console.log('🔐 Auth state change:', {
event,
userId: session?.user?.id,
email: session?.user?.email,
});
logger.debug('Auth state change:', event, session?.user?.id);
setSession(session);
setLoading(false); // Set loading to false IMMEDIATELY - don't wait for ensureProfile
// Ensure profile in background (don't block)
if (session?.user) {
ensureProfile(session.user).catch(err => {
console.error('Error ensuring profile:', err);
});
}
});
return () => subscription.unsubscribe();
}, []);
const signOut = async () => {
logger.info('Signing out...');
// Add timeout to prevent hanging
const timeout = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Sign out timeout after 5s')), 5000)
);
try {
const signOutPromise = supabase.auth.signOut();
const { error } = await Promise.race([signOutPromise, timeout]) as any;
if (error) {
logger.error('Error signing out:', error);
throw error;
} else {
logger.success('Successfully signed out');
setSession(null);
if (authenticated) {
const userData = await authService.getUserFromToken();
setUser(userData);
logger.debug('User authenticated:', userData?.id);
} else {
logger.debug('No active session found');
}
} catch (error) {
logger.error('Error initializing auth session:', error);
setUser(null);
} finally {
setLoading(false);
}
};
initialize();
}, []);
// Sign in with email and password
const signIn = async (email: string, password: string) => {
try {
logger.info('Attempting sign in:', email);
const result = await authService.signIn(email, password);
if (!result.success) {
logger.error('Auth error:', result.error);
return { error: { message: result.error } };
}
// Get user data from token
const userData = await authService.getUserFromToken();
setUser(userData);
logger.success('Sign in successful:', userData?.id);
return { error: null };
} catch (error: any) {
// If timeout or error, force logout anyway
logger.error('Sign out failed, forcing local logout:', error);
setSession(null);
setLoading(false);
// Don't throw - we want to force logout even if API fails
logger.error('Unexpected sign in error:', error.message || error);
return { error };
}
};
// Sign up with email and password
const signUp = async (email: string, password: string, username?: string) => {
try {
logger.info('Attempting sign up:', email);
const result = await authService.signUp(email, password);
if (!result.success) {
return { data: null, error: { message: result.error } };
}
// Auto sign in after successful signup
const signInResult = await signIn(email, password);
if (signInResult.error) {
return { data: null, error: signInResult.error };
}
logger.success('Sign up successful');
return { data: user, error: null };
} catch (error) {
logger.error('Sign up error:', error);
return { data: null, error };
}
};
// Sign out
const signOut = async () => {
try {
logger.info('Signing out...');
// Add timeout to prevent hanging
const timeout = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Sign out timeout after 5s')), 5000)
);
try {
const signOutPromise = authService.signOut();
await Promise.race([signOutPromise, timeout]);
logger.success('Successfully signed out');
} catch (error: any) {
logger.error('Sign out failed, forcing local logout:', error);
}
// Always clear local state
setUser(null);
} catch (error) {
logger.error('Sign out error:', error);
setUser(null);
}
};
// Reset password
const resetPassword = async (email: string) => {
try {
logger.info('Requesting password reset for:', email);
const result = await authService.forgotPassword(email);
if (!result.success) {
return { error: { message: result.error } };
}
logger.success('Password reset email sent');
return { error: null };
} catch (error) {
logger.error('Password reset error:', error);
return { error };
}
};
// Show loading indicator during initialization
if (loading) {
return (
<View style={{ flex: 1, justifyContent: 'center', alignItems: 'center', backgroundColor: '#000' }}>
<ActivityIndicator size="large" color="#0A84FF" />
<Text style={{ marginTop: 16, color: '#fff' }}>Authentifizierung wird initialisiert...</Text>
</View>
);
}
// Provide auth context
return (
<AuthContext.Provider
value={{
session,
user: session?.user ?? null,
user,
loading,
signIn,
signUp,
signOut,
resetPassword,
}}
>
{children}
</AuthContext.Provider>
);
}
export const useAuth = () => {
const context = useContext(AuthContext);
if (context === undefined) {
// Return safe defaults during initial render before provider is ready
// This prevents crashes with expo-router's eager rendering
return {
user: null,
loading: true,
signOut: async () => {},
};
}
return context;
};

View file

@ -1,9 +1,9 @@
import { useEffect } from 'react';
import { supabase } from '~/utils/supabase';
import { useTagStore } from '~/store/tagStore';
import { usePagination } from '~/hooks/usePagination';
import { PAGINATION } from '~/constants';
import { ImageItem } from '~/types/gallery';
import { getImages } from '~/services/api/images';
type UseArchiveFetchingProps = {
userId: string | undefined;
@ -30,25 +30,12 @@ export function useArchiveFetching({ userId, onError }: UseArchiveFetchingProps)
pagination.setLoadingMore(true);
}
// Query only archived images (archived_at IS NOT NULL)
let query = supabase
.from('images')
.select('id, public_url, prompt, created_at, is_favorite, model, blurhash, archived_at')
.eq('user_id', userId)
.not('archived_at', 'is', null); // Only archived images
// Apply pagination
const from = pageNum * PAGINATION.GALLERY_PAGE_SIZE;
const to = from + PAGINATION.GALLERY_PAGE_SIZE - 1;
const { data, error } = await query
.order('archived_at', { ascending: false }) // Sort by archive date (newest first)
.range(from, to);
if (error) throw error;
// Fetch tags for all images - PARALLEL for performance
const imageData = data || [];
// Fetch archived images from backend API
const imageData = await getImages({
page: pageNum + 1, // API uses 1-based pagination
limit: PAGINATION.GALLERY_PAGE_SIZE,
archived: true,
});
// Check if there are more images
pagination.setHasMore(imageData.length >= PAGINATION.GALLERY_PAGE_SIZE);
@ -58,10 +45,16 @@ export function useArchiveFetching({ userId, onError }: UseArchiveFetchingProps)
imageData.map(image => fetchImageTags(image.id))
);
// Add tags to images
const imagesWithTags = imageData.map(img => ({
...img,
tags: getImageTags(img.id)
// Map API response to ImageItem format and add tags
const imagesWithTags: ImageItem[] = imageData.map(img => ({
id: img.id,
publicUrl: img.publicUrl || null,
prompt: img.prompt,
createdAt: img.createdAt,
isFavorite: img.isFavorite,
model: img.model,
blurhash: img.blurhash,
tags: getImageTags(img.id),
}));
// Either replace or append images

View file

@ -1,9 +1,9 @@
import { useEffect } from 'react';
import { supabase } from '~/utils/supabase';
import { useTagStore } from '~/store/tagStore';
import { usePagination } from '~/hooks/usePagination';
import { PAGINATION } from '~/constants';
import { ExploreImageItem, SortMode } from '~/types/explore';
import { getExploreImages } from '~/services/api/explore';
type UseExploreFetchingProps = {
userId: string | undefined;
@ -29,49 +29,12 @@ export function useExploreFetching({ userId, sortMode, onError }: UseExploreFetc
pagination.setLoadingMore(true);
}
// Fetch public images with creator info
let query = supabase
.from('images')
.select(`
id,
public_url,
prompt,
created_at,
is_favorite,
user_id,
model,
blurhash,
profiles!images_user_id_fkey (
id,
username,
avatar_url
)
`)
.eq('is_public', true);
// Apply sorting
switch (sortMode) {
case 'recent':
query = query.order('created_at', { ascending: false });
break;
case 'popular':
query = query.order('created_at', { ascending: false });
break;
case 'trending':
query = query.order('created_at', { ascending: false });
break;
}
// Apply pagination
const from = pageNum * PAGINATION.EXPLORE_PAGE_SIZE;
const to = from + PAGINATION.EXPLORE_PAGE_SIZE - 1;
const { data, error } = await query.range(from, to);
if (error) throw error;
// Fetch tags and likes for all images
const imageData = data || [];
// Fetch public images from backend API
const imageData = await getExploreImages({
page: pageNum + 1, // API uses 1-based pagination
limit: PAGINATION.EXPLORE_PAGE_SIZE,
sortBy: sortMode,
});
// Check if there are more images
pagination.setHasMore(imageData.length >= PAGINATION.EXPLORE_PAGE_SIZE);
@ -79,50 +42,28 @@ export function useExploreFetching({ userId, sortMode, onError }: UseExploreFetc
// Batch fetch all tags in parallel
await Promise.all(imageData.map(img => fetchImageTags(img.id)));
// Batch fetch all likes in ONE query
const imageIds = imageData.map(img => img.id);
const [likesCountData, userLikesData] = await Promise.all([
// Get counts for all images at once
supabase
.from('image_likes')
.select('image_id')
.in('image_id', imageIds),
// Get user's likes for all images at once (only if logged in)
userId ? supabase
.from('image_likes')
.select('image_id')
.in('image_id', imageIds)
.eq('user_id', userId) : Promise.resolve({ data: [] })
]);
// Create lookup maps for O(1) access
const likesCountMap = new Map<string, number>();
likesCountData.data?.forEach(like => {
likesCountMap.set(like.image_id, (likesCountMap.get(like.image_id) || 0) + 1);
});
const userLikesSet = new Set(userLikesData.data?.map(like => like.image_id) || []);
// Combine all data
const enhancedImages = imageData.map(img => ({
...img,
// Map API response to ExploreImageItem format
const enhancedImages: ExploreImageItem[] = imageData.map(img => ({
id: img.id,
publicUrl: img.publicUrl || null,
prompt: img.prompt,
createdAt: img.createdAt,
isFavorite: img.isFavorite,
userId: img.userId,
model: img.model,
blurhash: img.blurhash,
tags: getImageTags(img.id),
creator: img.profiles,
likes_count: likesCountMap.get(img.id) || 0,
user_has_liked: userLikesSet.has(img.id)
// TODO: Backend should return creator info and likes
creator: undefined,
likesCount: 0,
userHasLiked: false,
}));
// Sort by likes if popular mode (only for current batch)
let finalImages = enhancedImages;
if (sortMode === 'popular') {
finalImages = [...enhancedImages].sort((a, b) => (b.likes_count || 0) - (a.likes_count || 0));
}
// Either replace or append images
if (append) {
pagination.appendItems(finalImages);
pagination.appendItems(enhancedImages);
} else {
pagination.setItems(finalImages);
pagination.setItems(enhancedImages);
pagination.resetPage();
}
} catch (error) {

View file

@ -1,11 +1,11 @@
import { useEffect } from 'react';
import { Image } from 'expo-image';
import { supabase } from '~/utils/supabase';
import { PAGINATION } from '~/constants';
import { ViewMode } from '~/types/gallery';
import { SortMode } from '~/types/explore';
import { getThumbnailUrl, getSizeForViewMode } from '~/utils/image';
import { PREFETCH_DEBOUNCE_MS } from '~/constants/gallery';
import { getExploreImages } from '~/services/api/explore';
type UseExplorePrefetchProps = {
hasMore: boolean;
@ -40,34 +40,21 @@ export function useExplorePrefetch({
// Get the last few images that would be from the next page
const prefetchCount = Math.min(6, PAGINATION.EXPLORE_PAGE_SIZE);
// Calculate what the next page would be
const nextPageNum = currentPage + 1;
// Calculate what the next page would be (API uses 1-based pagination)
const nextPageNum = currentPage + 2;
let query = supabase
.from('images')
.select('id, public_url')
.eq('is_public', true);
// Apply same sorting as main query
switch (sortMode) {
case 'recent':
case 'popular':
case 'trending':
query = query.order('created_at', { ascending: false });
break;
}
const from = nextPageNum * PAGINATION.EXPLORE_PAGE_SIZE;
const to = from + prefetchCount - 1;
const { data } = await query.range(from, to);
const data = await getExploreImages({
page: nextPageNum,
limit: prefetchCount,
sortBy: sortMode,
});
if (data && data.length > 0) {
// Prefetch thumbnails for next page
const thumbnailSize = getSizeForViewMode(viewMode);
data.forEach(img => {
if (img.public_url) {
const thumbnailUrl = getThumbnailUrl(img.public_url, thumbnailSize);
if (img.publicUrl) {
const thumbnailUrl = getThumbnailUrl(img.publicUrl, thumbnailSize);
if (thumbnailUrl) {
Image.prefetch(thumbnailUrl);
}

View file

@ -1,9 +1,9 @@
import { useEffect } from 'react';
import { supabase } from '~/utils/supabase';
import { useTagStore } from '~/store/tagStore';
import { usePagination } from '~/hooks/usePagination';
import { PAGINATION } from '~/constants';
import { ImageItem, FilterMode } from '~/types/gallery';
import { getImages, toggleFavorite as apiToggleFavorite } from '~/services/api/images';
type UseImageFetchingProps = {
userId: string | undefined;
@ -43,48 +43,17 @@ export function useImageFetching({ userId, filterMode, onError }: UseImageFetchi
pagination.setLoadingMore(true);
}
console.log('🔍 Building Supabase query...');
let query = supabase
.from('images')
.select('id, public_url, prompt, created_at, is_favorite, model, blurhash')
.eq('user_id', userId)
.is('archived_at', null); // Only show non-archived images
console.log('🔍 Fetching images via API...');
// Apply favorite filter if needed
if (filterMode === 'favorites') {
console.log('🔍 Adding favorites filter');
query = query.eq('is_favorite', true);
}
// Fetch images from backend API
const imageData = await getImages({
page: pageNum + 1, // API uses 1-based pagination
limit: PAGINATION.GALLERY_PAGE_SIZE,
archived: false,
favoritesOnly: filterMode === 'favorites',
});
// Apply pagination
const from = pageNum * PAGINATION.GALLERY_PAGE_SIZE;
const to = from + PAGINATION.GALLERY_PAGE_SIZE - 1;
console.log('🔍 Fetching range:', { from, to, pageSize: PAGINATION.GALLERY_PAGE_SIZE });
console.log('⏳ Executing Supabase query...');
// Add timeout to detect hanging queries
const timeout = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Query timeout after 10s')), 10000)
);
const queryPromise = query
.order('created_at', { ascending: false })
.range(from, to);
const { data, error } = await Promise.race([queryPromise, timeout]) as any;
console.log('✅ Supabase query completed');
if (error) {
console.error('❌ Error fetching images from Supabase:', error);
throw error;
}
console.log('✅ Images fetched:', data?.length || 0, 'images');
// Fetch tags for all images - PARALLEL for massive speed boost
const imageData = data || [];
console.log('✅ Images fetched:', imageData.length, 'images');
// Check if there are more images
pagination.setHasMore(imageData.length >= PAGINATION.GALLERY_PAGE_SIZE);
@ -94,10 +63,16 @@ export function useImageFetching({ userId, filterMode, onError }: UseImageFetchi
imageData.map(image => fetchImageTags(image.id))
);
// Add tags to images
const imagesWithTags = imageData.map(img => ({
...img,
tags: getImageTags(img.id)
// Map API response to ImageItem format and add tags
const imagesWithTags: ImageItem[] = imageData.map(img => ({
id: img.id,
publicUrl: img.publicUrl || null,
prompt: img.prompt,
createdAt: img.createdAt,
isFavorite: img.isFavorite,
model: img.model,
blurhash: img.blurhash,
tags: getImageTags(img.id),
}));
// Either replace or append images
@ -133,16 +108,11 @@ export function useImageFetching({ userId, filterMode, onError }: UseImageFetchi
const toggleFavorite = async (imageId: string, currentStatus: boolean) => {
try {
const { error } = await supabase
.from('images')
.update({ is_favorite: !currentStatus })
.eq('id', imageId);
if (!error) {
pagination.setItems(pagination.items.map(img =>
img.id === imageId ? { ...img, is_favorite: !currentStatus } : img
));
}
await apiToggleFavorite(imageId, !currentStatus);
// Update local state
pagination.setItems(pagination.items.map(img =>
img.id === imageId ? { ...img, isFavorite: !currentStatus } : img
));
} catch (error) {
console.error('Error toggling favorite:', error);
onError?.(error as Error);

View file

@ -1,7 +1,7 @@
import { useState, useEffect } from 'react';
import { Alert } from 'react-native';
import { router } from 'expo-router';
import { generateImage } from '~/services/imageGeneration';
import { generateAndWait, type GenerationStatus } from '~/services/api/generate';
import { useAuth } from '~/contexts/AuthContext';
import { useModelSelection } from '~/store/modelStore';
import { useTagStore, Tag } from '~/store/tagStore';
@ -153,14 +153,14 @@ export function useImageGeneration() {
}
try {
// Generate image in background
const result = await generateImage({
// Generate image via Backend API (synchronous mode)
const result = await generateAndWait({
prompt: prompt.trim(),
model_id: selectedModel.id,
modelId: selectedModel.id,
width,
height,
steps,
guidance_scale: guidanceScale,
guidanceScale,
});
// Add tags if needed
@ -169,14 +169,14 @@ export function useImageGeneration() {
}
// Mark as completed with real image data
completeGeneratingImage(tempId, result.image, result.generation_time);
completeGeneratingImage(tempId, result.image, result.generationTimeSeconds || 0);
// Clear form
setPrompt('');
setSelectedTags([]);
// Call success callback with generation time
options?.onSuccess?.(result.generation_time);
options?.onSuccess?.(result.generationTimeSeconds || 0);
} catch (error: any) {
console.error('Generation error:', error);

View file

@ -1,6 +1,6 @@
import { supabase } from '~/utils/supabase';
import { router } from 'expo-router';
import { ExploreImageItem } from '~/types/explore';
import { likeImage, unlikeImage } from '~/services/api/images';
type UseImageLikesProps = {
userId: string | undefined;
@ -17,30 +17,17 @@ export function useImageLikes({ userId, items, setItems, onError }: UseImageLike
}
try {
if (userHasLiked) {
// Unlike
await supabase
.from('image_likes')
.delete()
.eq('image_id', imageId)
.eq('user_id', userId);
} else {
// Like
await supabase
.from('image_likes')
.insert({
image_id: imageId,
user_id: userId
});
}
const result = userHasLiked
? await unlikeImage(imageId)
: await likeImage(imageId);
// Update local state
// Update local state with API response
setItems(items.map(img =>
img.id === imageId
? {
...img,
user_has_liked: !userHasLiked,
likes_count: (img.likes_count || 0) + (userHasLiked ? -1 : 1)
userHasLiked: result.liked,
likesCount: result.likeCount,
}
: img
));

View file

@ -1,10 +1,10 @@
import { useEffect } from 'react';
import { Image } from 'expo-image';
import { supabase } from '~/utils/supabase';
import { PAGINATION } from '~/constants';
import { FilterMode, ViewMode } from '~/types/gallery';
import { getThumbnailUrl, getSizeForViewMode } from '~/utils/image';
import { PREFETCH_DEBOUNCE_MS } from '~/constants/gallery';
import { getImages } from '~/services/api/images';
type UseImagePrefetchProps = {
hasMore: boolean;
@ -41,31 +41,22 @@ export function useImagePrefetch({
// Get the last few images that would be from the next page
const prefetchCount = Math.min(6, PAGINATION.GALLERY_PAGE_SIZE);
// Calculate what the next page would be
const nextPageNum = currentPage + 1;
// Calculate what the next page would be (API uses 1-based pagination)
const nextPageNum = currentPage + 2;
let query = supabase
.from('images')
.select('id, public_url')
.eq('user_id', userId);
if (filterMode === 'favorites') {
query = query.eq('is_favorite', true);
}
const from = nextPageNum * PAGINATION.GALLERY_PAGE_SIZE;
const to = from + prefetchCount - 1;
const { data } = await query
.order('created_at', { ascending: false })
.range(from, to);
const data = await getImages({
page: nextPageNum,
limit: prefetchCount,
archived: false,
favoritesOnly: filterMode === 'favorites',
});
if (data && data.length > 0) {
// Prefetch thumbnails for next page
const thumbnailSize = getSizeForViewMode(viewMode);
data.forEach(img => {
if (img.public_url) {
const thumbnailUrl = getThumbnailUrl(img.public_url, thumbnailSize);
if (img.publicUrl) {
const thumbnailUrl = getThumbnailUrl(img.publicUrl, thumbnailSize);
if (thumbnailUrl) {
Image.prefetch(thumbnailUrl);
}

View file

@ -19,6 +19,7 @@
"dependencies": {
"@callstack/liquid-glass": "^0.4.2",
"@expo/vector-icons": "^15.0.2",
"@manacore/shared-auth": "workspace:*",
"@picture/design-tokens": "workspace:*",
"@picture/shared": "workspace:*",
"@react-native-async-storage/async-storage": "2.2.0",
@ -37,6 +38,7 @@
"expo-location": "~19.0.7",
"expo-media-library": "~18.2.0",
"expo-router": "~6.0.10",
"expo-secure-store": "~15.0.7",
"expo-sharing": "~14.0.7",
"expo-status-bar": "~3.0.8",
"expo-symbols": "^1.0.7",

View file

@ -1,4 +1,18 @@
import { supabase } from '~/utils/supabase';
/**
* Archive Service - Using NestJS Backend API
*/
import {
archiveImage as apiArchiveImage,
restoreImage as apiRestoreImage,
deleteImage as apiDeleteImage,
getArchivedCount as apiGetArchivedCount,
getArchivedImages as apiGetArchivedImages,
batchArchiveImages as apiBatchArchiveImages,
batchRestoreImages as apiBatchRestoreImages,
batchDeleteImages as apiBatchDeleteImages,
type Image,
} from './api/images';
import { logger } from '~/utils/logger';
/**
@ -7,17 +21,7 @@ import { logger } from '~/utils/logger';
export async function archiveImage(imageId: string): Promise<void> {
try {
logger.info('Archiving image:', imageId);
const { error } = await supabase
.from('images')
.update({ archived_at: new Date().toISOString() })
.eq('id', imageId);
if (error) {
logger.error('Failed to archive image:', error);
throw error;
}
await apiArchiveImage(imageId);
logger.success('Image archived successfully');
} catch (error) {
logger.error('Archive error:', error);
@ -31,17 +35,7 @@ export async function archiveImage(imageId: string): Promise<void> {
export async function restoreImage(imageId: string): Promise<void> {
try {
logger.info('Restoring image:', imageId);
const { error } = await supabase
.from('images')
.update({ archived_at: null })
.eq('id', imageId);
if (error) {
logger.error('Failed to restore image:', error);
throw error;
}
await apiRestoreImage(imageId);
logger.success('Image restored successfully');
} catch (error) {
logger.error('Restore error:', error);
@ -50,48 +44,12 @@ export async function restoreImage(imageId: string): Promise<void> {
}
/**
* Delete an archived image permanently (from storage and database)
* Delete an archived image permanently
*/
export async function deleteArchivedImage(imageId: string): Promise<void> {
try {
logger.info('Deleting archived image:', imageId);
// 1. Get image details for storage_path
const { data: image, error: fetchError } = await supabase
.from('images')
.select('storage_path')
.eq('id', imageId)
.single();
if (fetchError) {
logger.error('Failed to fetch image details:', fetchError);
throw fetchError;
}
// 2. Delete from storage if path exists
if (image?.storage_path) {
logger.debug('Deleting from storage:', image.storage_path);
const { error: storageError } = await supabase.storage
.from('generated-images')
.remove([image.storage_path]);
if (storageError) {
logger.warn('Storage deletion failed (file may not exist):', storageError);
// Don't throw - continue with DB deletion
}
}
// 3. Delete from database
const { error: dbError } = await supabase
.from('images')
.delete()
.eq('id', imageId);
if (dbError) {
logger.error('Failed to delete from database:', dbError);
throw dbError;
}
await apiDeleteImage(imageId);
logger.success('Image deleted successfully');
} catch (error) {
logger.error('Delete error:', error);
@ -104,18 +62,8 @@ export async function deleteArchivedImage(imageId: string): Promise<void> {
*/
export async function getArchivedCount(userId: string): Promise<number> {
try {
const { count, error } = await supabase
.from('images')
.select('*', { count: 'exact', head: true })
.eq('user_id', userId)
.not('archived_at', 'is', null);
if (error) {
logger.error('Failed to get archived count:', error);
return 0;
}
return count || 0;
// Note: userId is no longer needed as the backend uses the JWT token
return await apiGetArchivedCount();
} catch (error) {
logger.error('Count error:', error);
return 0;
@ -127,27 +75,9 @@ export async function getArchivedCount(userId: string): Promise<number> {
*/
export async function getArchivedImages(userId: string, page: number = 0, limit: number = 20) {
try {
const from = page * limit;
const to = from + limit - 1;
const { data, error, count } = await supabase
.from('images')
.select('*', { count: 'exact' })
.eq('user_id', userId)
.not('archived_at', 'is', null)
.order('archived_at', { ascending: false })
.range(from, to);
if (error) {
logger.error('Failed to fetch archived images:', error);
throw error;
}
return {
items: data || [],
total: count || 0,
hasMore: count ? to < count - 1 : false,
};
// Note: userId is no longer needed as the backend uses the JWT token
// API uses 1-based pagination, so add 1 to page
return await apiGetArchivedImages(page + 1, limit);
} catch (error) {
logger.error('Fetch archived images error:', error);
throw error;
@ -160,17 +90,7 @@ export async function getArchivedImages(userId: string, page: number = 0, limit:
export async function batchArchiveImages(imageIds: string[]): Promise<void> {
try {
logger.info('Batch archiving images:', imageIds.length);
const { error } = await supabase
.from('images')
.update({ archived_at: new Date().toISOString() })
.in('id', imageIds);
if (error) {
logger.error('Failed to batch archive:', error);
throw error;
}
await apiBatchArchiveImages(imageIds);
logger.success('Batch archive successful');
} catch (error) {
logger.error('Batch archive error:', error);
@ -184,17 +104,7 @@ export async function batchArchiveImages(imageIds: string[]): Promise<void> {
export async function batchRestoreImages(imageIds: string[]): Promise<void> {
try {
logger.info('Batch restoring images:', imageIds.length);
const { error } = await supabase
.from('images')
.update({ archived_at: null })
.in('id', imageIds);
if (error) {
logger.error('Failed to batch restore:', error);
throw error;
}
await apiBatchRestoreImages(imageIds);
logger.success('Batch restore successful');
} catch (error) {
logger.error('Batch restore error:', error);
@ -208,43 +118,7 @@ export async function batchRestoreImages(imageIds: string[]): Promise<void> {
export async function batchDeleteArchivedImages(imageIds: string[]): Promise<void> {
try {
logger.info('Batch deleting images:', imageIds.length);
// 1. Get all storage paths
const { data: images, error: fetchError } = await supabase
.from('images')
.select('storage_path')
.in('id', imageIds);
if (fetchError) {
logger.error('Failed to fetch images for batch delete:', fetchError);
throw fetchError;
}
// 2. Delete from storage
const storagePaths = images?.map(img => img.storage_path).filter(Boolean) || [];
if (storagePaths.length > 0) {
logger.debug('Deleting from storage:', storagePaths.length, 'files');
const { error: storageError } = await supabase.storage
.from('generated-images')
.remove(storagePaths);
if (storageError) {
logger.warn('Some storage deletions failed:', storageError);
// Don't throw - continue with DB deletion
}
}
// 3. Delete from database
const { error: dbError } = await supabase
.from('images')
.delete()
.in('id', imageIds);
if (dbError) {
logger.error('Failed to batch delete from database:', dbError);
throw dbError;
}
await apiBatchDeleteImages(imageIds);
logger.success('Batch delete successful');
} catch (error) {
logger.error('Batch delete error:', error);

View file

@ -1,146 +0,0 @@
import { supabase } from '~/utils/supabase';
import { getModelById } from './models';
import { logger, networkLogger } from '~/utils/logger';
export interface GenerationParams {
prompt: string;
model_id: string;
width?: number;
height?: number;
steps?: number;
guidance_scale?: number;
style?: string;
}
export async function generateImage(params: GenerationParams) {
try {
logger.info('=== Starting Image Generation ===');
logger.debug('Parameters:', params);
// Get current user
const { data: { user }, error: userError } = await supabase.auth.getUser();
if (userError || !user) {
logger.error('User authentication failed:', userError);
throw new Error('User not authenticated');
}
logger.debug('User authenticated:', user.id);
// Get model configuration
const model = await getModelById(params.model_id);
if (!model) {
throw new Error('Invalid model selected');
}
logger.debug('Using model:', model.name, model.replicate_id);
// Create generation record
logger.debug('Creating generation record...');
const { data: generation, error: generationError } = await supabase
.from('image_generations')
.insert({
user_id: user.id,
prompt: params.prompt,
negative_prompt: null,
model: model.name,
style: params.style || null,
width: params.width || model.default_width,
height: params.height || model.default_height,
steps: params.steps || model.default_steps,
guidance_scale: params.guidance_scale || model.default_guidance_scale,
status: 'pending'
})
.select()
.single();
if (generationError) {
logger.error('Failed to create generation record:', generationError);
throw generationError;
}
logger.info('Generation record created:', generation.id);
// No need to manually get session - supabase.functions.invoke() handles it
logger.debug('Calling edge function...');
networkLogger.request('generate-image', 'POST', {
generation_id: generation.id,
model: model.replicate_id,
});
// Call Edge Function to generate image
// Explicitly pass the authorization header
const requestBody = {
prompt: params.prompt,
negative_prompt: undefined,
model_id: model.replicate_id,
model_version: model.version,
width: params.width || model.default_width,
height: params.height || model.default_height,
num_inference_steps: params.steps || model.default_steps,
guidance_scale: params.guidance_scale || model.default_guidance_scale,
generation_id: generation.id
};
logger.debug('Request body:', requestBody);
// Use supabase.functions.invoke which handles auth properly
// It automatically includes both the apikey and Authorization headers
const { data, error } = await supabase.functions.invoke('generate-image', {
body: requestBody,
});
if (error) {
logger.error('Edge function error:', error);
networkLogger.error('generate-image', error);
// Update generation status to failed
await supabase
.from('image_generations')
.update({
status: 'failed',
error_message: error.message
})
.eq('id', generation.id);
throw error;
}
networkLogger.response('generate-image', 200, data);
logger.success('Image generation successful');
return data;
} catch (error: any) {
logger.error('Generation error:', error);
logger.debug('Error stack:', error.stack);
throw error;
}
}
export async function getGenerationStatus(generationId: string) {
const { data, error } = await supabase
.from('image_generations')
.select('*')
.eq('id', generationId)
.single();
if (error) {
throw error;
}
return data;
}
export async function getUserImages(userId: string) {
const { data, error } = await supabase
.from('images')
.select('*')
.eq('user_id', userId)
.order('created_at', { ascending: false });
if (error) {
throw error;
}
return data;
}

View file

@ -1,12 +1,12 @@
/**
* Async Image Generation Service for Mobile App
*
* This service provides React hooks for async image generation using the job queue system.
* This service provides React hooks for async image generation using the NestJS backend.
* It handles:
* - Starting image generation via start-generation Edge Function
* - Subscribing to real-time updates via Supabase Realtime
* - Starting image generation via Backend API
* - Polling for status updates
* - Managing loading states and progress
* - Error handling and retries
* - Error handling
*
* Usage:
* ```tsx
@ -14,40 +14,39 @@
*
* await generate({
* prompt: 'A beautiful sunset',
* model_id: 'black-forest-labs/flux-dev'
* modelId: 'uuid-of-model'
* });
* ```
*/
import { useState, useEffect, useCallback, useRef } from 'react';
import { supabase } from '~/utils/supabase';
import { useState, useCallback, useRef, useEffect } from 'react';
import {
startImageGeneration,
subscribeToGeneration,
type GenerateImageJobParams
} from '@picture/shared/queue';
generateImage,
checkGenerationStatus,
cancelGeneration,
type GenerateImageParams,
type GenerationStatus,
} from './api';
import { logger } from '~/utils/logger';
// ============================================================================
// TYPES
// ============================================================================
export type GenerationStatus = 'idle' | 'queued' | 'processing' | 'downloading' | 'completed' | 'failed';
export type GenerationStatusType = 'idle' | 'pending' | 'queued' | 'processing' | 'completed' | 'failed' | 'cancelled';
export interface GenerationResult {
generationId: string;
jobId: string;
imageUrl: string | null;
status: GenerationStatus;
status: GenerationStatusType;
}
export interface GenerationState {
status: GenerationStatus;
status: GenerationStatusType;
progress: number; // 0-100
imageUrl: string | null;
error: string | null;
generationId: string | null;
jobId: string | null;
}
// ============================================================================
@ -55,7 +54,7 @@ export interface GenerationState {
// ============================================================================
/**
* React hook for async image generation with real-time updates
* React hook for async image generation with polling updates
*
* @example
* ```tsx
@ -66,7 +65,7 @@ export interface GenerationState {
* try {
* await generate({
* prompt: 'A beautiful sunset over mountains',
* model_id: 'black-forest-labs/flux-dev',
* modelId: 'your-model-uuid',
* width: 1024,
* height: 1024
* });
@ -102,17 +101,19 @@ export function useImageGeneration() {
imageUrl: null,
error: null,
generationId: null,
jobId: null
});
const unsubscribeRef = useRef<(() => void) | null>(null);
const pollingRef = useRef<NodeJS.Timeout | null>(null);
const mountedRef = useRef(true);
// Cleanup subscription on unmount
// Cleanup on unmount
useEffect(() => {
mountedRef.current = true;
return () => {
if (unsubscribeRef.current) {
unsubscribeRef.current();
unsubscribeRef.current = null;
mountedRef.current = false;
if (pollingRef.current) {
clearInterval(pollingRef.current);
pollingRef.current = null;
}
};
}, []);
@ -120,133 +121,142 @@ export function useImageGeneration() {
/**
* Calculate progress based on status
*/
const calculateProgress = (status: GenerationStatus): number => {
const calculateProgress = (status: GenerationStatusType): number => {
switch (status) {
case 'idle': return 0;
case 'queued': return 10;
case 'processing': return 50;
case 'downloading': return 80;
case 'completed': return 100;
case 'failed': return 0;
default: return 0;
case 'idle':
return 0;
case 'pending':
return 5;
case 'queued':
return 10;
case 'processing':
return 50;
case 'completed':
return 100;
case 'failed':
case 'cancelled':
return 0;
default:
return 0;
}
};
/**
* Poll for generation status
*/
const startPolling = useCallback((generationId: string) => {
if (pollingRef.current) {
clearInterval(pollingRef.current);
}
pollingRef.current = setInterval(async () => {
if (!mountedRef.current) {
if (pollingRef.current) {
clearInterval(pollingRef.current);
pollingRef.current = null;
}
return;
}
try {
const status = await checkGenerationStatus(generationId);
logger.debug('Generation status update:', status);
const statusType = status.status as GenerationStatusType;
const progress = calculateProgress(statusType);
setState((prev) => ({
...prev,
status: statusType,
progress,
error: status.errorMessage || null,
imageUrl: status.image?.publicUrl || null,
}));
// Stop polling when done
if (
statusType === 'completed' ||
statusType === 'failed' ||
statusType === 'cancelled'
) {
if (pollingRef.current) {
clearInterval(pollingRef.current);
pollingRef.current = null;
}
}
} catch (error: any) {
logger.error('Error polling generation status:', error);
}
}, 2000);
}, []);
/**
* Start image generation
*/
const generate = useCallback(async (params: GenerateImageJobParams) => {
try {
logger.info('Starting image generation...');
logger.debug('Parameters:', params);
const generate = useCallback(
async (params: Omit<GenerateImageParams, 'waitForResult'>) => {
try {
logger.info('Starting image generation...');
logger.debug('Parameters:', params);
// Reset state
setState({
status: 'queued',
progress: 10,
imageUrl: null,
error: null,
generationId: null,
jobId: null
});
// Reset state
setState({
status: 'pending',
progress: 5,
imageUrl: null,
error: null,
generationId: null,
});
// Start generation via Edge Function
const { generationId, jobId } = await startImageGeneration(supabase, params);
// Start generation via Backend API
// Use async mode (waitForResult: false) for mobile to show progress
const response = await generateImage({ ...params, waitForResult: false });
logger.info('Generation started:', { generationId, jobId });
logger.info('Generation started:', { generationId: response.generationId });
// Update state with IDs
setState(prev => ({
...prev,
generationId,
jobId,
status: 'queued',
progress: 10
}));
// Subscribe to real-time updates
const unsubscribe = subscribeToGeneration(supabase, generationId, (generation) => {
logger.debug('Generation update:', generation);
const status = generation.status as GenerationStatus;
const progress = calculateProgress(status);
setState(prev => ({
// Update state with generation ID
setState((prev) => ({
...prev,
status,
progress,
error: generation.error_message || null
generationId: response.generationId,
status: response.status as GenerationStatusType,
progress: calculateProgress(response.status as GenerationStatusType),
}));
// If completed, fetch the image
if (status === 'completed') {
fetchCompletedImage(generationId);
// If already completed (sync mode), we're done
if (response.status === 'completed' && response.image) {
setState((prev) => ({
...prev,
status: 'completed',
progress: 100,
imageUrl: response.image?.publicUrl || null,
}));
return;
}
// Cleanup subscription when done
if (status === 'completed' || status === 'failed') {
if (unsubscribeRef.current) {
unsubscribeRef.current();
unsubscribeRef.current = null;
}
}
});
unsubscribeRef.current = unsubscribe;
} catch (error: any) {
logger.error('Generation error:', error);
setState(prev => ({
...prev,
status: 'failed',
error: error.message || 'Failed to start generation',
progress: 0
}));
throw error;
}
}, []);
/**
* Fetch completed image from database
*/
const fetchCompletedImage = async (generationId: string) => {
try {
const { data: image, error } = await supabase
.from('images')
.select('public_url')
.eq('generation_id', generationId)
.single();
if (error) {
// Start polling for status updates
startPolling(response.generationId);
} catch (error: any) {
logger.error('Generation error:', error);
setState((prev) => ({
...prev,
status: 'failed',
error: error.message || 'Failed to start generation',
progress: 0,
}));
throw error;
}
if (image?.public_url) {
setState(prev => ({
...prev,
imageUrl: image.public_url,
status: 'completed',
progress: 100
}));
}
} catch (error: any) {
logger.error('Failed to fetch completed image:', error);
setState(prev => ({
...prev,
error: 'Image generated but failed to retrieve URL',
status: 'failed'
}));
}
};
},
[startPolling],
);
/**
* Reset state to idle
*/
const reset = useCallback(() => {
// Cleanup subscription
if (unsubscribeRef.current) {
unsubscribeRef.current();
unsubscribeRef.current = null;
// Stop polling
if (pollingRef.current) {
clearInterval(pollingRef.current);
pollingRef.current = null;
}
setState({
@ -255,7 +265,6 @@ export function useImageGeneration() {
imageUrl: null,
error: null,
generationId: null,
jobId: null
});
}, []);
@ -268,12 +277,7 @@ export function useImageGeneration() {
}
try {
// Update generation to cancelled status
await supabase
.from('image_generations')
.update({ status: 'failed', error_message: 'Cancelled by user' })
.eq('id', state.generationId);
await cancelGeneration(state.generationId);
reset();
} catch (error: any) {
logger.error('Failed to cancel generation:', error);
@ -286,11 +290,14 @@ export function useImageGeneration() {
imageUrl: state.imageUrl,
error: state.error,
generationId: state.generationId,
jobId: state.jobId,
generate,
reset,
cancel,
isGenerating: state.status !== 'idle' && state.status !== 'completed' && state.status !== 'failed'
isGenerating:
state.status !== 'idle' &&
state.status !== 'completed' &&
state.status !== 'failed' &&
state.status !== 'cancelled',
};
}
@ -299,53 +306,19 @@ export function useImageGeneration() {
// ============================================================================
/**
* React hook for fetching user's generation history with real-time updates
*
* @example
* ```tsx
* function HistoryScreen() {
* const { generations, loading, error, refresh } = useGenerationHistory();
*
* return (
* <FlatList
* data={generations}
* refreshing={loading}
* onRefresh={refresh}
* renderItem={({ item }) => (
* <GenerationCard generation={item} />
* )}
* />
* );
* }
* ```
* React hook for fetching user's generation history
* Note: This hook now uses the backend API instead of Supabase Realtime
*/
export function useGenerationHistory(limit: number = 20) {
const [generations, setGenerations] = useState<any[]>([]);
const [generations, setGenerations] = useState<GenerationStatus[]>([]);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const fetchGenerations = async () => {
try {
setLoading(true);
setError(null);
const { data, error: fetchError } = await supabase
.from('image_generations')
.select('*, images(*)')
.order('created_at', { ascending: false })
.limit(limit);
if (fetchError) {
throw fetchError;
}
setGenerations(data || []);
} catch (err: any) {
logger.error('Failed to fetch generations:', err);
setError(err.message);
} finally {
setLoading(false);
}
// TODO: Add /generate/history endpoint to backend
// For now, return empty array
setLoading(false);
setGenerations([]);
};
// Initial fetch
@ -353,48 +326,11 @@ export function useGenerationHistory(limit: number = 20) {
fetchGenerations();
}, [limit]);
// Subscribe to new generations
useEffect(() => {
const channel = supabase
.channel('user-generations')
.on(
'postgres_changes',
{
event: 'INSERT',
schema: 'public',
table: 'image_generations'
},
(payload) => {
logger.debug('New generation:', payload.new);
setGenerations(prev => [payload.new, ...prev]);
}
)
.on(
'postgres_changes',
{
event: 'UPDATE',
schema: 'public',
table: 'image_generations'
},
(payload) => {
logger.debug('Generation updated:', payload.new);
setGenerations(prev =>
prev.map(gen => (gen.id === payload.new.id ? payload.new : gen))
);
}
)
.subscribe();
return () => {
channel.unsubscribe();
};
}, []);
return {
generations,
loading,
error,
refresh: fetchGenerations
refresh: fetchGenerations,
};
}
@ -405,58 +341,14 @@ export function useGenerationHistory(limit: number = 20) {
/**
* Get generation status with details
*/
export async function getGenerationStatus(generationId: string) {
const { data, error } = await supabase
.from('image_generations')
.select('*')
.eq('id', generationId)
.single();
if (error) {
throw error;
}
return data;
export async function getGenerationStatus(generationId: string): Promise<GenerationStatus> {
return checkGenerationStatus(generationId);
}
/**
* Get completed image for a generation
*/
export async function getGenerationImage(generationId: string) {
const { data, error } = await supabase
.from('images')
.select('*')
.eq('generation_id', generationId)
.single();
if (error) {
throw error;
}
return data;
}
/**
* Check if user can generate (rate limiting)
*/
export async function checkCanGenerate(userId: string): Promise<{ canGenerate: boolean; reason?: string }> {
try {
const { data, error } = await supabase.rpc('get_user_limits', {
p_user_id: userId
});
if (error) {
throw error;
}
return {
canGenerate: data.can_generate,
reason: data.limit_reason
};
} catch (error: any) {
logger.error('Failed to check rate limit:', error);
return {
canGenerate: true // Fail open
};
}
const status = await checkGenerationStatus(generationId);
return status.image || null;
}

View file

@ -1,80 +1,25 @@
import { supabase } from '~/utils/supabase';
/**
* Models Service for Mobile App
*
* Uses the NestJS backend API instead of direct Supabase calls.
*/
export interface Model {
id: string;
name: string;
display_name: string;
replicate_id: string;
version?: string;
description?: string;
default_steps: number;
default_guidance_scale: number;
default_width: number;
default_height: number;
supports_negative_prompt: boolean;
supports_seed: boolean;
supports_image_to_image: boolean;
min_width: number;
max_width: number;
min_height: number;
max_height: number;
max_steps: number;
estimated_time_seconds: number;
cost_per_generation: number;
is_active: boolean;
is_default: boolean;
sort_order: number;
supported_aspect_ratios?: string[];
import { getActiveModels as fetchModels, getModelById as fetchModelById, type Model } from './api/models';
export type { Model };
export async function getActiveModels(): Promise<Model[]> {
console.log('Fetching models from backend...');
const models = await fetchModels();
console.log(`Fetched ${models.length} models from backend`);
return models;
}
export async function getActiveModels() {
console.log('🔍 Fetching models from Supabase...');
const { data, error } = await supabase
.from('models')
.select('*')
.eq('is_active', true)
.order('sort_order', { ascending: true });
if (error) {
console.error('❌ Error fetching models:', error);
throw error;
}
console.log(`📊 Fetched ${data?.length || 0} models from database`);
// Return empty array if data is null or undefined
return (data || []) as Model[];
export async function getDefaultModel(): Promise<Model | null> {
const models = await getActiveModels();
return models.find((m) => m.isDefault) || models[0] || null;
}
export async function getDefaultModel() {
const { data, error } = await supabase
.from('models')
.select('*')
.eq('is_active', true)
.eq('is_default', true)
.single();
if (error) {
console.error('Error fetching default model:', error);
throw error;
}
return data as Model;
export async function getModelById(id: string): Promise<Model> {
return fetchModelById(id);
}
export async function getModelById(id: string) {
const { data, error } = await supabase
.from('models')
.select('*')
.eq('id', id)
.eq('is_active', true)
.single();
if (error) {
console.error('Error fetching model:', error);
throw error;
}
return data as Model;
}

View file

@ -1,18 +1,22 @@
import { create } from 'zustand';
import { supabase } from '~/utils/supabase';
import {
getTags as apiGetTags,
createTag as apiCreateTag,
updateTag as apiUpdateTag,
deleteTag as apiDeleteTag,
getImageTags as apiGetImageTags,
addTagToImage as apiAddTagToImage,
removeTagFromImage as apiRemoveTagFromImage,
type Tag,
} from '~/services/api/tags';
export interface Tag {
id: string;
name: string;
color: string | null;
created_at: string;
}
export type { Tag };
export interface ImageTag {
id: string;
image_id: string;
tag_id: string;
added_at: string;
imageId: string;
tagId: string;
addedAt: string;
tag?: Tag;
}
@ -29,16 +33,16 @@ interface TagStore {
createTag: (name: string, color?: string) => Promise<Tag | null>;
updateTag: (id: string, updates: Partial<Tag>) => Promise<boolean>;
deleteTag: (id: string) => Promise<boolean>;
// Actions - Image Tags
fetchImageTags: (imageId: string) => Promise<void>;
addTagsToImage: (imageId: string, tagIds: string[]) => Promise<boolean>;
removeTagFromImage: (imageId: string, tagId: string) => Promise<boolean>;
// Actions - Filtering
toggleTagFilter: (tagId: string) => void;
clearTagFilters: () => void;
// Helpers
getTagByName: (name: string) => Tag | undefined;
getImageTags: (imageId: string) => Tag[];
@ -56,13 +60,8 @@ export const useTagStore = create<TagStore>((set, get) => ({
fetchTags: async () => {
set({ isLoading: true, error: null });
try {
const { data, error } = await supabase
.from('tags')
.select('*')
.order('name');
if (error) throw error;
set({ tags: data || [], isLoading: false });
const tags = await apiGetTags();
set({ tags, isLoading: false });
} catch (error: any) {
set({ error: error.message, isLoading: false });
}
@ -78,32 +77,19 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Generate random color if not provided
const tagColor = color || `#${Math.floor(Math.random()*16777215).toString(16).padStart(6, '0')}`;
const { data, error } = await supabase
.from('tags')
.insert({ name: name.toLowerCase().trim(), color: tagColor })
.select()
.single();
const newTag = await apiCreateTag({
name: name.toLowerCase().trim(),
color: tagColor,
});
if (error) {
// If unique constraint error, fetch existing tag
if (error.code === '23505') {
const { data: existingTag } = await supabase
.from('tags')
.select('*')
.eq('name', name.toLowerCase().trim())
.single();
if (existingTag) {
set(state => ({ tags: [...state.tags.filter(t => t.id !== existingTag.id), existingTag] }));
return existingTag;
}
}
throw error;
}
set(state => ({ tags: [...state.tags, data] }));
return data;
set(state => ({ tags: [...state.tags, newTag] }));
return newTag;
} catch (error: any) {
// If already exists error, re-fetch tags
if (error.message?.includes('already exists')) {
await get().fetchTags();
return get().getTagByName(name) || null;
}
set({ error: error.message });
return null;
}
@ -112,16 +98,16 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Update tag
updateTag: async (id: string, updates: Partial<Tag>) => {
try {
const { error } = await supabase
.from('tags')
.update(updates)
.eq('id', id);
if (error) throw error;
// Only pass name and color to API (convert null to undefined)
const updateData = {
name: updates.name,
color: updates.color ?? undefined,
};
const updatedTag = await apiUpdateTag(id, updateData);
set(state => ({
tags: state.tags.map(tag =>
tag.id === id ? { ...tag, ...updates } : tag
tags: state.tags.map(tag =>
tag.id === id ? updatedTag : tag
)
}));
return true;
@ -134,12 +120,7 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Delete tag
deleteTag: async (id: string) => {
try {
const { error } = await supabase
.from('tags')
.delete()
.eq('id', id);
if (error) throw error;
await apiDeleteTag(id);
set(state => ({
tags: state.tags.filter(tag => tag.id !== id),
@ -161,18 +142,8 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Fetch tags for specific image
fetchImageTags: async (imageId: string) => {
try {
const { data, error } = await supabase
.from('image_tags')
.select(`
*,
tag:tags(*)
`)
.eq('image_id', imageId);
const tags = await apiGetImageTags(imageId);
if (error) throw error;
const tags = data?.map(it => it.tag).filter(Boolean) as Tag[] || [];
set(state => {
const newImageTags = new Map(state.imageTags);
newImageTags.set(imageId, tags);
@ -186,24 +157,17 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Add tags to image
addTagsToImage: async (imageId: string, tagIds: string[]) => {
try {
// Prepare insert data
const imageTagsData = tagIds.map(tagId => ({
image_id: imageId,
tag_id: tagId
}));
const { error } = await supabase
.from('image_tags')
.insert(imageTagsData);
if (error) throw error;
// Add tags sequentially (API doesn't support batch)
await Promise.all(
tagIds.map(tagId => apiAddTagToImage(imageId, tagId))
);
// Update local state
const allTags = get().tags;
const newTags = tagIds
.map(id => allTags.find(t => t.id === id))
.filter(Boolean) as Tag[];
set(state => {
const newImageTags = new Map(state.imageTags);
const currentTags = newImageTags.get(imageId) || [];
@ -224,13 +188,7 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Remove tag from image
removeTagFromImage: async (imageId: string, tagId: string) => {
try {
const { error } = await supabase
.from('image_tags')
.delete()
.eq('image_id', imageId)
.eq('tag_id', tagId);
if (error) throw error;
await apiRemoveTagFromImage(imageId, tagId);
set(state => {
const newImageTags = new Map(state.imageTags);
@ -262,7 +220,7 @@ export const useTagStore = create<TagStore>((set, get) => ({
// Get tag by name
getTagByName: (name: string) => {
return get().tags.find(tag =>
return get().tags.find(tag =>
tag.name.toLowerCase() === name.toLowerCase().trim()
);
},

View file

@ -1 +0,0 @@
v2.48.3

View file

@ -1 +0,0 @@
v2.179.0

View file

@ -1 +0,0 @@
postgresql://postgres.mjuvnnjxwfwlmxjsgkqu:[YOUR-PASSWORD]@aws-1-eu-central-1.pooler.supabase.com:6543/postgres

View file

@ -1 +0,0 @@
17.4.1.074

View file

@ -1 +0,0 @@
mjuvnnjxwfwlmxjsgkqu

View file

@ -1 +0,0 @@
v13.0.4

View file

@ -1 +0,0 @@
fix-object-level

File diff suppressed because it is too large Load diff

View file

@ -1,154 +0,0 @@
# A string used to distinguish different Supabase projects on the same host. Defaults to the
# working directory name when running `supabase init`.
project_id = "picture"
[api]
# Port to use for the API URL.
port = 54321
# Schemas to expose in your API. Tables, views and stored procedures in this schema will get API
# endpoints. public and storage are always included.
schemas = ["public", "storage", "graphql_public"]
# Extra schemas to add to the search_path of every request. public is always included.
extra_search_path = ["public", "extensions"]
# The maximum number of rows returns from a view, table, or stored procedure. Limits payload size
# for accidental or malicious requests.
max_rows = 1000
[db]
# Port to use for the local database URL.
port = 54322
# Port used by db diff command to initialize the shadow database.
shadow_port = 54320
# The database major version to use. This has to be the same as your remote database's. Run `SHOW
# server_version;` on the remote database to check.
major_version = 15
[db.pooler]
enabled = false
# Port to use for the local connection pooler.
port = 54329
# Specifies when a server connection can be reused by other clients.
# Configure one of the supported pooler modes: `transaction`, `session`.
pool_mode = "transaction"
# How many server connections to allow per user/database pair.
default_pool_size = 20
# Maximum number of client connections allowed.
max_client_conn = 100
[realtime]
# Enable the RLS connection modifier
enabled = true
[studio]
enabled = true
# Port to use for Supabase Studio.
port = 54323
# External URL of the API server that frontend connects to.
api_url = "http://127.0.0.1"
# OpenAI API Key to use for Supabase AI in the Supabase Studio.
openai_api_key = "env(OPENAI_API_KEY)"
# Email testing server. Emails sent with the local dev setup are not actually sent - rather, they
# are monitored, and you can view the emails that would have been sent from the web interface.
[inbucket]
enabled = true
# Port to use for the email testing server web interface.
port = 54324
# Uncomment to expose additional ports for testing user applications that send emails.
# smtp_port = 54325
# pop3_port = 54326
[storage]
enabled = true
# The maximum file size allowed (e.g. "5MB", "500KB").
file_size_limit = "50MiB"
[storage.image_transformation]
enabled = true
[auth]
enabled = true
# The base URL of your website. Used as an allow-list for redirects and for constructing URLs used
# in emails.
site_url = "http://127.0.0.1:3000"
# A list of *exact* URLs that auth providers are permitted to redirect to post authentication.
additional_redirect_urls = ["https://127.0.0.1:3000"]
# How long tokens are valid for, in seconds. Defaults to 3600 (1 hour), maximum 604,800 (1 week).
jwt_expiry = 3600
# If disabled, the refresh token will never expire.
enable_refresh_token_rotation = true
# Allows refresh tokens to be reused after expiry, up to the specified interval in seconds.
# Requires enable_refresh_token_rotation = true.
refresh_token_reuse_interval = 10
# Allow/disallow new user signups to your project.
enable_signup = true
# Allow/disallow anonymous sign-ins to your project.
enable_anonymous_sign_ins = false
[auth.email]
# Allow/disallow new user signups via email to your project.
enable_signup = true
# If enabled, a user will be required to confirm any email change on both the old, and new email
# addresses. If disabled, only the new email is required to confirm.
double_confirm_changes = true
# If enabled, users need to confirm their email address before signing in.
enable_confirmations = false
[auth.sms]
# Allow/disallow new user signups via SMS to your project.
enable_signup = true
# If enabled, users need to confirm their phone number before signing in.
enable_confirmations = false
# Template for sending OTP to users
template = "Your code is {{ .Code }} ."
# Use pre-defined map of phone number to OTP for testing.
[auth.sms.test_otp]
# 4152127777 = "123456"
# Configure one of the supported SMS providers: `twilio`, `twilio_verify`, `messagebird`, `textlocal`, `vonage`.
[auth.sms.twilio]
enabled = false
account_sid = ""
message_service_sid = ""
# DO NOT commit your Twilio auth token to git. Use environment variable substitution instead:
auth_token = "env(SUPABASE_AUTH_SMS_TWILIO_AUTH_TOKEN)"
# Use an external OAuth provider. The full list of providers are: `apple`, `azure`, `bitbucket`,
# `discord`, `facebook`, `github`, `gitlab`, `google`, `keycloak`, `linkedin_oidc`, `notion`, `twitch`,
# `twitter`, `slack`, `spotify`, `workos`, `zoom`.
[auth.external.apple]
enabled = false
client_id = ""
# DO NOT commit your OAuth provider secret to git. Use environment variable substitution instead:
secret = "env(SUPABASE_AUTH_EXTERNAL_APPLE_SECRET)"
# Overrides the default auth redirectUrl.
redirect_uri = ""
# Overrides the default auth provider URL. Used to support self-hosted gitlab, single-tenant Azure,
# or any other third-party OIDC providers.
url = ""
[analytics]
enabled = false
port = 54327
# Configure one of the supported backends: `postgres`, `bigquery`.
backend = "postgres"
# Experimental features may be deprecated any time
[experimental]
# Configures Postgres storage engine to use OrioleDB (S3)
orioledb_version = ""
# Configures S3 bucket URL, see https://supabase.com/docs/guides/database/orioledb
s3_host = "env(S3_HOST)"
s3_region = "env(S3_REGION)"
s3_access_key = "env(S3_ACCESS_KEY)"
s3_secret_key = "env(S3_SECRET_KEY)"
[edge_runtime]
enabled = true
# Policy for handling Edge Function runtime workload isolation
policy = "oneshot"
# Edge Function specific configuration
[functions.generate-image]
verify_jwt = false

View file

@ -1,5 +0,0 @@
# Edge Functions Environment Variables
# Copy this file to .env and fill in your values
# Get your Replicate API key from https://replicate.com/account/api-tokens
REPLICATE_API_KEY=your_replicate_api_key_here

View file

@ -1,6 +0,0 @@
# Never commit environment variables
.env
.env.local
# Deno
deno.lock

View file

@ -1,692 +0,0 @@
# Image Generation System Architecture
## Overview
This is a **refactored asynchronous image generation system** that uses a job queue pattern to handle image generation via Replicate API. The system is designed to be scalable, reliable, and maintainable.
## System Components
```
┌─────────────────────────────────────────────────────────────────────┐
│ CLIENT (Mobile/Web) │
└────────────────────────────┬────────────────────────────────────────┘
POST /start-generation
┌────────────────────────────┴────────────────────────────────────────┐
│ START GENERATION FUNCTION │
│ • Validates user auth │
│ • Creates generation record │
│ • Enqueues 'generate-image' job │
│ • Returns immediately with generation_id │
└────────────────────────────┬────────────────────────────────────────┘
↓ Job inserted into queue
┌────────────────────────────┴────────────────────────────────────────┐
│ JOB QUEUE (Database) │
│ • job_queue table │
│ • Stores: job_type, payload, status, priority │
│ • Atomic claiming with SKIP LOCKED │
└────────────────────────────┬────────────────────────────────────────┘
↓ pg_cron triggers every minute
┌────────────────────────────┴────────────────────────────────────────┐
│ PROCESS JOBS WORKER │
│ • Claims up to 3 jobs in parallel │
│ • Routes to appropriate handler │
│ • Handles errors and retries │
└──────┬──────────────────────────────────────────────┬───────────────┘
│ │
↓ generate-image job ↓ download-image job
┌──────┴──────────────────┐ ┌──────────┴───────────────┐
│ PROCESS GENERATION │ │ DOWNLOAD & STORE │
│ • Builds model params │ │ • Downloads image │
│ • Calls Replicate API │ │ • Uploads to Storage │
│ • Polls for completion │──────────────│ • Creates image record │
│ • Enqueues download job │ │ • Marks as completed │
└─────────────────────────┘ └──────────────────────────┘
```
## Edge Functions
### 1. start-generation
**Purpose**: Accept generation request and enqueue for processing
**Flow**:
1. Validate user authentication
2. Validate model configuration
3. Create generation record (status: 'pending')
4. Enqueue 'generate-image' job
5. Return immediately with generation_id
**Key Feature**: No waiting! Returns in ~100ms
**Location**: `supabase/functions/start-generation/index.ts`
---
### 2. process-jobs (Worker)
**Purpose**: Background worker that processes queued jobs
**Flow**:
1. Triggered by pg_cron every minute
2. Claims next 3 available jobs (parallel processing)
3. Routes to appropriate handler based on job_type
4. Updates job status and handles retries
5. Returns summary of processed jobs
**Supported Job Types**:
- `generate-image`: Start Replicate generation
- `download-image`: Download and store result
**Configuration**:
- `MAX_PARALLEL_JOBS = 3`
- `JOB_TIMEOUT_MS = 600000` (10 minutes)
**Location**: `supabase/functions/process-jobs/index.ts`
---
### 3. process-generation (Module)
**Purpose**: Handle Replicate API interaction
**Flow**:
1. Calculate aspect ratios for model
2. Handle img2img conversion if needed
3. Build model-specific input parameters
4. Call Replicate API to start prediction
5. Poll every 2 seconds until complete
6. Return output URL and metadata
**Supported Models** (15+):
- FLUX (Schnell, Dev, Krea Dev, 1.1 Pro)
- SDXL (Regular, Lightning)
- Ideogram V3 Turbo
- Imagen 4 Fast
- Stable Diffusion 3.5
- SeeDream 3/4
- Recraft V3 (raster & SVG)
- Qwen Image
**Key Features**:
- Model-specific parameter handling
- Automatic aspect ratio mapping
- Image-to-image support
- Format detection
**Location**: `supabase/functions/process-generation/index.ts`
---
### 4. generate-image (Legacy)
**Status**: Keep for now, will be deprecated
The original 667-line monolithic function. Still works but doesn't use the queue system. Will be gradually phased out as the queue system proves stable.
**Location**: `supabase/functions/generate-image/index.ts`
## Database Schema
### Tables
#### image_generations
Tracks generation requests and status.
```sql
CREATE TABLE image_generations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES auth.users(id),
prompt TEXT NOT NULL,
negative_prompt TEXT,
model TEXT NOT NULL,
style TEXT,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
steps INTEGER NOT NULL,
guidance_scale NUMERIC NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
error_message TEXT,
generation_time_seconds INTEGER,
replicate_prediction_id TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
completed_at TIMESTAMPTZ
);
-- Status values: pending, queued, processing, downloading, completed, failed
```
#### job_queue
Queue for async job processing.
```sql
CREATE TABLE job_queue (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_type TEXT NOT NULL,
payload JSONB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
priority INTEGER NOT NULL DEFAULT 0,
attempt_number INTEGER NOT NULL DEFAULT 0,
max_attempts INTEGER NOT NULL DEFAULT 3,
result JSONB,
error_message TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
completed_at TIMESTAMPTZ
);
CREATE INDEX idx_job_queue_pending
ON job_queue(status, priority DESC, created_at ASC)
WHERE status = 'pending';
-- Status values: pending, processing, completed, failed
```
#### images
Stores generated image metadata.
```sql
CREATE TABLE images (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
generation_id UUID REFERENCES image_generations(id),
user_id UUID NOT NULL REFERENCES auth.users(id),
filename TEXT NOT NULL,
storage_path TEXT NOT NULL,
public_url TEXT NOT NULL,
file_size INTEGER NOT NULL,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
format TEXT NOT NULL,
prompt TEXT,
negative_prompt TEXT,
model TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```
### Database Functions
#### enqueue_job(job_type, payload, priority, max_attempts)
Adds a new job to the queue.
```sql
CREATE OR REPLACE FUNCTION enqueue_job(
p_job_type TEXT,
p_payload JSONB,
p_priority INTEGER DEFAULT 0,
p_max_attempts INTEGER DEFAULT 3
)
RETURNS UUID AS $$
DECLARE
v_job_id UUID;
BEGIN
INSERT INTO job_queue (job_type, payload, priority, max_attempts)
VALUES (p_job_type, p_payload, p_priority, p_max_attempts)
RETURNING id INTO v_job_id;
RETURN v_job_id;
END;
$$ LANGUAGE plpgsql;
```
#### claim_next_job()
Atomically claims the next available job.
```sql
CREATE OR REPLACE FUNCTION claim_next_job()
RETURNS TABLE(
id UUID,
job_type TEXT,
payload JSONB,
attempt_number INTEGER,
max_attempts INTEGER
) AS $$
BEGIN
RETURN QUERY
UPDATE job_queue
SET
status = 'processing',
attempt_number = attempt_number + 1,
updated_at = now()
WHERE id = (
SELECT id FROM job_queue
WHERE status = 'pending'
ORDER BY priority DESC, created_at ASC
FOR UPDATE SKIP LOCKED
LIMIT 1
)
RETURNING
job_queue.id,
job_queue.job_type,
job_queue.payload,
job_queue.attempt_number,
job_queue.max_attempts;
END;
$$ LANGUAGE plpgsql;
```
#### complete_job(job_id, result, error)
Marks job as completed or failed. Handles retries.
```sql
CREATE OR REPLACE FUNCTION complete_job(
p_job_id UUID,
p_result JSONB DEFAULT NULL,
p_error TEXT DEFAULT NULL
)
RETURNS VOID AS $$
DECLARE
v_job RECORD;
BEGIN
SELECT * INTO v_job FROM job_queue WHERE id = p_job_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'Job not found: %', p_job_id;
END IF;
-- If error and retries remain, reset to pending
IF p_error IS NOT NULL AND v_job.attempt_number < v_job.max_attempts THEN
UPDATE job_queue
SET
status = 'pending',
error_message = p_error,
updated_at = now()
WHERE id = p_job_id;
-- If error and no retries, mark as failed
ELSIF p_error IS NOT NULL THEN
UPDATE job_queue
SET
status = 'failed',
error_message = p_error,
completed_at = now(),
updated_at = now()
WHERE id = p_job_id;
-- Success - mark as completed
ELSE
UPDATE job_queue
SET
status = 'completed',
result = p_result,
completed_at = now(),
updated_at = now()
WHERE id = p_job_id;
END IF;
END;
$$ LANGUAGE plpgsql;
```
## Job Flow Example
### End-to-End Flow
```
1. User submits generation request
└─> POST /functions/v1/start-generation
{
"prompt": "A beautiful sunset",
"model_id": "black-forest-labs/flux-schnell",
"width": 1024,
"height": 1024
}
2. start-generation function
├─> Creates image_generations record (id: gen-123, status: 'pending')
├─> Calls enqueue_job('generate-image', {...})
├─> Updates generation (status: 'queued')
└─> Returns { generation_id: 'gen-123', status: 'queued' }
⏱️ ~100ms response time
3. job_queue table
└─> New row: { id: 'job-456', job_type: 'generate-image', status: 'pending' }
4. pg_cron triggers (every minute)
└─> POST /functions/v1/process-jobs
5. process-jobs worker
├─> Calls claim_next_job() → Returns job-456
├─> Updates job (status: 'processing', attempt: 1)
└─> Routes to processGenerateImageJob()
6. processGenerateImageJob
├─> Updates generation (status: 'processing')
├─> Calls processGeneration() from process-generation module
│ ├─> Builds model input
│ ├─> Calls Replicate API → prediction-789
│ ├─> Polls every 2 seconds
│ └─> Returns { output_url: 'https://...', format: 'webp' }
├─> Calls enqueue_job('download-image', {...})
├─> Updates generation (status: 'downloading')
└─> Calls complete_job(job-456, result)
⏱️ ~30 seconds for FLUX Schnell
7. job_queue table
└─> New row: { id: 'job-789', job_type: 'download-image', status: 'pending' }
8. Next pg_cron trigger
└─> process-jobs claims job-789
9. processDownloadImageJob
├─> Downloads image from output_url
├─> Uploads to Supabase Storage (bucket: generated-images)
├─> Creates images record (id: img-999)
├─> Updates generation (status: 'completed')
└─> Calls complete_job(job-789, result)
⏱️ ~2-5 seconds
10. User sees completed image
└─> Polling generation status or real-time subscription
{ status: 'completed', image_url: 'https://...' }
```
## Status Flow
### Generation Status Lifecycle
```
pending
queued (job enqueued)
processing (Replicate API called)
downloading (image generation complete, downloading)
completed (image stored and ready)
OR
failed (error at any step)
```
### Job Status Lifecycle
```
pending
processing (claimed by worker)
completed (success)
OR
failed (max attempts reached)
OR
pending (retry if attempts remain)
```
## Monitoring & Observability
### Key Metrics
1. **Queue Depth**
```sql
SELECT COUNT(*) FROM job_queue WHERE status = 'pending';
```
2. **Processing Rate**
```sql
SELECT
COUNT(*) as total_jobs,
COUNT(*) FILTER (WHERE completed_at > now() - interval '1 hour') as last_hour
FROM job_queue
WHERE status = 'completed';
```
3. **Error Rate**
```sql
SELECT
COUNT(*) FILTER (WHERE status = 'failed') * 100.0 / COUNT(*) as error_rate_pct
FROM job_queue
WHERE created_at > now() - interval '24 hours';
```
4. **Average Generation Time**
```sql
SELECT AVG(generation_time_seconds) as avg_time
FROM image_generations
WHERE status = 'completed'
AND created_at > now() - interval '24 hours';
```
### Logs
All Edge Functions log to Supabase Edge Function Logs:
- Job claiming and processing
- Replicate API calls
- Database updates
- Errors with stack traces
Access via: Supabase Dashboard → Edge Functions → Logs
### Alerts
Set up alerts for:
- Queue depth > threshold (e.g., 100 jobs)
- High error rate (> 10%)
- Jobs stuck in 'processing' (> 15 minutes)
- No jobs processed in last 5 minutes
## Performance Characteristics
### Current Configuration
- **Throughput**: ~180 generations/hour
- 60 invocations/hour × 3 jobs/invocation = 180 jobs/hour
- **Latency**:
- Enqueue: ~100ms
- FLUX Schnell: ~30 seconds
- SDXL: ~60 seconds
- Download/Store: ~2-5 seconds
- **Concurrency**: 3 parallel jobs
### Scaling Strategies
#### Vertical Scaling (Single Worker)
```typescript
// Increase parallel jobs
const MAX_PARALLEL_JOBS = 10; // 600 jobs/hour
```
#### Horizontal Scaling (Multiple Workers)
```sql
-- Increase cron frequency
SELECT cron.schedule('...', '*/30 * * * * *', ...); -- Every 30 seconds
-- Result: ~360 jobs/hour with 3 parallel jobs
```
#### Hybrid Scaling
- 10 parallel jobs + 30-second interval = ~1,200 jobs/hour
- Queue system uses SKIP LOCKED for safe concurrency
### Bottlenecks
1. **Replicate API**: Rate limits vary by model
2. **Edge Function Runtime**: Max 150 seconds default (configurable)
3. **Database Connections**: Connection pool size
4. **Storage Bandwidth**: Image upload/download speed
## Error Handling & Recovery
### Retry Strategy
1. **Automatic Retries**:
- Jobs retry up to `max_attempts` (default: 3)
- Exponential backoff via pg_cron interval
2. **Manual Recovery**:
```sql
-- Reset stuck jobs
UPDATE job_queue
SET status = 'pending', attempt_number = 0
WHERE status = 'processing'
AND updated_at < now() - interval '15 minutes';
```
3. **Generation Cleanup**:
```sql
-- Mark abandoned generations as failed
UPDATE image_generations
SET status = 'failed', error_message = 'Timeout'
WHERE status IN ('processing', 'downloading')
AND updated_at < now() - interval '30 minutes';
```
### Common Issues
#### Jobs Not Processing
- **Check**: pg_cron installed and scheduled
- **Fix**: `SELECT cron.schedule(...);`
#### High Queue Depth
- **Check**: Worker processing rate vs. incoming rate
- **Fix**: Increase `MAX_PARALLEL_JOBS` or cron frequency
#### Failed Jobs
- **Check**: Job error messages in `job_queue.error_message`
- **Fix**: Address root cause, then reset jobs to pending
## Security
### Authentication
- start-generation: Requires valid user auth token
- process-jobs: Service role access (no user context needed)
### Authorization
- Users can only create generations for themselves
- RLS policies on tables enforce user isolation
### API Keys
- Replicate API token stored in Edge Function secrets
- Never exposed to client
## Testing
### Local Development
```bash
# Start Supabase locally
npx supabase start
# Serve functions
npx supabase functions serve
# Test in separate terminals
curl -X POST http://localhost:54321/functions/v1/start-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-d '{"prompt":"test","model_id":"black-forest-labs/flux-schnell",...}'
curl -X POST http://localhost:54321/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
### Integration Tests
1. Enqueue job via start-generation
2. Manually trigger process-jobs
3. Verify generation status progression
4. Verify image is stored correctly
## Deployment
### Deploy Functions
```bash
# Deploy all functions
npx supabase functions deploy start-generation
npx supabase functions deploy process-generation
npx supabase functions deploy process-jobs
```
### Set Up pg_cron
```sql
-- Enable pg_cron extension
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Schedule worker to run every minute
SELECT cron.schedule(
'process-jobs-worker',
'* * * * *',
$$
SELECT net.http_post(
'https://your-project.supabase.co/functions/v1/process-jobs',
'{}',
'{"Content-Type": "application/json"}'::jsonb
)
$$
);
-- Verify schedule
SELECT * FROM cron.job;
```
### Environment Variables
Required in Supabase Edge Function settings:
- `REPLICATE_API_TOKEN` or `REPLICATE_API_KEY`
- `SUPABASE_URL` (auto-provided)
- `SUPABASE_ANON_KEY` (auto-provided)
- `SUPABASE_SERVICE_ROLE_KEY` (auto-provided)
## Migration from Legacy System
### Current State
- Legacy `generate-image` function still active
- New queue system running in parallel
### Migration Steps
1. **Phase 1: Parallel Run** (Current)
- Both systems active
- New features use queue system
- Monitor queue system stability
2. **Phase 2: Gradual Cutover**
- Update mobile/web clients to use start-generation
- Monitor error rates and performance
- Keep legacy function for fallback
3. **Phase 3: Deprecation**
- Disable legacy function
- Remove old code
- Update documentation
### Rollback Plan
If issues arise, simply revert clients to use legacy `generate-image` function.
## Future Enhancements
### Short Term
- [ ] Add job priority scheduling
- [ ] Implement progress tracking (0-100%)
- [ ] Add webhook notifications
- [ ] Implement job cancellation
### Medium Term
- [ ] Batch generation support
- [ ] Advanced retry strategies (exponential backoff)
- [ ] Dead letter queue for failed jobs
- [ ] Real-time status updates via Supabase Realtime
### Long Term
- [ ] Multi-region deployment
- [ ] Cost tracking per generation
- [ ] A/B testing framework for models
- [ ] ML-based queue optimization
## References
### Documentation
- [Supabase Edge Functions](https://supabase.com/docs/guides/functions)
- [Replicate API](https://replicate.com/docs)
- [pg_cron](https://github.com/citusdata/pg_cron)
### Related Files
- `/apps/mobile/supabase/functions/start-generation/index.ts`
- `/apps/mobile/supabase/functions/process-jobs/index.ts`
- `/apps/mobile/supabase/functions/process-generation/index.ts`
- `/apps/mobile/supabase/functions/generate-image/index.ts` (legacy)

View file

@ -1,694 +0,0 @@
# Deployment Guide - Image Generation Queue System
## Prerequisites
Before deploying, ensure you have:
1. **Supabase CLI** installed and authenticated
```bash
npm install -g supabase
supabase login
supabase link --project-ref YOUR_PROJECT_REF
```
2. **Replicate API Token**
- Sign up at [replicate.com](https://replicate.com)
- Generate API token from dashboard
- Have it ready for Edge Function secrets
3. **Database Extensions**
- `pg_cron` extension enabled
- `http` extension enabled (for net.http_post)
## Step 1: Create Database Schema
Run these SQL commands in Supabase SQL Editor:
### 1.1 Enable Required Extensions
```sql
-- Enable pg_cron for scheduled jobs
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Enable http for making HTTP requests from cron
CREATE EXTENSION IF NOT EXISTS http;
```
### 1.2 Create job_queue Table
```sql
CREATE TABLE IF NOT EXISTS job_queue (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_type TEXT NOT NULL,
payload JSONB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
priority INTEGER NOT NULL DEFAULT 0,
attempt_number INTEGER NOT NULL DEFAULT 0,
max_attempts INTEGER NOT NULL DEFAULT 3,
result JSONB,
error_message TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
completed_at TIMESTAMPTZ,
CONSTRAINT job_queue_status_check CHECK (status IN ('pending', 'processing', 'completed', 'failed'))
);
-- Index for efficient job claiming
CREATE INDEX IF NOT EXISTS idx_job_queue_pending
ON job_queue(status, priority DESC, created_at ASC)
WHERE status = 'pending';
-- Index for monitoring
CREATE INDEX IF NOT EXISTS idx_job_queue_created_at
ON job_queue(created_at DESC);
-- Index for job type filtering
CREATE INDEX IF NOT EXISTS idx_job_queue_type
ON job_queue(job_type, status);
```
### 1.3 Create Database Functions
**enqueue_job()** - Add job to queue:
```sql
CREATE OR REPLACE FUNCTION enqueue_job(
p_job_type TEXT,
p_payload JSONB,
p_priority INTEGER DEFAULT 0,
p_max_attempts INTEGER DEFAULT 3
)
RETURNS UUID AS $$
DECLARE
v_job_id UUID;
BEGIN
INSERT INTO job_queue (job_type, payload, priority, max_attempts)
VALUES (p_job_type, p_payload, p_priority, p_max_attempts)
RETURNING id INTO v_job_id;
RETURN v_job_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```
**claim_next_job()** - Atomically claim next job:
```sql
CREATE OR REPLACE FUNCTION claim_next_job()
RETURNS TABLE(
id UUID,
job_type TEXT,
payload JSONB,
attempt_number INTEGER,
max_attempts INTEGER
) AS $$
BEGIN
RETURN QUERY
UPDATE job_queue
SET
status = 'processing',
attempt_number = attempt_number + 1,
updated_at = now()
WHERE id = (
SELECT job_queue.id
FROM job_queue
WHERE job_queue.status = 'pending'
ORDER BY job_queue.priority DESC, job_queue.created_at ASC
FOR UPDATE SKIP LOCKED
LIMIT 1
)
RETURNING
job_queue.id,
job_queue.job_type,
job_queue.payload,
job_queue.attempt_number,
job_queue.max_attempts;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```
**complete_job()** - Mark job as complete or failed:
```sql
CREATE OR REPLACE FUNCTION complete_job(
p_job_id UUID,
p_result JSONB DEFAULT NULL,
p_error TEXT DEFAULT NULL
)
RETURNS VOID AS $$
DECLARE
v_job RECORD;
BEGIN
-- Get current job state
SELECT * INTO v_job FROM job_queue WHERE id = p_job_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'Job not found: %', p_job_id;
END IF;
-- If error and retries remain, reset to pending
IF p_error IS NOT NULL AND v_job.attempt_number < v_job.max_attempts THEN
UPDATE job_queue
SET
status = 'pending',
error_message = p_error,
updated_at = now()
WHERE id = p_job_id;
-- If error and no retries left, mark as failed
ELSIF p_error IS NOT NULL THEN
UPDATE job_queue
SET
status = 'failed',
error_message = p_error,
completed_at = now(),
updated_at = now()
WHERE id = p_job_id;
-- Success - mark as completed
ELSE
UPDATE job_queue
SET
status = 'completed',
result = p_result,
error_message = NULL,
completed_at = now(),
updated_at = now()
WHERE id = p_job_id;
END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```
### 1.4 Update image_generations Table
Add new status values if not already present:
```sql
-- Add 'queued' and 'downloading' statuses
-- Adjust the check constraint if it exists
ALTER TABLE image_generations
DROP CONSTRAINT IF EXISTS image_generations_status_check;
ALTER TABLE image_generations
ADD CONSTRAINT image_generations_status_check
CHECK (status IN ('pending', 'queued', 'processing', 'downloading', 'completed', 'failed'));
```
## Step 2: Deploy Edge Functions
### 2.1 Deploy Functions
```bash
# From the root of your project
cd apps/mobile
# Deploy all functions
npx supabase functions deploy start-generation
npx supabase functions deploy process-generation
npx supabase functions deploy process-jobs
```
### 2.2 Set Environment Secrets
```bash
# Set Replicate API token
npx supabase secrets set REPLICATE_API_TOKEN=your_replicate_token_here
# Verify secrets are set
npx supabase secrets list
```
## Step 3: Set Up Cron Job
### 3.1 Schedule process-jobs Worker
Run in Supabase SQL Editor:
```sql
-- Schedule worker to run every minute
SELECT cron.schedule(
'process-jobs-worker',
'* * * * *', -- Every minute
$$
SELECT net.http_post(
url := 'https://YOUR_PROJECT_REF.supabase.co/functions/v1/process-jobs',
body := '{}'::jsonb,
headers := '{"Content-Type": "application/json"}'::jsonb
) as request_id;
$$
);
```
**Important**: Replace `YOUR_PROJECT_REF` with your actual Supabase project reference.
### 3.2 Verify Cron Job
```sql
-- List all cron jobs
SELECT * FROM cron.job;
-- View recent cron job runs
SELECT * FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 10;
```
### 3.3 (Optional) Adjust Frequency
For higher throughput, run more frequently:
```sql
-- Every 30 seconds (requires pg_cron 1.5+)
SELECT cron.schedule(
'process-jobs-worker',
'*/30 * * * * *', -- Every 30 seconds
$$ ... $$
);
-- To update existing job
SELECT cron.unschedule('process-jobs-worker');
-- Then create new schedule
```
## Step 4: Testing
### 4.1 Manual Function Test
Test start-generation:
```bash
curl -X POST https://YOUR_PROJECT_REF.supabase.co/functions/v1/start-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "A beautiful sunset over mountains",
"model_id": "black-forest-labs/flux-schnell",
"width": 1024,
"height": 1024,
"num_inference_steps": 4,
"guidance_scale": 7.5
}'
```
Expected response:
```json
{
"success": true,
"generation_id": "uuid-here",
"job_id": "uuid-here",
"status": "queued",
"message": "Image generation started. You will be notified when complete."
}
```
### 4.2 Manually Trigger Worker
```bash
curl -X POST https://YOUR_PROJECT_REF.supabase.co/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
Expected response:
```json
{
"success": true,
"processed": 1,
"errors": 0,
"message": "Processed 1 job(s) with 0 error(s)"
}
```
### 4.3 Check Job Queue
```sql
-- View pending jobs
SELECT * FROM job_queue
WHERE status = 'pending'
ORDER BY created_at DESC;
-- View recent completed jobs
SELECT * FROM job_queue
WHERE status = 'completed'
ORDER BY completed_at DESC
LIMIT 10;
-- View failed jobs
SELECT id, job_type, error_message, attempt_number
FROM job_queue
WHERE status = 'failed'
ORDER BY created_at DESC;
```
### 4.4 Check Generation Status
```sql
-- View recent generations
SELECT id, prompt, status, error_message, created_at, completed_at
FROM image_generations
ORDER BY created_at DESC
LIMIT 10;
-- Check specific generation
SELECT * FROM image_generations
WHERE id = 'YOUR_GENERATION_ID';
```
### 4.5 End-to-End Test
1. Submit generation request via start-generation
2. Note the generation_id and job_id
3. Wait ~1 minute for cron to trigger (or manually trigger process-jobs)
4. Check generation status (should go: queued → processing → downloading → completed)
5. Verify image appears in images table
6. Verify image is in Storage bucket
## Step 5: Monitoring Setup
### 5.1 Create Monitoring Views
```sql
-- Queue health view
CREATE OR REPLACE VIEW queue_health AS
SELECT
COUNT(*) FILTER (WHERE status = 'pending') as pending_jobs,
COUNT(*) FILTER (WHERE status = 'processing') as processing_jobs,
COUNT(*) FILTER (WHERE status = 'completed' AND completed_at > now() - interval '1 hour') as completed_last_hour,
COUNT(*) FILTER (WHERE status = 'failed' AND updated_at > now() - interval '1 hour') as failed_last_hour,
AVG(EXTRACT(EPOCH FROM (completed_at - created_at))) FILTER (WHERE status = 'completed' AND completed_at > now() - interval '1 hour') as avg_processing_time_seconds
FROM job_queue;
-- View queue health
SELECT * FROM queue_health;
```
### 5.2 Set Up Alerts
Create alerts for:
1. **High Queue Depth**
```sql
SELECT COUNT(*) FROM job_queue WHERE status = 'pending';
-- Alert if > 50
```
2. **Stuck Jobs**
```sql
SELECT COUNT(*) FROM job_queue
WHERE status = 'processing'
AND updated_at < now() - interval '15 minutes';
-- Alert if > 0
```
3. **High Error Rate**
```sql
SELECT
COUNT(*) FILTER (WHERE status = 'failed') * 100.0 / COUNT(*) as error_rate
FROM job_queue
WHERE created_at > now() - interval '1 hour';
-- Alert if > 10%
```
### 5.3 Edge Function Logs
View logs in Supabase Dashboard:
1. Go to Edge Functions
2. Select function (process-jobs, start-generation, etc.)
3. Click "Logs" tab
4. Filter by time range and log level
## Step 6: Client Integration
### 6.1 Update API Calls
**Before (Old System):**
```typescript
// Direct call that waits for completion
const response = await supabase.functions.invoke('generate-image', {
body: { prompt, model_id, ... }
});
// Wait ~30-60 seconds for response
```
**After (New Queue System):**
```typescript
// 1. Enqueue generation (instant response)
const { data } = await supabase.functions.invoke('start-generation', {
body: { prompt, model_id, ... }
});
const generationId = data.generation_id;
// 2. Poll for completion
const checkStatus = async () => {
const { data: generation } = await supabase
.from('image_generations')
.select('*, images(*)')
.eq('id', generationId)
.single();
return generation;
};
// Poll every 2 seconds
const pollInterval = setInterval(async () => {
const generation = await checkStatus();
if (generation.status === 'completed') {
clearInterval(pollInterval);
// Show image: generation.images[0].public_url
} else if (generation.status === 'failed') {
clearInterval(pollInterval);
// Show error: generation.error_message
}
}, 2000);
```
### 6.2 Real-Time Subscription (Better UX)
```typescript
// 1. Enqueue generation
const { data } = await supabase.functions.invoke('start-generation', {
body: { prompt, model_id, ... }
});
const generationId = data.generation_id;
// 2. Subscribe to real-time updates
const subscription = supabase
.channel(`generation:${generationId}`)
.on(
'postgres_changes',
{
event: 'UPDATE',
schema: 'public',
table: 'image_generations',
filter: `id=eq.${generationId}`
},
(payload) => {
const generation = payload.new;
if (generation.status === 'completed') {
// Fetch image record
supabase
.from('images')
.select('*')
.eq('generation_id', generationId)
.single()
.then(({ data: image }) => {
// Show image: image.public_url
});
} else if (generation.status === 'failed') {
// Show error: generation.error_message
}
// Update UI with current status
console.log('Status:', generation.status);
}
)
.subscribe();
```
## Step 7: Scaling Configuration
### 7.1 Increase Parallel Jobs
Edit `apps/mobile/supabase/functions/process-jobs/index.ts`:
```typescript
const MAX_PARALLEL_JOBS = 10; // Increase from 3 to 10
```
Then redeploy:
```bash
npx supabase functions deploy process-jobs
```
### 7.2 Increase Cron Frequency
```sql
-- Every 30 seconds instead of 60
SELECT cron.unschedule('process-jobs-worker');
SELECT cron.schedule(
'process-jobs-worker',
'*/30 * * * * *',
$$ ... $$
);
```
### 7.3 Resource Monitoring
Monitor these metrics:
- Edge Function invocation count
- Edge Function duration
- Database CPU usage
- Database connection count
- Storage bandwidth
Adjust scaling parameters based on:
- Replicate API rate limits
- Database capacity
- Budget constraints
## Rollback Plan
If issues arise, rollback to legacy system:
1. **Stop Cron Job**
```sql
SELECT cron.unschedule('process-jobs-worker');
```
2. **Revert Client Code**
Use direct calls to `generate-image` function
3. **Investigation**
- Check Edge Function logs
- Check job_queue table for errors
- Check cron.job_run_details for cron issues
4. **Re-enable When Fixed**
```sql
SELECT cron.schedule(...);
```
## Troubleshooting
### Jobs Not Being Processed
**Check 1**: Is cron job scheduled?
```sql
SELECT * FROM cron.job WHERE jobname = 'process-jobs-worker';
```
**Check 2**: Are cron jobs running?
```sql
SELECT * FROM cron.job_run_details
WHERE jobid = (SELECT jobid FROM cron.job WHERE jobname = 'process-jobs-worker')
ORDER BY start_time DESC
LIMIT 5;
```
**Check 3**: Can cron make HTTP requests?
```sql
-- Test net.http_post
SELECT net.http_post(
url := 'https://YOUR_PROJECT_REF.supabase.co/functions/v1/process-jobs',
body := '{}'::jsonb,
headers := '{"Content-Type": "application/json"}'::jsonb
);
```
### High Error Rate
**Check**: What errors are occurring?
```sql
SELECT error_message, COUNT(*)
FROM job_queue
WHERE status = 'failed'
AND created_at > now() - interval '24 hours'
GROUP BY error_message
ORDER BY count DESC;
```
Common fixes:
- Replicate API token invalid/expired
- Invalid model_id in payload
- Network issues (transient, will retry)
### Stuck in Processing
**Check**: Jobs stuck in 'processing'?
```sql
SELECT id, job_type, attempt_number, updated_at
FROM job_queue
WHERE status = 'processing'
AND updated_at < now() - interval '15 minutes';
```
**Fix**: Reset to pending
```sql
UPDATE job_queue
SET status = 'pending', attempt_number = 0
WHERE status = 'processing'
AND updated_at < now() - interval '15 minutes';
```
## Performance Benchmarks
Expected performance with default configuration:
- **Enqueue latency**: ~100ms
- **Queue throughput**: ~180 jobs/hour (3 parallel × 60 invocations)
- **FLUX Schnell generation**: ~30 seconds
- **SDXL generation**: ~60 seconds
- **Download/store**: ~2-5 seconds
- **Total (FLUX Schnell)**: ~35-40 seconds end-to-end
Scaled configuration (10 parallel, 30-second interval):
- **Queue throughput**: ~1,200 jobs/hour
## Maintenance
### Regular Cleanup
Clean up old completed jobs (optional):
```sql
-- Delete completed jobs older than 7 days
DELETE FROM job_queue
WHERE status = 'completed'
AND completed_at < now() - interval '7 days';
-- Or archive them
CREATE TABLE job_queue_archive AS
SELECT * FROM job_queue
WHERE status IN ('completed', 'failed')
AND completed_at < now() - interval '30 days';
DELETE FROM job_queue
WHERE id IN (SELECT id FROM job_queue_archive);
```
Set up as a cron job:
```sql
SELECT cron.schedule(
'cleanup-old-jobs',
'0 2 * * *', -- Daily at 2 AM
$$
DELETE FROM job_queue
WHERE status = 'completed'
AND completed_at < now() - interval '7 days';
$$
);
```
## Support
For issues or questions:
1. Check Edge Function logs in Supabase Dashboard
2. Check `job_queue` table for error messages
3. Review ARCHITECTURE.md for system design
4. Check function-specific README.md files

View file

@ -1,369 +0,0 @@
# Quick Reference - Image Generation Queue System
## Quick Commands
### Deploy Functions
```bash
cd apps/mobile
npx supabase functions deploy start-generation
npx supabase functions deploy process-generation
npx supabase functions deploy process-jobs
```
### Set Secrets
```bash
npx supabase secrets set REPLICATE_API_TOKEN=your_token_here
npx supabase secrets list
```
### Test Functions
```bash
# Test start-generation
curl -X POST https://YOUR_PROJECT.supabase.co/functions/v1/start-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-H "Content-Type: application/json" \
-d '{"prompt":"test","model_id":"black-forest-labs/flux-schnell","width":1024,"height":1024,"num_inference_steps":4,"guidance_scale":7.5}'
# Manually trigger worker
curl -X POST https://YOUR_PROJECT.supabase.co/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
## Quick SQL Queries
### Monitor Queue
```sql
-- Queue overview
SELECT status, COUNT(*) FROM job_queue GROUP BY status;
-- Recent jobs
SELECT id, job_type, status, created_at, completed_at
FROM job_queue
ORDER BY created_at DESC
LIMIT 20;
-- Failed jobs with errors
SELECT id, job_type, error_message, attempt_number, created_at
FROM job_queue
WHERE status = 'failed'
ORDER BY created_at DESC
LIMIT 10;
-- Jobs in progress
SELECT id, job_type, attempt_number, updated_at,
EXTRACT(EPOCH FROM (now() - updated_at))::INTEGER as seconds_in_processing
FROM job_queue
WHERE status = 'processing'
ORDER BY updated_at;
```
### Monitor Generations
```sql
-- Recent generations
SELECT id, prompt, status, created_at, completed_at,
generation_time_seconds
FROM image_generations
ORDER BY created_at DESC
LIMIT 20;
-- Status breakdown
SELECT status, COUNT(*) FROM image_generations GROUP BY status;
-- Failed generations
SELECT id, prompt, error_message, created_at
FROM image_generations
WHERE status = 'failed'
ORDER BY created_at DESC
LIMIT 10;
```
### Monitor Cron
```sql
-- List cron jobs
SELECT * FROM cron.job;
-- Recent cron runs
SELECT jobid, start_time, end_time, status, return_message
FROM cron.job_run_details
WHERE jobid = (SELECT jobid FROM cron.job WHERE jobname = 'process-jobs-worker')
ORDER BY start_time DESC
LIMIT 20;
```
## Quick Fixes
### Reset Stuck Jobs
```sql
-- Reset jobs stuck in processing (over 15 minutes)
UPDATE job_queue
SET status = 'pending', attempt_number = 0
WHERE status = 'processing'
AND updated_at < now() - interval '15 minutes';
```
### Reset Failed Job
```sql
-- Reset a specific failed job to retry
UPDATE job_queue
SET status = 'pending', attempt_number = 0, error_message = NULL
WHERE id = 'JOB_UUID_HERE';
```
### Clean Old Jobs
```sql
-- Delete completed jobs older than 7 days
DELETE FROM job_queue
WHERE status = 'completed'
AND completed_at < now() - interval '7 days';
```
### Re-schedule Cron Job
```sql
-- Remove existing
SELECT cron.unschedule('process-jobs-worker');
-- Re-add (every minute)
SELECT cron.schedule(
'process-jobs-worker',
'* * * * *',
$$
SELECT net.http_post(
url := 'https://YOUR_PROJECT_REF.supabase.co/functions/v1/process-jobs',
body := '{}'::jsonb,
headers := '{"Content-Type": "application/json"}'::jsonb
) as request_id;
$$
);
```
## Client Code Snippets
### Submit Generation (React Native/Web)
```typescript
import { supabase } from '@picture/shared';
async function generateImage(prompt: string, modelId: string) {
const { data, error } = await supabase.functions.invoke('start-generation', {
body: {
prompt,
model_id: modelId,
width: 1024,
height: 1024,
num_inference_steps: 4,
guidance_scale: 7.5
}
});
if (error) throw error;
return data.generation_id;
}
```
### Poll for Completion
```typescript
async function pollGeneration(generationId: string) {
return new Promise((resolve, reject) => {
const interval = setInterval(async () => {
const { data: generation } = await supabase
.from('image_generations')
.select('*, images(*)')
.eq('id', generationId)
.single();
if (generation.status === 'completed') {
clearInterval(interval);
resolve(generation.images[0]);
} else if (generation.status === 'failed') {
clearInterval(interval);
reject(new Error(generation.error_message));
}
}, 2000);
});
}
```
### Real-Time Subscription
```typescript
function subscribeToGeneration(
generationId: string,
onUpdate: (status: string) => void,
onComplete: (imageUrl: string) => void,
onError: (error: string) => void
) {
const subscription = supabase
.channel(`generation:${generationId}`)
.on(
'postgres_changes',
{
event: 'UPDATE',
schema: 'public',
table: 'image_generations',
filter: `id=eq.${generationId}`
},
async (payload) => {
const generation = payload.new;
onUpdate(generation.status);
if (generation.status === 'completed') {
const { data: image } = await supabase
.from('images')
.select('public_url')
.eq('generation_id', generationId)
.single();
onComplete(image.public_url);
subscription.unsubscribe();
} else if (generation.status === 'failed') {
onError(generation.error_message);
subscription.unsubscribe();
}
}
)
.subscribe();
return () => subscription.unsubscribe();
}
```
## Configuration Values
### Default Settings
```typescript
// process-jobs/index.ts
const MAX_PARALLEL_JOBS = 3; // Jobs processed per invocation
const JOB_TIMEOUT_MS = 600000; // 10 minutes per job
// Cron schedule
'* * * * *' // Every minute
// Job defaults
max_attempts: 3 // Retry up to 3 times
priority: 0 // Default priority
```
### Scaling Settings
```typescript
// For higher throughput
const MAX_PARALLEL_JOBS = 10; // Process 10 jobs at once
'*/30 * * * * *' // Every 30 seconds
// Result: ~1,200 jobs/hour
```
## Status Values
### Generation Status
- `pending` - Just created
- `queued` - Job enqueued
- `processing` - Replicate API called
- `downloading` - Image being downloaded
- `completed` - Done successfully
- `failed` - Error occurred
### Job Status
- `pending` - Waiting to be processed
- `processing` - Currently being worked on
- `completed` - Successfully finished
- `failed` - Failed after max attempts
## Model IDs Reference
### Fast Models (< 5 seconds)
```typescript
'black-forest-labs/flux-schnell' // FLUX Schnell (4 steps)
'bytedance/sdxl-lightning-4step' // SDXL Lightning
```
### Quality Models (30-60 seconds)
```typescript
'black-forest-labs/flux-dev' // FLUX Dev
'black-forest-labs/flux-1.1-pro' // FLUX 1.1 Pro
'stability-ai/sdxl' // SDXL
'ideogram-ai/ideogram-v3-turbo' // Ideogram V3
'google-deepmind/imagen-4-fast' // Imagen 4
```
### Specialized Models
```typescript
'fofr/recraft-v3-svg' // Vector SVG output
'stability-ai/stable-diffusion-3.5-large' // SD 3.5
'qwen/qwen-image' // Qwen Image
```
## File Structure
```
apps/mobile/supabase/functions/
├── ARCHITECTURE.md # System design overview
├── DEPLOYMENT_GUIDE.md # Step-by-step deployment
├── QUICK_REFERENCE.md # This file
├── start-generation/
│ ├── index.ts # Entry point function
│ └── README.md
├── process-jobs/
│ ├── index.ts # Background worker
│ └── README.md
├── process-generation/
│ ├── index.ts # Replicate API handler
│ └── README.md
└── generate-image/ # Legacy function (keep for now)
└── index.ts
```
## Monitoring Checklist
Daily:
- [ ] Check queue depth (should be < 10)
- [ ] Check error rate (should be < 5%)
- [ ] Check cron job runs (should run every minute)
- [ ] Check stuck jobs (should be 0)
Weekly:
- [ ] Review failed jobs for patterns
- [ ] Clean up old completed jobs
- [ ] Check Edge Function logs for errors
- [ ] Verify storage bucket size
Monthly:
- [ ] Review performance metrics
- [ ] Optimize queue settings if needed
- [ ] Update models if new versions available
## Key Metrics Targets
- **Queue depth**: < 10 pending jobs
- **Processing time**: < 60 seconds average
- **Error rate**: < 5%
- **Stuck jobs**: 0
- **Throughput**: ~180 jobs/hour (default config)
## Common Error Messages
**"Replicate API token not configured"**
- Fix: `npx supabase secrets set REPLICATE_API_TOKEN=...`
**"No authorization header"**
- Fix: Include `Authorization: Bearer YOUR_ANON_KEY` in request
**"Replicate API error (401)"**
- Fix: Token invalid/expired, generate new token
**"Generation timeout after 10 minutes"**
- Model too slow or Replicate issue
- Check Replicate status page
**"Failed to download generated image"**
- Transient network issue
- Will retry automatically
## Support Links
- [Supabase Edge Functions Docs](https://supabase.com/docs/guides/functions)
- [Replicate API Docs](https://replicate.com/docs)
- [pg_cron GitHub](https://github.com/citusdata/pg_cron)
## Version Info
- Supabase CLI: `supabase --version`
- Node/Deno: Edge Functions run on Deno runtime
- PostgreSQL Extensions: pg_cron, http

View file

@ -1,371 +0,0 @@
# Supabase Edge Functions - Image Generation System
## Overview
This directory contains the **refactored asynchronous image generation system** using a job queue pattern. The system is designed for scalability, reliability, and maintainability.
## What Changed?
### Before (Legacy System)
- Single monolithic 667-line `generate-image` function
- Client waits 30-60 seconds for response (blocking)
- Difficult to scale or add features
- No retry mechanism
- Single point of failure
### After (Queue System)
- 3 focused Edge Functions + job queue
- Client gets instant response (~100ms)
- Jobs processed by background worker
- Automatic retries on failure
- Easy to scale horizontally
- Clean separation of concerns
## System Components
### 1. start-generation
**Purpose**: Accept generation request and return immediately
- Validates user authentication
- Creates generation record
- Enqueues job for background processing
- Returns instantly with generation_id
**[View Code](./start-generation/index.ts)**
---
### 2. process-jobs (Worker)
**Purpose**: Background worker that processes queued jobs
- Triggered by pg_cron every minute
- Claims and processes up to 3 jobs in parallel
- Handles 'generate-image' and 'download-image' jobs
- Manages retries and error handling
**[View Code](./process-jobs/index.ts)** | **[Documentation](./process-jobs/README.md)**
---
### 3. process-generation (Module)
**Purpose**: Handle Replicate API interaction
- Extracted from original 667-line function
- Supports 15+ AI models with model-specific logic
- Handles aspect ratios, img2img, polling
- Can be imported as module or called standalone
**[View Code](./process-generation/index.ts)** | **[Documentation](./process-generation/README.md)**
---
### 4. generate-image (Legacy)
**Status**: Deprecated but kept for backward compatibility
The original monolithic function. Still works but doesn't use the queue system. Will be gradually phased out.
**[View Code](./generate-image/index.ts)**
## Documentation
### 📘 [ARCHITECTURE.md](./ARCHITECTURE.md)
Complete system architecture, database schema, and design decisions.
**Read this to understand:**
- How the system works end-to-end
- Database tables and functions
- Job flow and state transitions
- Performance characteristics
- Monitoring and scaling strategies
### 🚀 [DEPLOYMENT_GUIDE.md](./DEPLOYMENT_GUIDE.md)
Step-by-step deployment instructions.
**Follow this to deploy:**
1. Create database schema
2. Deploy Edge Functions
3. Set up pg_cron job
4. Test the system
5. Monitor and scale
### ⚡ [QUICK_REFERENCE.md](./QUICK_REFERENCE.md)
Quick commands and code snippets for daily use.
**Use this for:**
- Common SQL queries
- Quick fixes
- Client code examples
- Configuration values
- Troubleshooting
## Quick Start
### Prerequisites
```bash
npm install -g supabase
supabase login
supabase link --project-ref YOUR_PROJECT_REF
```
### Deploy
```bash
# From apps/mobile directory
npx supabase functions deploy start-generation
npx supabase functions deploy process-generation
npx supabase functions deploy process-jobs
# Set secrets
npx supabase secrets set REPLICATE_API_TOKEN=your_token_here
```
### Set Up Database
Run SQL from [DEPLOYMENT_GUIDE.md](./DEPLOYMENT_GUIDE.md) to create:
- `job_queue` table
- `enqueue_job()` function
- `claim_next_job()` function
- `complete_job()` function
- pg_cron schedule
### Test
```bash
# Test start-generation
curl -X POST https://YOUR_PROJECT.supabase.co/functions/v1/start-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-H "Content-Type: application/json" \
-d '{"prompt":"test","model_id":"black-forest-labs/flux-schnell","width":1024,"height":1024}'
# Manually trigger worker
curl -X POST https://YOUR_PROJECT.supabase.co/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
## Architecture Diagram
```
┌─────────────┐
│ Client │
└─────┬───────┘
│ POST /start-generation
┌─────────────────────┐
│ start-generation │ ← Returns immediately (~100ms)
│ • Auth check │
│ • Create record │
│ • Enqueue job │
└─────┬───────────────┘
↓ Job inserted
┌─────────────────────┐
│ job_queue table │
└─────┬───────────────┘
↓ pg_cron every minute
┌─────────────────────┐
│ process-jobs │ ← Background worker
│ • Claim 3 jobs │
│ • Process in || │
└───┬─────────┬───────┘
│ │
↓ ↓
┌───────┐ ┌──────────┐
│ Gen │ │ Download │
│ Image │ │ Image │
└───┬───┘ └────┬─────┘
│ │
↓ ↓
┌────────────────────┐
│ process-generation │ ← Replicate API
│ • Model params │
│ • API calls │
│ • Polling │
└────────────────────┘
```
## Key Features
### 🚀 Instant Response
- Client gets response in ~100ms
- No more 30-60 second waits
- Better UX, faster perceived performance
### 🔄 Automatic Retries
- Jobs retry up to 3 times on failure
- Transient errors handled gracefully
- Clear error messages for debugging
### 📈 Scalable
- Process multiple jobs in parallel
- Easy to increase throughput
- Horizontal scaling via pg_cron frequency
### 🛠 Maintainable
- Clean separation of concerns
- Each function has single responsibility
- Well-documented code
- Easy to add new features
### 🔍 Observable
- Comprehensive logging
- Database-backed job history
- Easy to monitor and debug
- Clear status progression
## Supported Models
The system supports 15+ AI models:
**Fast Models (< 5 seconds)**
- FLUX Schnell
- SDXL Lightning
**Quality Models (30-60 seconds)**
- FLUX Dev, FLUX 1.1 Pro
- SDXL, Ideogram V3
- Imagen 4, SD 3.5
**Specialized**
- Recraft V3 (SVG output)
- SeeDream, Qwen Image
All models include:
- Automatic aspect ratio handling
- Model-specific parameter optimization
- Image-to-image support (where available)
## Performance
### Default Configuration
- **Throughput**: ~180 generations/hour
- **Latency**: 30-60 seconds (depends on model)
- **Concurrency**: 3 parallel jobs
- **Reliability**: 95%+ success rate
### Scaled Configuration
With 10 parallel jobs and 30-second intervals:
- **Throughput**: ~1,200 generations/hour
## Monitoring
### Quick Health Check
```sql
-- Check queue
SELECT status, COUNT(*) FROM job_queue GROUP BY status;
-- Check recent generations
SELECT status, COUNT(*) FROM image_generations
WHERE created_at > now() - interval '1 hour'
GROUP BY status;
```
### Key Metrics
- Queue depth (pending jobs)
- Processing time
- Error rate
- Throughput (jobs/hour)
See [QUICK_REFERENCE.md](./QUICK_REFERENCE.md) for more queries.
## Troubleshooting
### Jobs Not Processing
1. Check pg_cron is scheduled: `SELECT * FROM cron.job;`
2. Check function logs in Supabase Dashboard
3. Manually trigger: `curl .../process-jobs`
### High Error Rate
1. Check job errors: `SELECT error_message FROM job_queue WHERE status='failed';`
2. Verify Replicate API token is valid
3. Check Replicate service status
### Stuck Jobs
Reset jobs stuck in processing:
```sql
UPDATE job_queue SET status='pending', attempt_number=0
WHERE status='processing' AND updated_at < now() - interval '15 minutes';
```
## Migration Path
### Current State
- Both legacy and queue systems are running
- New features should use queue system
- Existing code still works with legacy function
### Next Steps
1. Update mobile app to use start-generation
2. Update web app to use start-generation
3. Monitor queue system for 1-2 weeks
4. Deprecate legacy generate-image function
5. Remove legacy code
### Rollback
If issues occur, simply revert clients to use `generate-image` function.
## Development
### Local Testing
```bash
# Start Supabase locally
npx supabase start
# Serve functions
npx supabase functions serve
# Test in another terminal
curl http://localhost:54321/functions/v1/start-generation ...
```
### Adding New Job Types
1. Add handler in `processJob()` function
2. Create processing logic
3. Update documentation
4. Deploy
Example:
```typescript
case 'my-new-job-type':
await processMyNewJobType(job, supabaseAdmin);
break;
```
## Code Statistics
- **start-generation**: ~220 lines
- **process-jobs**: ~500 lines
- **process-generation**: ~565 lines
- **Total new code**: ~1,285 lines
- **Legacy function**: ~667 lines
- **Lines saved**: Cleaner, more maintainable
## Contributing
When making changes:
1. Update relevant README.md
2. Update ARCHITECTURE.md if design changes
3. Test locally first
4. Deploy and verify in production
5. Monitor for 24 hours
## Related Documentation
- [Supabase Edge Functions](https://supabase.com/docs/guides/functions)
- [Replicate API](https://replicate.com/docs)
- [pg_cron](https://github.com/citusdata/pg_cron)
## Support
For issues:
1. Check [QUICK_REFERENCE.md](./QUICK_REFERENCE.md) for common fixes
2. Review function logs in Supabase Dashboard
3. Check job_queue table for error details
4. Review [ARCHITECTURE.md](./ARCHITECTURE.md) for design questions
## License
Part of the Picture monorepo. See root LICENSE file.
---
**Status**: ✅ Production Ready
Last Updated: 2025-10-09

View file

@ -1,668 +0,0 @@
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2.45.0';
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};
Deno.serve(async (req: Request) => {
// Handle CORS preflight
if (req.method === 'OPTIONS') {
return new Response('ok', { headers: corsHeaders });
}
try {
// Get auth token from request
const authHeader = req.headers.get('Authorization');
if (!authHeader) {
throw new Error('No authorization header');
}
// Initialize Supabase client for auth verification (with user context)
const supabase = createClient(
Deno.env.get('SUPABASE_URL') ?? '',
Deno.env.get('SUPABASE_ANON_KEY') ?? '',
{
global: { headers: { Authorization: authHeader } },
}
);
// Verify user is authenticated
const { data: { user }, error: authError } = await supabase.auth.getUser();
if (authError || !user) {
throw new Error('Unauthorized');
}
// Create a service role client for database operations that bypass RLS
const supabaseAdmin = createClient(
Deno.env.get('SUPABASE_URL') ?? '',
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY') ?? '',
{
auth: {
autoRefreshToken: false,
persistSession: false
}
}
);
// Parse request body
const {
prompt,
model_id,
model_version,
width,
height,
num_inference_steps,
guidance_scale,
generation_id,
seed = null,
// Image-to-image specific parameters
source_image_url = null,
strength = null
} = await req.json();
if (!prompt) {
throw new Error('Prompt is required');
}
if (!model_id) {
throw new Error('Model ID is required');
}
// Use provided model information
let finalWidth = width || 1024;
let finalHeight = height || 1024;
const finalSteps = num_inference_steps || 30;
const finalGuidance = guidance_scale || 7.5;
// Update generation record if ID provided
if (generation_id) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'processing'
})
.eq('id', generation_id)
.eq('user_id', user.id);
}
// Get Replicate API token
const REPLICATE_API_TOKEN = Deno.env.get('REPLICATE_API_TOKEN') || Deno.env.get('REPLICATE_API_KEY');
if (!REPLICATE_API_TOKEN) {
throw new Error('Replicate API token not configured');
}
console.log('Using model:', model_id);
console.log('Model version:', model_version);
console.log('Dimensions:', finalWidth, 'x', finalHeight);
// Calculate aspect ratio for models that need it
const aspectRatio = `${finalWidth}:${finalHeight}`;
// Simplify aspect ratio to common formats
let simplifiedRatio = aspectRatio;
const gcd = (a: number, b: number): number => b === 0 ? a : gcd(b, a % b);
const divisor = gcd(finalWidth, finalHeight);
const simplifiedWidth = finalWidth / divisor;
const simplifiedHeight = finalHeight / divisor;
simplifiedRatio = `${simplifiedWidth}:${simplifiedHeight}`;
console.log('Calculated aspect ratio:', simplifiedRatio);
// Handle image-to-image if source image is provided
let sourceImageBase64 = null;
if (source_image_url && strength !== null) {
console.log('Image-to-image mode detected');
console.log('Source image URL:', source_image_url);
console.log('Strength:', strength);
try {
// Download the source image
const imageResponse = await fetch(source_image_url);
if (!imageResponse.ok) {
throw new Error('Failed to fetch source image');
}
// Convert to base64
const imageBuffer = await imageResponse.arrayBuffer();
const base64String = btoa(String.fromCharCode(...new Uint8Array(imageBuffer)));
sourceImageBase64 = `data:${imageResponse.headers.get('content-type') || 'image/jpeg'};base64,${base64String}`;
console.log('Source image converted to base64, length:', sourceImageBase64.length);
} catch (error) {
console.error('Error processing source image:', error);
throw new Error('Failed to process source image for img2img');
}
}
// Prepare input based on model type
let input: any = {};
if (model_id.includes('flux-schnell')) {
// Flux Schnell only supports specific aspect ratios
const supportedRatios = ['1:1', '16:9', '21:9', '3:2', '2:3', '4:5', '5:4', '3:4', '4:3', '9:16', '9:21'];
// Map the simplified ratio to the closest supported ratio
let fluxAspectRatio = simplifiedRatio;
if (!supportedRatios.includes(simplifiedRatio)) {
// Calculate the numeric ratio
const [w, h] = simplifiedRatio.split(':').map(Number);
const targetRatio = w / h;
// Find the closest supported ratio
let closestRatio = '1:1';
let minDiff = Infinity;
for (const ratio of supportedRatios) {
const [rw, rh] = ratio.split(':').map(Number);
const r = rw / rh;
const diff = Math.abs(r - targetRatio);
if (diff < minDiff) {
minDiff = diff;
closestRatio = ratio;
}
}
fluxAspectRatio = closestRatio;
console.log(`Mapped ${simplifiedRatio} to closest supported ratio: ${fluxAspectRatio}`);
}
// Calculate actual dimensions based on the selected aspect ratio
// Flux Schnell typically generates at 1024px on the shorter side
const [aspectW, aspectH] = fluxAspectRatio.split(':').map(Number);
if (aspectW > aspectH) {
// Landscape: height is shorter
finalHeight = 1024;
finalWidth = Math.round((finalHeight * aspectW) / aspectH);
} else if (aspectW < aspectH) {
// Portrait: width is shorter
finalWidth = 1024;
finalHeight = Math.round((finalWidth * aspectH) / aspectW);
} else {
// Square
finalWidth = 1024;
finalHeight = 1024;
}
console.log(`Final dimensions for Flux Schnell: ${finalWidth}x${finalHeight}`);
input = {
prompt,
num_inference_steps: finalSteps,
guidance: finalGuidance,
num_outputs: 1,
aspect_ratio: fluxAspectRatio,
output_format: 'webp',
output_quality: 90,
};
} else if (model_id.includes('flux-krea-dev') || model_id.includes('flux-dev')) {
input = {
prompt,
num_inference_steps: finalSteps,
guidance_scale: finalGuidance,
num_outputs: 1,
width: finalWidth,
height: finalHeight,
output_format: 'webp',
output_quality: 90,
};
// Add image-to-image parameters if provided
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.prompt_strength = 1 - strength; // Flux uses prompt_strength which is inverse of strength
console.log('Added img2img params for Flux Dev, prompt_strength:', input.prompt_strength);
}
} else if (model_id.includes('ideogram-v3-turbo') || model_id.includes('ideogram')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
model: 'turbo',
style_type: 'auto',
};
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('imagen-4-fast') || model_id.includes('imagen')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
safety_tolerance: 2,
output_format: 'png',
};
} else if (model_id.includes('sdxl-lightning')) {
// SDXL Lightning has specific parameters
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: 4, // Always 4 steps for Lightning
guidance_scale: 0, // No guidance for Lightning
disable_safety_checker: false,
output_format: 'webp',
output_quality: 90,
};
// Add image-to-image parameters if provided
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.strength = strength;
console.log('Added img2img params for SDXL Lightning, strength:', input.strength);
}
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('sdxl')) {
// Regular SDXL
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: finalSteps,
guidance_scale: finalGuidance,
refine: 'expert_ensemble_refiner',
high_noise_frac: 0.8,
output_format: 'webp',
output_quality: 90,
};
// Add image-to-image parameters if provided
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.prompt_strength = strength;
console.log('Added img2img params for SDXL, prompt_strength:', input.prompt_strength);
}
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('seedream-4')) {
// SeeDream 4 has different parameters
let sizePreset = '2K';
if (finalWidth >= 4096 || finalHeight >= 4096) {
sizePreset = '4K';
} else if (finalWidth <= 1024 && finalHeight <= 1024) {
sizePreset = '1K';
}
input = {
prompt,
size: sizePreset,
width: finalWidth,
height: finalHeight,
max_images: 1,
aspect_ratio: simplifiedRatio,
};
// Add image-to-image parameters if provided
if (sourceImageBase64 && strength !== null) {
input.image_input = [sourceImageBase64];
console.log('Added img2img params for SeeDream 4');
}
} else if (model_id.includes('seedream-3') || model_id.includes('seedream')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: finalSteps,
guidance_scale: finalGuidance,
};
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('flux-1.1-pro')) {
// Flux 1.1 Pro uses aspect_ratio
input = {
prompt,
aspect_ratio: simplifiedRatio,
output_format: 'webp',
output_quality: 90,
safety_tolerance: 2,
};
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('recraft-v3-svg')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
output_format: 'svg',
style: 'vector_illustration',
};
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('recraft-v3') || model_id.includes('recraft')) {
// Recraft V3 (non-SVG) uses size parameter
input = {
prompt,
size: `${finalWidth}x${finalHeight}`,
style: 'realistic_image',
};
} else if (model_id.includes('stable-diffusion-3.5') || model_id.includes('sd-3-5')) {
// SD 3.5 Large
input = {
prompt,
aspect_ratio: simplifiedRatio,
cfg: finalGuidance,
steps: finalSteps,
output_format: 'webp',
output_quality: 90,
};
if (seed) {
input.seed = seed;
}
} else if (model_id.includes('qwen-image') || model_id.includes('qwen')) {
// Qwen Image has specific parameter requirements
input = {
prompt,
aspect_ratio: simplifiedRatio,
num_inference_steps: finalSteps,
guidance: finalGuidance,
go_fast: true,
image_size: 'optimize_for_quality',
output_format: 'webp',
output_quality: 90,
enhance_prompt: false,
disable_safety_checker: false
};
if (seed) {
input.seed = seed;
}
} else {
// Default/fallback input structure
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: finalSteps,
guidance_scale: finalGuidance,
};
if (seed) {
input.seed = seed;
}
}
console.log('Calling Replicate API with input:', JSON.stringify(input, null, 2));
// Prepare Replicate API request body
// For official models without version, use model ID format (owner/name)
// For models with version, use version hash
const requestBody: any = { input };
if (model_version) {
// Use version hash if available
requestBody.version = model_version;
console.log('Using version hash:', model_version);
} else {
// Use model ID for official models without version
requestBody.model = model_id;
console.log('Using model ID (official model):', model_id);
}
// Call Replicate API
const replicateResponse = await fetch('https://api.replicate.com/v1/predictions', {
method: 'POST',
headers: {
'Authorization': `Token ${REPLICATE_API_TOKEN}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody)
});
if (!replicateResponse.ok) {
const errorText = await replicateResponse.text();
console.error('Replicate API error:', errorText);
console.error('Response status:', replicateResponse.status);
// Update generation with error if ID provided
if (generation_id) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: `Replicate API error: ${errorText}`,
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
}
throw new Error(`Replicate API error (${replicateResponse.status}): ${errorText}`);
}
const prediction = await replicateResponse.json();
console.log('Prediction created:', prediction.id, 'Status:', prediction.status);
// Update generation with prediction ID
if (generation_id) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'processing',
replicate_prediction_id: prediction.id,
error_message: null
})
.eq('id', generation_id);
}
// Start polling for completion
const startTime = Date.now();
let attempts = 0;
const maxAttempts = 120; // 10 minutes max for slower models
while (attempts < maxAttempts) {
await new Promise(resolve => setTimeout(resolve, 2000)); // Poll every 2 seconds
attempts++;
// Get prediction status
const statusResponse = await fetch(`https://api.replicate.com/v1/predictions/${prediction.id}`, {
headers: {
'Authorization': `Token ${REPLICATE_API_TOKEN}`,
},
});
if (!statusResponse.ok) {
console.error('Failed to get prediction status');
continue;
}
const result = await statusResponse.json();
console.log(`Poll ${attempts}: ${result.status}`);
if (result.status === 'succeeded' && result.output) {
// Get the output URL - different models return in different formats
let outputUrl;
if (Array.isArray(result.output)) {
outputUrl = result.output[0];
} else if (typeof result.output === 'string') {
outputUrl = result.output;
} else if (result.output.url) {
outputUrl = result.output.url;
} else {
console.error('Unexpected output format:', result.output);
throw new Error('Unexpected output format from model');
}
console.log('Output URL received:', outputUrl);
// Determine file format
let format = 'webp';
let contentType = 'image/webp';
if (model_id.includes('recraft-v3-svg')) {
format = 'svg';
contentType = 'image/svg+xml';
} else if (model_id.includes('imagen-4')) {
format = 'png';
contentType = 'image/png';
} else if (outputUrl.includes('.png')) {
format = 'png';
contentType = 'image/png';
} else if (outputUrl.includes('.jpg') || outputUrl.includes('.jpeg')) {
format = 'jpeg';
contentType = 'image/jpeg';
}
// Download the generated content
const contentResponse = await fetch(outputUrl);
if (!contentResponse.ok) {
throw new Error('Failed to download generated content from Replicate');
}
const contentBlob = await contentResponse.blob();
const arrayBuffer = await contentBlob.arrayBuffer();
const uint8Array = new Uint8Array(arrayBuffer);
// Generate filename
const filename = `${generation_id || Date.now()}.${format}`;
const storagePath = `${user.id}/${filename}`;
console.log('Uploading to storage:', storagePath);
// Upload to Supabase Storage (using admin client to bypass RLS)
const { error: uploadError } = await supabaseAdmin
.storage
.from('generated-images')
.upload(storagePath, uint8Array, {
contentType,
upsert: true
});
if (uploadError) {
console.error('Upload error:', uploadError);
throw uploadError;
}
// Get public URL
const { data: { publicUrl } } = supabaseAdmin
.storage
.from('generated-images')
.getPublicUrl(storagePath);
console.log('Public URL:', publicUrl);
// Save image record if generation_id provided (using admin client to bypass RLS)
if (generation_id) {
const { data: imageData, error: imageError } = await supabaseAdmin
.from('images')
.insert({
generation_id: generation_id,
user_id: user.id,
filename,
storage_path: storagePath,
public_url: publicUrl,
file_size: uint8Array.length,
width: finalWidth,
height: finalHeight,
format,
prompt: prompt,
negative_prompt: null,
model: model_id.split('/').pop()
})
.select()
.single();
if (imageError) {
console.error('Image record error:', imageError);
throw imageError;
}
// Update generation status
const generationTime = Math.floor((Date.now() - startTime) / 1000);
await supabaseAdmin
.from('image_generations')
.update({
status: 'completed',
completed_at: new Date().toISOString(),
generation_time_seconds: generationTime
})
.eq('id', generation_id);
console.log('Generation completed successfully');
return new Response(
JSON.stringify({
success: true,
image: imageData,
generation_time: generationTime,
}),
{
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
} else {
// Return without saving to database
return new Response(
JSON.stringify({
success: true,
url: publicUrl,
format,
generation_time: Math.floor((Date.now() - startTime) / 1000),
}),
{
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
} else if (result.status === 'failed' || result.status === 'canceled') {
const errorMsg = result.error || `Generation ${result.status}`;
console.error('Generation failed:', errorMsg);
// Update generation with error if ID provided
if (generation_id) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: errorMsg,
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
}
throw new Error(errorMsg);
}
}
// Timeout
if (generation_id) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: 'Generation timeout after 10 minutes',
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
}
throw new Error('Generation timeout after 10 minutes');
} catch (error) {
console.error('Error:', error.message);
console.error('Stack:', error.stack);
return new Response(
JSON.stringify({
error: error.message || 'Internal server error'
}),
{
status: 400,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
});

View file

@ -1,208 +0,0 @@
# Process Generation Edge Function
## Overview
This Edge Function contains the core Replicate API integration logic extracted from the original 667-line `generate-image` function. It can be imported as a module or called standalone for testing.
## Purpose
- Handle actual Replicate API interaction for image generation
- Support 15+ different AI models with model-specific parameter handling
- Calculate aspect ratios and dimensions for each model
- Handle image-to-image (img2img) conversion
- Poll Replicate API until generation completes
- Return result URL when ready
## Supported Models
### FLUX Models
- **FLUX Schnell**: Fast generation with aspect ratio constraints
- **FLUX Dev**: Full control with img2img support
- **FLUX Krea Dev**: Enhanced version with img2img
- **FLUX 1.1 Pro**: Latest version with aspect ratio
### SDXL Models
- **SDXL**: Full parameters with refiner and img2img
- **SDXL Lightning**: Ultra-fast 4-step generation with img2img
### Other Models
- **Ideogram V3 Turbo**: Aspect ratio based
- **Imagen 4 Fast**: Google's model with aspect ratio
- **Stable Diffusion 3.5**: Latest SD with aspect ratio
- **SeeDream 3/4**: Advanced models with preset sizes
- **Recraft V3**: Both raster and SVG output
- **Qwen Image**: Specialized parameters
## Usage
### As a Module (Recommended)
```typescript
import { processGeneration } from '../process-generation/index.ts';
const result = await processGeneration(
{
prompt: "A beautiful sunset over mountains",
model_id: "black-forest-labs/flux-schnell",
width: 1024,
height: 1024,
num_inference_steps: 30,
guidance_scale: 7.5,
},
replicateApiToken
);
if (result.success) {
console.log('Output URL:', result.output_url);
console.log('Format:', result.format);
} else {
console.error('Error:', result.error);
}
```
### As Standalone Function (Testing)
```bash
curl -X POST https://your-project.supabase.co/functions/v1/process-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "A beautiful sunset",
"model_id": "black-forest-labs/flux-schnell",
"width": 1024,
"height": 1024,
"num_inference_steps": 30,
"guidance_scale": 7.5
}'
```
## Parameters
### Required
- `prompt`: Text description of desired image
- `model_id`: Replicate model ID (e.g., "black-forest-labs/flux-schnell")
- `width`: Image width in pixels
- `height`: Image height in pixels
- `num_inference_steps`: Number of denoising steps
- `guidance_scale`: How closely to follow prompt
### Optional
- `negative_prompt`: What to avoid in image
- `model_version`: Specific model version hash
- `seed`: Random seed for reproducibility
- `source_image_url`: Source image for img2img
- `strength`: Transformation strength (0-1) for img2img
## Return Value
```typescript
interface GenerationResult {
success: boolean;
output_url?: string; // URL to generated image
format?: string; // Image format (webp, png, jpeg, svg)
width?: number; // Final image width
height?: number; // Final image height
error?: string; // Error message if failed
generation_time_seconds?: number; // Time taken
}
```
## Model-Specific Logic
### Aspect Ratio Handling
- **FLUX Schnell**: Only supports specific ratios (1:1, 16:9, etc.)
- Automatically maps requested ratio to closest supported
- Adjusts dimensions to maintain ratio
- **Ideogram/Imagen**: Use simplified aspect ratio string
- **SDXL/Others**: Use exact width/height
### Image-to-Image Support
Models with img2img:
- FLUX Dev/Krea Dev
- SDXL and SDXL Lightning
- SeeDream 4
The function automatically:
1. Downloads source image
2. Converts to base64 data URI
3. Adds appropriate parameters for each model
### Output Formats
- **Default**: WebP for efficiency
- **Imagen 4**: PNG
- **Recraft SVG**: Vector SVG format
- **Auto-detected**: From URL extension
## Architecture
### Main Function
`processGeneration(params, apiToken)` - Main entry point
### Helper Functions
- `simplifyAspectRatio()` - Calculate simplified ratio (e.g., 16:9)
- `convertImageToBase64()` - Convert URL to data URI for img2img
- `buildModelInput()` - Create model-specific input parameters
- `determineOutputFormat()` - Detect output format from URL/model
## Error Handling
- Validates required parameters
- Handles API errors with detailed messages
- Retries polling on transient failures
- Timeout after 10 minutes (120 polls × 2 seconds)
- Returns structured error in result object
## Environment Variables
Required:
- `REPLICATE_API_TOKEN` or `REPLICATE_API_KEY`: Replicate API token
## Development
### Local Testing
```bash
# Serve locally
npx supabase functions serve process-generation
# Test with curl
curl -X POST http://localhost:54321/functions/v1/process-generation \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-d '{"prompt":"test","model_id":"black-forest-labs/flux-schnell",...}'
```
### Deploy
```bash
npx supabase functions deploy process-generation
```
## Integration with Job Queue
This function is called by `process-jobs` worker for 'generate-image' jobs:
```typescript
const result = await processGeneration(job.payload, apiToken);
if (result.success) {
// Enqueue download-image job
await enqueueJob('download-image', {
output_url: result.output_url,
...
});
}
```
## Performance Notes
- Polls every 2 seconds (not resource-intensive)
- Max 10 minute timeout per generation
- Supports concurrent generations when imported
- Image-to-image conversion happens once, then cached in memory
## Future Enhancements
- [ ] Add caching for model configurations
- [ ] Support batch generation (multiple images)
- [ ] Add progress callbacks for long generations
- [ ] Implement retry logic with exponential backoff
- [ ] Add telemetry/metrics collection

View file

@ -1,77 +0,0 @@
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { processGeneration, type GenerationParams, type GenerationResult } from './lib.ts';
/**
* PROCESS GENERATION EDGE FUNCTION
*
* Standalone Edge Function wrapper for processGeneration library.
* Useful for testing and direct invocation.
*
* The actual generation logic lives in lib.ts so it can be safely
* imported by other functions (like process-jobs) without causing
* Deno.serve() conflicts.
*/
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};
Deno.serve(async (req: Request) => {
// Handle CORS preflight
if (req.method === 'OPTIONS') {
return new Response('ok', { headers: corsHeaders });
}
try {
console.log('=== PROCESS GENERATION EDGE FUNCTION INVOKED ===');
// Get Replicate API token
const replicateApiToken = Deno.env.get('REPLICATE_API_TOKEN') || Deno.env.get('REPLICATE_API_KEY');
if (!replicateApiToken) {
throw new Error('Replicate API token not configured');
}
// Parse request body
const params: GenerationParams = await req.json();
console.log('Received generation request:', {
model: params.model_id,
prompt: params.prompt.substring(0, 50) + '...',
dimensions: `${params.width}x${params.height}`
});
// Call generation library
const result: GenerationResult = await processGeneration(params, replicateApiToken);
// Return response
return new Response(
JSON.stringify(result),
{
status: result.success ? 200 : 400,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
} catch (error: any) {
console.error('Error in process-generation handler:', error.message);
console.error('Stack:', error.stack);
return new Response(
JSON.stringify({
success: false,
error: error.message || 'Internal server error'
}),
{
status: 500,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
});

View file

@ -1,507 +0,0 @@
/**
* PROCESS GENERATION LIBRARY
*
* Pure functions for image generation via Replicate API.
* This module contains NO Deno.serve() so it can be safely imported
* by other Edge Functions.
*
* Can be imported by:
* - process-generation/index.ts (Edge Function wrapper)
* - process-jobs/index.ts (Worker function)
* - Any other function that needs to generate images
*/
// Supported model types and their configurations
interface ModelConfig {
id: string;
version?: string;
supportsImg2Img: boolean;
supportsAspectRatio: boolean;
supportsDimensions: boolean;
}
export interface GenerationParams {
prompt: string;
negative_prompt?: string | null;
model_id: string;
model_version?: string | null;
width: number;
height: number;
num_inference_steps: number;
guidance_scale: number;
seed?: number | null;
source_image_url?: string | null;
strength?: number | null;
}
export interface GenerationResult {
success: boolean;
output_url?: string;
format?: string;
width?: number;
height?: number;
error?: string;
generation_time_seconds?: number;
}
/**
* Calculate greatest common divisor for aspect ratio simplification
*/
function gcd(a: number, b: number): number {
return b === 0 ? a : gcd(b, a % b);
}
/**
* Simplify aspect ratio to smallest whole numbers (e.g., 1920:1080 -> 16:9)
*/
function simplifyAspectRatio(width: number, height: number): string {
const divisor = gcd(width, height);
const simplifiedWidth = width / divisor;
const simplifiedHeight = height / divisor;
return `${simplifiedWidth}:${simplifiedHeight}`;
}
/**
* Convert image URL to base64 data URI for img2img
*/
async function convertImageToBase64(imageUrl: string): Promise<string> {
console.log('Converting image to base64:', imageUrl);
const imageResponse = await fetch(imageUrl);
if (!imageResponse.ok) {
throw new Error('Failed to fetch source image');
}
const imageBuffer = await imageResponse.arrayBuffer();
const base64String = btoa(String.fromCharCode(...new Uint8Array(imageBuffer)));
const contentType = imageResponse.headers.get('content-type') || 'image/jpeg';
const dataUri = `data:${contentType};base64,${base64String}`;
console.log('Image converted to base64, length:', dataUri.length);
return dataUri;
}
/**
* Build model-specific input parameters for Replicate API
*/
function buildModelInput(params: GenerationParams, sourceImageBase64?: string | null): any {
const {
prompt,
model_id,
width,
height,
num_inference_steps,
guidance_scale,
seed,
strength
} = params;
let finalWidth = width;
let finalHeight = height;
const simplifiedRatio = simplifyAspectRatio(width, height);
console.log('Building input for model:', model_id);
console.log('Dimensions:', finalWidth, 'x', finalHeight);
console.log('Aspect ratio:', simplifiedRatio);
let input: any = {};
// FLUX Schnell - Uses aspect_ratio with specific supported ratios
if (model_id.includes('flux-schnell')) {
const supportedRatios = ['1:1', '16:9', '21:9', '3:2', '2:3', '4:5', '5:4', '3:4', '4:3', '9:16', '9:21'];
// Find closest supported ratio
let fluxAspectRatio = simplifiedRatio;
if (!supportedRatios.includes(simplifiedRatio)) {
const [w, h] = simplifiedRatio.split(':').map(Number);
const targetRatio = w / h;
let closestRatio = '1:1';
let minDiff = Infinity;
for (const ratio of supportedRatios) {
const [rw, rh] = ratio.split(':').map(Number);
const r = rw / rh;
const diff = Math.abs(r - targetRatio);
if (diff < minDiff) {
minDiff = diff;
closestRatio = ratio;
}
}
fluxAspectRatio = closestRatio;
console.log(`Mapped ${simplifiedRatio} to closest supported ratio: ${fluxAspectRatio}`);
}
// Calculate actual dimensions (Flux Schnell uses 1024px on shorter side)
const [aspectW, aspectH] = fluxAspectRatio.split(':').map(Number);
if (aspectW > aspectH) {
finalHeight = 1024;
finalWidth = Math.round((finalHeight * aspectW) / aspectH);
} else if (aspectW < aspectH) {
finalWidth = 1024;
finalHeight = Math.round((finalWidth * aspectH) / aspectW);
} else {
finalWidth = 1024;
finalHeight = 1024;
}
console.log(`Final dimensions for Flux Schnell: ${finalWidth}x${finalHeight}`);
input = {
prompt,
num_inference_steps,
guidance: guidance_scale,
num_outputs: 1,
aspect_ratio: fluxAspectRatio,
output_format: 'webp',
output_quality: 90,
};
}
// FLUX Dev / FLUX Krea Dev - Supports dimensions and img2img
else if (model_id.includes('flux-krea-dev') || model_id.includes('flux-dev')) {
input = {
prompt,
num_inference_steps,
guidance_scale,
num_outputs: 1,
width: finalWidth,
height: finalHeight,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.prompt_strength = 1 - strength; // Flux uses inverse
console.log('Added img2img params for Flux Dev, prompt_strength:', input.prompt_strength);
}
}
// Ideogram V3 Turbo - Uses aspect_ratio
else if (model_id.includes('ideogram-v3-turbo') || model_id.includes('ideogram')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
model: 'turbo',
style_type: 'auto',
};
if (seed) input.seed = seed;
}
// Imagen 4 Fast - Uses aspect_ratio
else if (model_id.includes('imagen-4-fast') || model_id.includes('imagen')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
safety_tolerance: 2,
output_format: 'png',
};
}
// SDXL Lightning - 4 steps, no guidance, supports img2img
else if (model_id.includes('sdxl-lightning')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps: 4, // Always 4 for Lightning
guidance_scale: 0, // No guidance for Lightning
disable_safety_checker: false,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.strength = strength;
console.log('Added img2img params for SDXL Lightning, strength:', input.strength);
}
if (seed) input.seed = seed;
}
// Regular SDXL - Full parameters, supports img2img
else if (model_id.includes('sdxl')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps,
guidance_scale,
refine: 'expert_ensemble_refiner',
high_noise_frac: 0.8,
output_format: 'webp',
output_quality: 90,
};
if (sourceImageBase64 && strength !== null) {
input.image = sourceImageBase64;
input.prompt_strength = strength;
console.log('Added img2img params for SDXL, prompt_strength:', input.prompt_strength);
}
if (seed) input.seed = seed;
}
// SeeDream 4 - Uses size preset and aspect_ratio
else if (model_id.includes('seedream-4')) {
let sizePreset = '2K';
if (finalWidth >= 4096 || finalHeight >= 4096) {
sizePreset = '4K';
} else if (finalWidth <= 1024 && finalHeight <= 1024) {
sizePreset = '1K';
}
input = {
prompt,
size: sizePreset,
width: finalWidth,
height: finalHeight,
max_images: 1,
aspect_ratio: simplifiedRatio,
};
if (sourceImageBase64 && strength !== null) {
input.image_input = [sourceImageBase64];
console.log('Added img2img params for SeeDream 4');
}
}
// SeeDream 3 - Standard dimensions
else if (model_id.includes('seedream-3') || model_id.includes('seedream')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps,
guidance_scale,
};
if (seed) input.seed = seed;
}
// FLUX 1.1 Pro - Uses aspect_ratio
else if (model_id.includes('flux-1.1-pro')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
output_format: 'webp',
output_quality: 90,
safety_tolerance: 2,
};
if (seed) input.seed = seed;
}
// Recraft V3 SVG - Vector output
else if (model_id.includes('recraft-v3-svg')) {
input = {
prompt,
width: finalWidth,
height: finalHeight,
output_format: 'svg',
style: 'vector_illustration',
};
if (seed) input.seed = seed;
}
// Recraft V3 - Uses size parameter
else if (model_id.includes('recraft-v3') || model_id.includes('recraft')) {
input = {
prompt,
size: `${finalWidth}x${finalHeight}`,
style: 'realistic_image',
};
}
// Stable Diffusion 3.5 Large
else if (model_id.includes('stable-diffusion-3.5') || model_id.includes('sd-3-5')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
cfg: guidance_scale,
steps: num_inference_steps,
output_format: 'webp',
output_quality: 90,
};
if (seed) input.seed = seed;
}
// Qwen Image - Specific parameter requirements
else if (model_id.includes('qwen-image') || model_id.includes('qwen')) {
input = {
prompt,
aspect_ratio: simplifiedRatio,
num_inference_steps,
guidance: guidance_scale,
go_fast: true,
image_size: 'optimize_for_quality',
output_format: 'webp',
output_quality: 90,
enhance_prompt: false,
disable_safety_checker: false
};
if (seed) input.seed = seed;
}
// Default/fallback for unknown models
else {
input = {
prompt,
width: finalWidth,
height: finalHeight,
num_inference_steps,
guidance_scale,
};
if (seed) input.seed = seed;
}
return { input, finalWidth, finalHeight };
}
/**
* Determine output format from model ID and output URL
*/
function determineOutputFormat(modelId: string, outputUrl: string): { format: string; contentType: string } {
if (modelId.includes('recraft-v3-svg')) {
return { format: 'svg', contentType: 'image/svg+xml' };
}
if (modelId.includes('imagen-4')) {
return { format: 'png', contentType: 'image/png' };
}
if (outputUrl.includes('.png')) {
return { format: 'png', contentType: 'image/png' };
}
if (outputUrl.includes('.jpg') || outputUrl.includes('.jpeg')) {
return { format: 'jpeg', contentType: 'image/jpeg' };
}
// Default to webp
return { format: 'webp', contentType: 'image/webp' };
}
/**
* Main function: Process image generation via Replicate API
*
* @param params - Generation parameters
* @param replicateApiToken - Replicate API token
* @returns Generation result with output URL or error
*/
export async function processGeneration(
params: GenerationParams,
replicateApiToken: string
): Promise<GenerationResult> {
const startTime = Date.now();
try {
console.log('=== PROCESS GENERATION START ===');
console.log('Model:', params.model_id);
console.log('Prompt:', params.prompt.substring(0, 100) + '...');
// Handle image-to-image conversion if needed
let sourceImageBase64: string | null = null;
if (params.source_image_url && params.strength !== null) {
console.log('Image-to-image mode detected');
sourceImageBase64 = await convertImageToBase64(params.source_image_url);
}
// Build model-specific input
const { input, finalWidth, finalHeight } = buildModelInput(params, sourceImageBase64);
console.log('Replicate API input:', JSON.stringify(input, null, 2));
// Prepare Replicate API request
const requestBody: any = { input };
if (params.model_version) {
requestBody.version = params.model_version;
console.log('Using version hash:', params.model_version);
} else {
requestBody.model = params.model_id;
console.log('Using model ID (official model):', params.model_id);
}
// Call Replicate API to start prediction
console.log('Calling Replicate API...');
const replicateResponse = await fetch('https://api.replicate.com/v1/predictions', {
method: 'POST',
headers: {
'Authorization': `Token ${replicateApiToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody)
});
if (!replicateResponse.ok) {
const errorText = await replicateResponse.text();
console.error('Replicate API error:', errorText);
console.error('Response status:', replicateResponse.status);
throw new Error(`Replicate API error (${replicateResponse.status}): ${errorText}`);
}
const prediction = await replicateResponse.json();
console.log('Prediction created:', prediction.id, 'Status:', prediction.status);
// Poll for completion
const maxAttempts = 120; // 10 minutes max (5 second intervals)
let attempts = 0;
while (attempts < maxAttempts) {
await new Promise(resolve => setTimeout(resolve, 5000)); // Poll every 5 seconds
attempts++;
const statusResponse = await fetch(`https://api.replicate.com/v1/predictions/${prediction.id}`, {
headers: {
'Authorization': `Token ${replicateApiToken}`,
},
});
if (!statusResponse.ok) {
console.error('Failed to get prediction status');
continue; // Retry
}
const result = await statusResponse.json();
console.log(`Poll ${attempts}: ${result.status}`);
// Success - Extract output URL
if (result.status === 'succeeded' && result.output) {
let outputUrl: string;
// Different models return output in different formats
if (Array.isArray(result.output)) {
outputUrl = result.output[0];
} else if (typeof result.output === 'string') {
outputUrl = result.output;
} else if (result.output.url) {
outputUrl = result.output.url;
} else {
console.error('Unexpected output format:', result.output);
throw new Error('Unexpected output format from model');
}
console.log('Generation succeeded! Output URL:', outputUrl);
const { format, contentType } = determineOutputFormat(params.model_id, outputUrl);
const generationTime = Math.floor((Date.now() - startTime) / 1000);
console.log('=== PROCESS GENERATION COMPLETE ===');
console.log('Time taken:', generationTime, 'seconds');
return {
success: true,
output_url: outputUrl,
format,
width: finalWidth,
height: finalHeight,
generation_time_seconds: generationTime
};
}
// Failed or canceled
if (result.status === 'failed' || result.status === 'canceled') {
const errorMsg = result.error || `Generation ${result.status}`;
console.error('Generation failed:', errorMsg);
throw new Error(errorMsg);
}
}
// Timeout after max attempts
throw new Error('Generation timeout after 10 minutes');
} catch (error: any) {
console.error('Error in processGeneration:', error.message);
console.error('Stack:', error.stack);
return {
success: false,
error: error.message || 'Unknown error during generation'
};
}
}

View file

@ -1,140 +0,0 @@
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { processGeneration } from '../process-generation/index.ts';
/**
* TEST VERSION 2 - Testing with import
*
* This version tests if the import of process-generation causes the issue
*/
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};
Deno.serve(async (req: Request) => {
console.log('=== STEP 1: Function invoked ===');
console.log('Method:', req.method);
console.log('URL:', req.url);
// Handle CORS preflight
if (req.method === 'OPTIONS') {
console.log('CORS preflight request');
return new Response('ok', { headers: corsHeaders });
}
try {
console.log('=== STEP 2: Getting environment variables ===');
const supabaseUrl = Deno.env.get('SUPABASE_URL');
const supabaseServiceRoleKey = Deno.env.get('SUPABASE_SERVICE_ROLE_KEY');
console.log('SUPABASE_URL exists:', !!supabaseUrl);
console.log('SUPABASE_URL length:', supabaseUrl?.length || 0);
console.log('SUPABASE_SERVICE_ROLE_KEY exists:', !!supabaseServiceRoleKey);
console.log('SUPABASE_SERVICE_ROLE_KEY length:', supabaseServiceRoleKey?.length || 0);
if (!supabaseUrl) {
throw new Error('SUPABASE_URL is not set');
}
if (!supabaseServiceRoleKey) {
throw new Error('SUPABASE_SERVICE_ROLE_KEY is not set');
}
console.log('=== STEP 3: Importing createClient ===');
// Import here to see if this causes issues
const { createClient } = await import('https://esm.sh/@supabase/supabase-js@2.45.0');
console.log('createClient imported successfully');
console.log('=== STEP 4: Creating Supabase client ===');
const supabaseAdmin = createClient(
supabaseUrl,
supabaseServiceRoleKey,
{
auth: {
autoRefreshToken: false,
persistSession: false
}
}
);
console.log('Supabase client created successfully');
console.log('=== STEP 5: Testing RPC call ===');
// Test a simple query first
const { data: testData, error: testError } = await supabaseAdmin
.from('job_queue')
.select('count')
.limit(1);
console.log('Test query result:', { testData, testError });
console.log('=== STEP 6: Calling claim_next_job ===');
const { data: jobs, error: claimError } = await supabaseAdmin.rpc('claim_next_job');
console.log('claim_next_job result:', {
jobs: jobs ? `${jobs.length} jobs` : 'null',
error: claimError
});
if (claimError) {
console.error('Error details:', JSON.stringify(claimError, null, 2));
throw new Error(`claim_next_job failed: ${claimError.message}`);
}
console.log('=== STEP 7: Testing processGeneration import ===');
console.log('processGeneration exists:', typeof processGeneration);
console.log('=== STEP 8: Success ===');
return new Response(
JSON.stringify({
success: true,
message: 'Debug test completed successfully (with import)',
jobs_found: jobs?.length || 0,
debug: {
supabaseUrl: supabaseUrl.substring(0, 20) + '...',
hasServiceRoleKey: !!supabaseServiceRoleKey,
testQueryWorked: !testError,
claimJobWorked: !claimError,
processGenerationImported: typeof processGeneration
}
}),
{
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
} catch (error: any) {
console.error('=== ERROR ===');
console.error('Error name:', error.name);
console.error('Error message:', error.message);
console.error('Error stack:', error.stack);
return new Response(
JSON.stringify({
error: error.message || 'Internal server error',
errorName: error.name,
errorStack: error.stack,
success: false
}),
{
status: 500,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
});

View file

@ -1,412 +0,0 @@
# Process Jobs Worker Edge Function
## Overview
Background worker that processes queued jobs from the `job_queue` table. This is the heart of the asynchronous image generation system.
## Purpose
- Claim and process jobs from the queue
- Handle multiple job types (generate-image, download-image)
- Process jobs in parallel for better throughput
- Update job status and handle retries
- Enqueue follow-up jobs as needed
## Architecture
```
pg_cron (every minute)
process-jobs worker
claims 3 jobs in parallel
┌─────────────┬─────────────┬─────────────┐
│ Job 1 │ Job 2 │ Job 3 │
└─────────────┴─────────────┴─────────────┘
↓ ↓ ↓
generate-image download-image generate-image
↓ ↓ ↓
Replicate API Storage Upload Replicate API
```
## Configuration
```typescript
const MAX_PARALLEL_JOBS = 3; // Jobs processed concurrently
const JOB_TIMEOUT_MS = 600000; // 10 minutes per job
```
Adjust these based on:
- Server capacity
- Replicate API rate limits
- Storage bandwidth
- Average generation time
## Job Types
### 1. generate-image
Generates an image using Replicate API.
**Payload:**
```typescript
{
generation_id: string;
user_id: string;
prompt: string;
negative_prompt?: string;
model_id: string;
model_version?: string;
width: number;
height: number;
num_inference_steps: number;
guidance_scale: number;
seed?: number;
source_image_url?: string; // For img2img
strength?: number; // For img2img
}
```
**Process:**
1. Update generation status to 'processing'
2. Call `processGeneration()` from process-generation function
3. Wait for Replicate completion (polls every 2 seconds)
4. Enqueue 'download-image' job with result URL
5. Update generation status to 'downloading'
6. Complete job
**On Error:**
- Update generation to 'failed'
- Complete job with error (will retry if attempts remain)
### 2. download-image
Downloads generated image and stores it in Supabase Storage.
**Payload:**
```typescript
{
generation_id: string;
user_id: string;
output_url: string;
format: string;
width: number;
height: number;
prompt: string;
negative_prompt?: string;
model_id: string;
}
```
**Process:**
1. Download image from Replicate URL
2. Upload to Supabase Storage (bucket: generated-images)
3. Create record in 'images' table
4. Update generation status to 'completed'
5. Complete job
**On Error:**
- Retry if attempts remain
- If final attempt fails, mark generation as 'failed'
## How It Works
### 1. Invocation
The function can be triggered in multiple ways:
**Via pg_cron (Production):**
```sql
-- Runs every minute
SELECT cron.schedule(
'process-jobs-worker',
'* * * * *', -- Every minute
$$
SELECT net.http_post(
'https://your-project.supabase.co/functions/v1/process-jobs',
'{}',
'{"Content-Type": "application/json"}'::jsonb
)
$$
);
```
**Manually (Testing):**
```bash
curl -X POST https://your-project.supabase.co/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
### 2. Job Claiming
Uses `claim_next_job()` database function:
- Atomically claims next pending job
- Updates status to 'processing'
- Increments attempt number
- Returns job or NULL if queue empty
### 3. Parallel Processing
```typescript
// Claim and process 3 jobs simultaneously
const jobPromises = [];
for (let i = 0; i < MAX_PARALLEL_JOBS; i++) {
jobPromises.push(claimAndProcessJob(supabaseAdmin));
}
await Promise.all(jobPromises);
```
### 4. Job Completion
Uses `complete_job()` database function:
- If successful: Updates to 'completed', stores result
- If error and retries remain: Resets to 'pending'
- If error and no retries: Updates to 'failed'
## Database Integration
### Required Database Functions
**claim_next_job()**: Claims next available job
```sql
CREATE OR REPLACE FUNCTION claim_next_job()
RETURNS TABLE(...) AS $$
-- Atomically claim next pending job
UPDATE job_queue
SET status = 'processing',
attempt_number = attempt_number + 1,
updated_at = now()
WHERE id = (
SELECT id FROM job_queue
WHERE status = 'pending'
ORDER BY priority DESC, created_at ASC
FOR UPDATE SKIP LOCKED
LIMIT 1
)
RETURNING *;
$$ LANGUAGE sql;
```
**complete_job()**: Marks job as completed or failed
```sql
CREATE OR REPLACE FUNCTION complete_job(
p_job_id UUID,
p_result JSONB DEFAULT NULL,
p_error TEXT DEFAULT NULL
)
RETURNS VOID AS $$
-- Update job status based on result
-- Handle retries if error and attempts remain
$$ LANGUAGE plpgsql;
```
**enqueue_job()**: Adds new job to queue
```sql
CREATE OR REPLACE FUNCTION enqueue_job(
p_job_type TEXT,
p_payload JSONB,
p_priority INTEGER DEFAULT 0,
p_max_attempts INTEGER DEFAULT 3
)
RETURNS UUID AS $$
-- Insert new job and return ID
$$ LANGUAGE plpgsql;
```
### Required Tables
**job_queue**:
```sql
CREATE TABLE job_queue (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_type TEXT NOT NULL,
payload JSONB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
priority INTEGER NOT NULL DEFAULT 0,
attempt_number INTEGER NOT NULL DEFAULT 0,
max_attempts INTEGER NOT NULL DEFAULT 3,
result JSONB,
error_message TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
completed_at TIMESTAMPTZ
);
CREATE INDEX idx_job_queue_pending
ON job_queue(status, priority DESC, created_at ASC)
WHERE status = 'pending';
```
## Error Handling
### Job-Level Errors
- Caught and passed to `complete_job()`
- Job retried if attempts remain
- Generation marked as failed on final attempt
### Transient Errors
- Network issues during polling
- Temporary Replicate API errors
- Retried on next attempt
### Fatal Errors
- Invalid job payload
- Missing required configuration
- Job marked as failed immediately
### Timeout Handling
```typescript
await Promise.race([
processJob(job, supabaseAdmin),
new Promise((_, reject) =>
setTimeout(() => reject(new Error('Job timeout')), JOB_TIMEOUT_MS)
)
]);
```
## Monitoring
### Success Response
```json
{
"success": true,
"processed": 3,
"errors": 0,
"message": "Processed 3 job(s) with 0 error(s)"
}
```
### Logs
All operations are logged:
- Job claiming
- Job processing start/complete
- API calls
- Database updates
- Errors with stack traces
### Metrics to Monitor
- Jobs processed per invocation
- Average job duration
- Error rate by job type
- Queue depth (pending jobs)
- Failed jobs requiring attention
## Performance Optimization
### Current Configuration
- Runs every minute
- Processes up to 3 jobs per run
- Max throughput: ~180 jobs/hour
### Scaling Up
**More parallel jobs:**
```typescript
const MAX_PARALLEL_JOBS = 10; // Process more at once
```
**More frequent runs:**
```sql
-- Every 30 seconds
SELECT cron.schedule('...', '*/30 * * * * *', ...);
```
**Multiple workers:**
```sql
-- Deploy multiple worker instances
-- Queue uses SKIP LOCKED for safe concurrency
```
### Resource Considerations
- Replicate API rate limits
- Supabase Edge Function concurrency limits
- Database connection pool size
- Storage bandwidth
## Testing
### Local Testing
```bash
# Start functions locally
npx supabase functions serve process-jobs
# Trigger manually
curl -X POST http://localhost:54321/functions/v1/process-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"
```
### Enqueue Test Job
```sql
SELECT enqueue_job(
'generate-image',
'{
"generation_id": "test-123",
"user_id": "user-123",
"prompt": "test prompt",
"model_id": "black-forest-labs/flux-schnell",
"width": 1024,
"height": 1024,
"num_inference_steps": 4,
"guidance_scale": 7.5
}'::jsonb
);
```
### Check Job Status
```sql
SELECT * FROM job_queue
ORDER BY created_at DESC
LIMIT 10;
```
## Deployment
```bash
# Deploy function
npx supabase functions deploy process-jobs
# Set up pg_cron job
psql $DATABASE_URL -c "
SELECT cron.schedule(
'process-jobs-worker',
'* * * * *',
\$\$
SELECT net.http_post(
'https://your-project.supabase.co/functions/v1/process-jobs',
'{}',
'{\"Content-Type\": \"application/json\"}'::jsonb
)
\$\$
);
"
```
## Troubleshooting
### Jobs Not Processing
1. Check pg_cron is installed: `SELECT * FROM cron.job;`
2. Check cron job is scheduled: `SELECT * FROM cron.job_run_details;`
3. Check function is deployed: Test with manual curl
4. Check function logs: Supabase dashboard → Edge Functions → Logs
### Jobs Stuck in Processing
- Likely crashed mid-processing
- Reset manually: `UPDATE job_queue SET status='pending' WHERE status='processing';`
- Jobs will be retried
### High Error Rate
- Check Replicate API status
- Verify API token is valid
- Check database functions exist
- Review job payloads for invalid data
## Future Enhancements
- [ ] Add job priority scheduling
- [ ] Implement dead letter queue for failed jobs
- [ ] Add job progress tracking (0-100%)
- [ ] Support job cancellation
- [ ] Add webhook notifications on completion
- [ ] Implement job batching for efficiency
- [ ] Add health check endpoint
- [ ] Store performance metrics in database

View file

@ -1,501 +0,0 @@
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2.45.0';
import { processGeneration } from '../process-generation/lib.ts';
/**
* PROCESS JOBS WORKER EDGE FUNCTION
*
* Purpose: Background worker that processes queued jobs from the job_queue table
*
* How it works:
* 1. Called by pg_cron every minute (or manually via HTTP)
* 2. Claims next available job(s) from queue using claim_next_job() function
* 3. Processes job based on job_type:
* - 'generate-image': Calls Replicate API via processGeneration()
* - 'download-image': Downloads and stores generated image
* 4. Updates job status and enqueues follow-up jobs as needed
* 5. Processes multiple jobs in parallel for better throughput
*
* Configuration:
* - MAX_PARALLEL_JOBS: Number of jobs to process concurrently (default 3)
* - Can be triggered manually for testing: POST to function URL
*/
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};
// Configuration
const MAX_PARALLEL_JOBS = 3; // Process up to 3 jobs in parallel
const JOB_TIMEOUT_MS = 600000; // 10 minutes per job
interface Job {
id: string;
job_type: string;
payload: any;
attempt_number: number;
max_attempts: number;
}
/**
* Process a single 'generate-image' job
*/
async function processGenerateImageJob(job: Job, supabaseAdmin: any): Promise<void> {
console.log(`Processing generate-image job: ${job.id}`);
console.log('Payload:', JSON.stringify(job.payload, null, 2));
const {
generation_id,
user_id,
prompt,
negative_prompt,
model_id,
model_version,
width,
height,
num_inference_steps,
guidance_scale,
seed,
source_image_url,
strength
} = job.payload;
if (!generation_id) {
throw new Error('Missing generation_id in job payload');
}
try {
// Update generation status to processing
await supabaseAdmin
.from('image_generations')
.update({
status: 'processing',
error_message: null
})
.eq('id', generation_id);
console.log('Updated generation status to processing');
// Get Replicate API token
const replicateApiToken = Deno.env.get('REPLICATE_API_TOKEN') || Deno.env.get('REPLICATE_API_KEY');
if (!replicateApiToken) {
throw new Error('Replicate API token not configured');
}
// Call the generation processor
const result = await processGeneration(
{
prompt,
negative_prompt,
model_id,
model_version,
width: width || 1024,
height: height || 1024,
num_inference_steps: num_inference_steps || 30,
guidance_scale: guidance_scale || 7.5,
seed,
source_image_url,
strength
},
replicateApiToken
);
if (!result.success) {
throw new Error(result.error || 'Generation failed');
}
console.log('Generation completed successfully');
console.log('Output URL:', result.output_url);
// Enqueue download-image job to handle the actual download and storage
const { data: downloadJobId, error: queueError } = await supabaseAdmin.rpc('enqueue_job', {
p_job_type: 'download-image',
p_payload: {
generation_id,
user_id,
output_url: result.output_url,
format: result.format,
width: result.width,
height: result.height,
prompt,
negative_prompt,
model_id
},
p_priority: 1, // High priority - image is ready
p_max_attempts: 3
});
if (queueError) {
console.error('Failed to enqueue download job:', queueError);
throw new Error('Failed to enqueue download job');
}
console.log('Enqueued download-image job:', downloadJobId);
// Mark generation as 'downloading' (intermediate state)
await supabaseAdmin
.from('image_generations')
.update({
status: 'downloading',
generation_time_seconds: result.generation_time_seconds
})
.eq('id', generation_id);
// Complete the job
await supabaseAdmin.rpc('complete_job', {
p_job_id: job.id,
p_result: {
output_url: result.output_url,
format: result.format,
download_job_id: downloadJobId
}
});
console.log('Job completed successfully');
} catch (error) {
console.error('Error processing generate-image job:', error.message);
// Update generation to failed
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: error.message,
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
// Complete job with error (will retry if attempts remain)
await supabaseAdmin.rpc('complete_job', {
p_job_id: job.id,
p_error: error.message
});
throw error;
}
}
/**
* Process a single 'download-image' job
*/
async function processDownloadImageJob(job: Job, supabaseAdmin: any): Promise<void> {
console.log(`Processing download-image job: ${job.id}`);
console.log('Payload:', JSON.stringify(job.payload, null, 2));
const {
generation_id,
user_id,
output_url,
format,
width,
height,
prompt,
negative_prompt,
model_id
} = job.payload;
if (!generation_id || !output_url || !user_id) {
throw new Error('Missing required fields in job payload');
}
try {
console.log('Downloading image from:', output_url);
// Download the generated image
const contentResponse = await fetch(output_url);
if (!contentResponse.ok) {
throw new Error('Failed to download generated image from Replicate');
}
const contentBlob = await contentResponse.blob();
const arrayBuffer = await contentBlob.arrayBuffer();
const uint8Array = new Uint8Array(arrayBuffer);
console.log('Image downloaded, size:', uint8Array.length, 'bytes');
// Generate filename and storage path
const filename = `${generation_id}.${format || 'webp'}`;
const storagePath = `${user_id}/${filename}`;
console.log('Uploading to storage:', storagePath);
// Determine content type
let contentType = 'image/webp';
if (format === 'svg') contentType = 'image/svg+xml';
else if (format === 'png') contentType = 'image/png';
else if (format === 'jpeg') contentType = 'image/jpeg';
// Upload to Supabase Storage
const { error: uploadError } = await supabaseAdmin
.storage
.from('generated-images')
.upload(storagePath, uint8Array, {
contentType,
upsert: true
});
if (uploadError) {
console.error('Upload error:', uploadError);
throw uploadError;
}
// Get public URL
const { data: { publicUrl } } = supabaseAdmin
.storage
.from('generated-images')
.getPublicUrl(storagePath);
console.log('Public URL:', publicUrl);
// Extract model name
const modelName = model_id?.split('/').pop() || 'unknown';
// Save image record
const { data: imageData, error: imageError } = await supabaseAdmin
.from('images')
.insert({
generation_id,
user_id,
filename,
storage_path: storagePath,
public_url: publicUrl,
file_size: uint8Array.length,
width: width || 1024,
height: height || 1024,
format: format || 'webp',
prompt,
negative_prompt,
model: modelName
})
.select()
.single();
if (imageError) {
console.error('Image record error:', imageError);
throw imageError;
}
console.log('Image record created:', imageData.id);
// Update generation to completed
await supabaseAdmin
.from('image_generations')
.update({
status: 'completed',
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
// Complete the job
await supabaseAdmin.rpc('complete_job', {
p_job_id: job.id,
p_result: {
image_id: imageData.id,
public_url: publicUrl,
storage_path: storagePath
}
});
console.log('Job completed successfully');
} catch (error) {
console.error('Error processing download-image job:', error.message);
// Update generation to failed if this is the last attempt
if (job.attempt_number >= job.max_attempts) {
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: `Failed to download/store image: ${error.message}`,
completed_at: new Date().toISOString()
})
.eq('id', generation_id);
}
// Complete job with error
await supabaseAdmin.rpc('complete_job', {
p_job_id: job.id,
p_error: error.message
});
throw error;
}
}
/**
* Process a single job based on its type
*/
async function processJob(job: Job, supabaseAdmin: any): Promise<void> {
console.log(`\n=== PROCESSING JOB ${job.id} ===`);
console.log('Type:', job.job_type);
console.log('Attempt:', job.attempt_number, '/', job.max_attempts);
const startTime = Date.now();
try {
switch (job.job_type) {
case 'generate-image':
await processGenerateImageJob(job, supabaseAdmin);
break;
case 'download-image':
await processDownloadImageJob(job, supabaseAdmin);
break;
default:
throw new Error(`Unknown job type: ${job.job_type}`);
}
const duration = Date.now() - startTime;
console.log(`Job ${job.id} completed in ${duration}ms`);
} catch (error) {
const duration = Date.now() - startTime;
console.error(`Job ${job.id} failed after ${duration}ms:`, error.message);
throw error;
}
}
/**
* Claim and process a single job
*/
async function claimAndProcessJob(supabaseAdmin: any): Promise<boolean> {
try {
// Claim next available job
const { data: jobs, error: claimError } = await supabaseAdmin.rpc('claim_next_job');
if (claimError) {
console.error('Error claiming job:', claimError);
return false;
}
if (!jobs || jobs.length === 0) {
// No jobs available
return false;
}
const job = jobs[0]; // claim_next_job returns SETOF, so we take the first element
console.log('Claimed job:', job.id);
// Process the job with timeout
await Promise.race([
processJob(job, supabaseAdmin),
new Promise((_, reject) =>
setTimeout(() => reject(new Error('Job timeout')), JOB_TIMEOUT_MS)
)
]);
return true;
} catch (error) {
console.error('Error in claimAndProcessJob:', error.message);
return false;
}
}
/**
* Main worker loop - processes multiple jobs in parallel
*/
async function processJobs(supabaseAdmin: any): Promise<{ processed: number; errors: number }> {
console.log('=== STARTING JOB PROCESSOR ===');
console.log('Max parallel jobs:', MAX_PARALLEL_JOBS);
let processed = 0;
let errors = 0;
// Process jobs in parallel
const jobPromises: Promise<boolean>[] = [];
for (let i = 0; i < MAX_PARALLEL_JOBS; i++) {
jobPromises.push(claimAndProcessJob(supabaseAdmin));
}
const results = await Promise.all(jobPromises);
// Count successes
for (const success of results) {
if (success) {
processed++;
} else {
errors++;
}
}
console.log('=== JOB PROCESSOR FINISHED ===');
console.log('Processed:', processed);
console.log('Errors:', errors);
return { processed, errors };
}
/**
* Edge Function Handler
*
* This function can be called:
* 1. By pg_cron every minute: SELECT net.http_post(...)
* 2. Manually via HTTP POST for testing
* 3. By other functions that need to trigger job processing
*/
Deno.serve(async (req: Request) => {
// Handle CORS preflight
if (req.method === 'OPTIONS') {
return new Response('ok', { headers: corsHeaders });
}
try {
console.log('=== PROCESS JOBS INVOKED ===');
console.log('Method:', req.method);
console.log('URL:', req.url);
// Create admin client for database operations
const supabaseAdmin = createClient(
Deno.env.get('SUPABASE_URL') ?? '',
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY') ?? '',
{
auth: {
autoRefreshToken: false,
persistSession: false
}
}
);
// Process jobs
const result = await processJobs(supabaseAdmin);
return new Response(
JSON.stringify({
success: true,
processed: result.processed,
errors: result.errors,
message: `Processed ${result.processed} job(s) with ${result.errors} error(s)`
}),
{
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
} catch (error) {
console.error('Error in process-jobs:', error.message);
console.error('Stack:', error.stack);
return new Response(
JSON.stringify({
error: error.message || 'Internal server error',
success: false
}),
{
status: 500,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
});

View file

@ -1,217 +0,0 @@
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2.45.0';
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};
/**
* START GENERATION EDGE FUNCTION
*
* Purpose: Accept image generation request and enqueue for async processing
*
* Flow:
* 1. Validate user authentication
* 2. Validate model configuration
* 3. Create generation record in database
* 4. Enqueue job for background processing
* 5. Return immediately with generation ID
*
* This function returns INSTANTLY - no waiting for image generation!
*/
Deno.serve(async (req: Request) => {
// Handle CORS preflight
if (req.method === 'OPTIONS') {
return new Response('ok', { headers: corsHeaders });
}
try {
console.log('=== START GENERATION REQUEST ===');
// Get auth token from request
const authHeader = req.headers.get('Authorization');
if (!authHeader) {
throw new Error('No authorization header');
}
// Initialize Supabase client with user context
const supabase = createClient(
Deno.env.get('SUPABASE_URL') ?? '',
Deno.env.get('SUPABASE_ANON_KEY') ?? '',
{
global: { headers: { Authorization: authHeader } },
}
);
// Verify user is authenticated
const { data: { user }, error: authError } = await supabase.auth.getUser();
if (authError || !user) {
throw new Error('Unauthorized');
}
console.log('User authenticated:', user.id);
// Parse request body
const {
prompt,
model_id,
model_version,
width,
height,
num_inference_steps,
guidance_scale,
seed,
negative_prompt,
source_image_url,
strength,
style
} = await req.json();
// Validate required fields
if (!prompt) {
throw new Error('Prompt is required');
}
if (!model_id) {
throw new Error('Model ID is required');
}
console.log('Generating with model:', model_id);
console.log('Prompt:', prompt.substring(0, 50) + '...');
// Get model configuration from database
const { data: model, error: modelError } = await supabase
.from('models')
.select('*')
.eq('replicate_id', model_id)
.single();
if (modelError) {
console.error('Model lookup error:', modelError);
}
const modelName = model?.name || model_id.split('/').pop();
// Create admin client for database writes (bypasses RLS)
const supabaseAdmin = createClient(
Deno.env.get('SUPABASE_URL') ?? '',
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY') ?? '',
{
auth: {
autoRefreshToken: false,
persistSession: false
}
}
);
// Create generation record
const { data: generation, error: generationError } = await supabaseAdmin
.from('image_generations')
.insert({
user_id: user.id,
prompt,
negative_prompt: negative_prompt || null,
model: modelName,
style: style || null,
width: width || model?.default_width || 1024,
height: height || model?.default_height || 1024,
steps: num_inference_steps || model?.default_steps || 30,
guidance_scale: guidance_scale || model?.default_guidance_scale || 7.5,
status: 'pending'
})
.select()
.single();
if (generationError) {
console.error('Failed to create generation record:', generationError);
throw new Error('Failed to create generation record');
}
console.log('Generation record created:', generation.id);
// Enqueue job for async processing
const { data: jobId, error: queueError } = await supabaseAdmin.rpc('enqueue_job', {
p_job_type: 'generate-image',
p_payload: {
generation_id: generation.id,
user_id: user.id,
prompt,
negative_prompt,
model_id,
model_version,
width: width || model?.default_width || 1024,
height: height || model?.default_height || 1024,
num_inference_steps: num_inference_steps || model?.default_steps || 30,
guidance_scale: guidance_scale || model?.default_guidance_scale || 7.5,
seed,
source_image_url,
strength
},
p_priority: 0,
p_max_attempts: 3
});
if (queueError) {
console.error('Failed to enqueue job:', queueError);
// Update generation status to failed
await supabaseAdmin
.from('image_generations')
.update({
status: 'failed',
error_message: 'Failed to enqueue job'
})
.eq('id', generation.id);
throw new Error('Failed to enqueue job');
}
console.log('Job enqueued:', jobId);
// Update generation with job ID
await supabaseAdmin
.from('image_generations')
.update({
status: 'queued',
// Could store job_id here for tracking
})
.eq('id', generation.id);
console.log('=== GENERATION QUEUED SUCCESSFULLY ===');
// Return immediately!
return new Response(
JSON.stringify({
success: true,
generation_id: generation.id,
job_id: jobId,
status: 'queued',
message: 'Image generation started. You will be notified when complete.'
}),
{
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
} catch (error) {
console.error('Error in start-generation:', error.message);
console.error('Stack:', error.stack);
return new Response(
JSON.stringify({
error: error.message || 'Internal server error'
}),
{
status: 400,
headers: {
...corsHeaders,
'Content-Type': 'application/json'
}
}
);
}
});

View file

@ -1,233 +0,0 @@
-- Migration: Boards/Moodboard System
-- Description: Tables and policies for canvas-based moodboard feature
-- Created: 2025-10-09
-- =====================================================
-- TABLE: boards
-- Description: Stores moodboard metadata
-- =====================================================
CREATE TABLE IF NOT EXISTS public.boards (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE,
name TEXT NOT NULL,
description TEXT,
thumbnail_url TEXT,
is_public BOOLEAN DEFAULT false,
canvas_width INTEGER DEFAULT 2000,
canvas_height INTEGER DEFAULT 1500,
background_color TEXT DEFAULT '#ffffff',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Index for user lookups
CREATE INDEX idx_boards_user_id ON public.boards(user_id);
CREATE INDEX idx_boards_created_at ON public.boards(created_at DESC);
CREATE INDEX idx_boards_is_public ON public.boards(is_public) WHERE is_public = true;
-- =====================================================
-- TABLE: board_items
-- Description: Stores individual images/items on boards
-- =====================================================
CREATE TABLE IF NOT EXISTS public.board_items (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
board_id UUID NOT NULL REFERENCES public.boards(id) ON DELETE CASCADE,
image_id UUID NOT NULL REFERENCES public.images(id) ON DELETE CASCADE,
position_x FLOAT NOT NULL DEFAULT 0,
position_y FLOAT NOT NULL DEFAULT 0,
scale_x FLOAT NOT NULL DEFAULT 1.0,
scale_y FLOAT NOT NULL DEFAULT 1.0,
rotation FLOAT NOT NULL DEFAULT 0,
z_index INTEGER NOT NULL DEFAULT 0,
opacity FLOAT NOT NULL DEFAULT 1.0,
width INTEGER,
height INTEGER,
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(board_id, image_id)
);
-- Indexes for board lookups
CREATE INDEX idx_board_items_board_id ON public.board_items(board_id);
CREATE INDEX idx_board_items_z_index ON public.board_items(board_id, z_index);
-- =====================================================
-- TRIGGER: Update updated_at timestamp
-- =====================================================
CREATE OR REPLACE FUNCTION update_boards_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_update_boards_timestamp
BEFORE UPDATE ON public.boards
FOR EACH ROW
EXECUTE FUNCTION update_boards_updated_at();
-- =====================================================
-- RLS POLICIES: boards table
-- =====================================================
ALTER TABLE public.boards ENABLE ROW LEVEL SECURITY;
-- Users can view their own boards
CREATE POLICY "Users can view own boards"
ON public.boards
FOR SELECT
USING (auth.uid() = user_id);
-- Users can view public boards
CREATE POLICY "Users can view public boards"
ON public.boards
FOR SELECT
USING (is_public = true);
-- Users can insert their own boards
CREATE POLICY "Users can insert own boards"
ON public.boards
FOR INSERT
WITH CHECK (auth.uid() = user_id);
-- Users can update their own boards
CREATE POLICY "Users can update own boards"
ON public.boards
FOR UPDATE
USING (auth.uid() = user_id)
WITH CHECK (auth.uid() = user_id);
-- Users can delete their own boards
CREATE POLICY "Users can delete own boards"
ON public.boards
FOR DELETE
USING (auth.uid() = user_id);
-- =====================================================
-- RLS POLICIES: board_items table
-- =====================================================
ALTER TABLE public.board_items ENABLE ROW LEVEL SECURITY;
-- Users can view items from their own boards
CREATE POLICY "Users can view items from own boards"
ON public.board_items
FOR SELECT
USING (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.user_id = auth.uid()
)
);
-- Users can view items from public boards
CREATE POLICY "Users can view items from public boards"
ON public.board_items
FOR SELECT
USING (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.is_public = true
)
);
-- Users can insert items to their own boards
CREATE POLICY "Users can insert items to own boards"
ON public.board_items
FOR INSERT
WITH CHECK (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.user_id = auth.uid()
)
);
-- Users can update items on their own boards
CREATE POLICY "Users can update items on own boards"
ON public.board_items
FOR UPDATE
USING (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.user_id = auth.uid()
)
)
WITH CHECK (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.user_id = auth.uid()
)
);
-- Users can delete items from their own boards
CREATE POLICY "Users can delete items from own boards"
ON public.board_items
FOR DELETE
USING (
EXISTS (
SELECT 1 FROM public.boards
WHERE boards.id = board_items.board_id
AND boards.user_id = auth.uid()
)
);
-- =====================================================
-- FUNCTIONS: Helper functions for boards
-- =====================================================
-- Function to get board with item count
CREATE OR REPLACE FUNCTION get_boards_with_counts(p_user_id UUID)
RETURNS TABLE (
id UUID,
name TEXT,
description TEXT,
thumbnail_url TEXT,
is_public BOOLEAN,
canvas_width INTEGER,
canvas_height INTEGER,
background_color TEXT,
created_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ,
item_count BIGINT
) AS $$
BEGIN
RETURN QUERY
SELECT
b.id,
b.name,
b.description,
b.thumbnail_url,
b.is_public,
b.canvas_width,
b.canvas_height,
b.background_color,
b.created_at,
b.updated_at,
COUNT(bi.id) as item_count
FROM public.boards b
LEFT JOIN public.board_items bi ON b.id = bi.board_id
WHERE b.user_id = p_user_id
GROUP BY b.id
ORDER BY b.updated_at DESC;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Grant execute permission
GRANT EXECUTE ON FUNCTION get_boards_with_counts(UUID) TO authenticated;
-- =====================================================
-- COMMENTS
-- =====================================================
COMMENT ON TABLE public.boards IS 'Stores moodboard/canvas metadata';
COMMENT ON TABLE public.board_items IS 'Stores individual items (images) placed on boards';
COMMENT ON COLUMN public.boards.canvas_width IS 'Canvas width in pixels';
COMMENT ON COLUMN public.boards.canvas_height IS 'Canvas height in pixels';
COMMENT ON COLUMN public.board_items.position_x IS 'X position on canvas in pixels';
COMMENT ON COLUMN public.board_items.position_y IS 'Y position on canvas in pixels';
COMMENT ON COLUMN public.board_items.scale_x IS 'Horizontal scale factor (1.0 = 100%)';
COMMENT ON COLUMN public.board_items.scale_y IS 'Vertical scale factor (1.0 = 100%)';
COMMENT ON COLUMN public.board_items.rotation IS 'Rotation in degrees (0-360)';
COMMENT ON COLUMN public.board_items.z_index IS 'Layer order (higher = on top)';

View file

@ -1,341 +0,0 @@
-- Migration: Add Job Queue System for Async Image Generation
-- Created: 2025-10-09
-- Purpose: Replace synchronous Edge Function with async queue system
-- ============================================================================
-- 1. CREATE JOB QUEUE TABLE
-- ============================================================================
CREATE TABLE IF NOT EXISTS job_queue (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Job identification
job_type TEXT NOT NULL CHECK (job_type IN (
'generate-image',
'download-image',
'process-webhook',
'cleanup-storage'
)),
-- Job data
payload JSONB NOT NULL,
-- Status tracking
status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN (
'pending',
'processing',
'completed',
'failed',
'cancelled'
)),
-- Retry logic
attempts INTEGER NOT NULL DEFAULT 0,
max_attempts INTEGER NOT NULL DEFAULT 3,
-- Scheduling
scheduled_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
-- Error tracking
error_message TEXT,
error_details JSONB,
-- Metadata
created_by UUID REFERENCES auth.users(id) ON DELETE SET NULL,
priority INTEGER DEFAULT 0, -- Higher = more important
-- Timestamps
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Add indexes for performance
CREATE INDEX idx_job_queue_status_scheduled ON job_queue(status, scheduled_at)
WHERE status IN ('pending', 'processing');
CREATE INDEX idx_job_queue_type_status ON job_queue(job_type, status);
CREATE INDEX idx_job_queue_created_by ON job_queue(created_by);
CREATE INDEX idx_job_queue_priority ON job_queue(priority DESC, created_at ASC)
WHERE status = 'pending';
-- ============================================================================
-- 2. CREATE FUNCTION TO CLAIM NEXT JOB (with locking)
-- ============================================================================
CREATE OR REPLACE FUNCTION claim_next_job(
p_job_types TEXT[] DEFAULT NULL
)
RETURNS SETOF job_queue
LANGUAGE plpgsql
AS $$
DECLARE
v_job job_queue;
BEGIN
-- Find and lock the next available job
SELECT * INTO v_job
FROM job_queue
WHERE status = 'pending'
AND scheduled_at <= NOW()
AND (p_job_types IS NULL OR job_type = ANY(p_job_types))
ORDER BY priority DESC, created_at ASC
LIMIT 1
FOR UPDATE SKIP LOCKED; -- Critical: prevents race conditions
-- If no job found, return null
IF v_job IS NULL THEN
RETURN;
END IF;
-- Update job status to processing
UPDATE job_queue
SET
status = 'processing',
started_at = NOW(),
updated_at = NOW()
WHERE id = v_job.id;
-- Return the claimed job
RETURN QUERY SELECT * FROM job_queue WHERE id = v_job.id;
END;
$$;
-- ============================================================================
-- 3. CREATE FUNCTION TO ENQUEUE JOB
-- ============================================================================
CREATE OR REPLACE FUNCTION enqueue_job(
p_job_type TEXT,
p_payload JSONB,
p_priority INTEGER DEFAULT 0,
p_scheduled_at TIMESTAMPTZ DEFAULT NOW(),
p_max_attempts INTEGER DEFAULT 3
)
RETURNS UUID
LANGUAGE plpgsql
SECURITY DEFINER -- Runs with elevated privileges
AS $$
DECLARE
v_job_id UUID;
v_user_id UUID;
BEGIN
-- Get current user ID (if authenticated)
v_user_id := auth.uid();
-- Insert job
INSERT INTO job_queue (
job_type,
payload,
priority,
scheduled_at,
max_attempts,
created_by
)
VALUES (
p_job_type,
p_payload,
p_priority,
p_scheduled_at,
p_max_attempts,
v_user_id
)
RETURNING id INTO v_job_id;
RETURN v_job_id;
END;
$$;
-- ============================================================================
-- 4. CREATE FUNCTION TO HANDLE JOB COMPLETION
-- ============================================================================
CREATE OR REPLACE FUNCTION complete_job(
p_job_id UUID,
p_error_message TEXT DEFAULT NULL,
p_error_details JSONB DEFAULT NULL
)
RETURNS VOID
LANGUAGE plpgsql
AS $$
DECLARE
v_job job_queue;
BEGIN
-- Get current job state
SELECT * INTO v_job FROM job_queue WHERE id = p_job_id FOR UPDATE;
IF v_job IS NULL THEN
RAISE EXCEPTION 'Job not found: %', p_job_id;
END IF;
-- If error provided, handle failure
IF p_error_message IS NOT NULL THEN
-- Check if we should retry
IF v_job.attempts < v_job.max_attempts THEN
-- Retry with exponential backoff
UPDATE job_queue
SET
status = 'pending',
attempts = attempts + 1,
scheduled_at = NOW() + (INTERVAL '1 second' * POWER(2, attempts + 1)), -- 2s, 4s, 8s
error_message = p_error_message,
error_details = p_error_details,
updated_at = NOW()
WHERE id = p_job_id;
ELSE
-- Max retries reached, mark as failed
UPDATE job_queue
SET
status = 'failed',
completed_at = NOW(),
error_message = p_error_message,
error_details = p_error_details,
updated_at = NOW()
WHERE id = p_job_id;
END IF;
ELSE
-- Success! Mark as completed
UPDATE job_queue
SET
status = 'completed',
completed_at = NOW(),
updated_at = NOW()
WHERE id = p_job_id;
END IF;
END;
$$;
-- ============================================================================
-- 5. CREATE MONITORING VIEWS
-- ============================================================================
-- View: Queue Health
CREATE OR REPLACE VIEW queue_health AS
SELECT
job_type,
status,
COUNT(*) as count,
MIN(created_at) as oldest_job,
MAX(created_at) as newest_job,
AVG(EXTRACT(EPOCH FROM (COALESCE(completed_at, NOW()) - created_at))) as avg_duration_seconds,
AVG(attempts) as avg_attempts
FROM job_queue
GROUP BY job_type, status;
-- View: Failed Jobs (last 24h)
CREATE OR REPLACE VIEW failed_jobs_recent AS
SELECT
id,
job_type,
payload,
attempts,
error_message,
created_at,
completed_at
FROM job_queue
WHERE status = 'failed'
AND created_at > NOW() - INTERVAL '24 hours'
ORDER BY created_at DESC;
-- View: Stuck Jobs (processing for >10 min)
CREATE OR REPLACE VIEW stuck_jobs AS
SELECT
id,
job_type,
payload,
started_at,
EXTRACT(EPOCH FROM (NOW() - started_at)) / 60 as minutes_stuck
FROM job_queue
WHERE status = 'processing'
AND started_at < NOW() - INTERVAL '10 minutes'
ORDER BY started_at ASC;
-- ============================================================================
-- 6. ADD TRIGGER TO UPDATE updated_at
-- ============================================================================
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_job_queue_updated_at
BEFORE UPDATE ON job_queue
FOR EACH ROW
EXECUTE FUNCTION update_updated_at_column();
-- ============================================================================
-- 7. ADD RLS POLICIES
-- ============================================================================
-- Enable RLS
ALTER TABLE job_queue ENABLE ROW LEVEL SECURITY;
-- Users can see their own jobs
CREATE POLICY "Users can view their own jobs"
ON job_queue
FOR SELECT
USING (created_by = auth.uid());
-- Service role can do everything (for Edge Functions)
CREATE POLICY "Service role has full access"
ON job_queue
FOR ALL
USING (auth.jwt()->>'role' = 'service_role');
-- ============================================================================
-- 8. GRANT PERMISSIONS
-- ============================================================================
-- Grant access to authenticated users
GRANT SELECT ON job_queue TO authenticated;
GRANT SELECT ON queue_health TO authenticated;
GRANT SELECT ON failed_jobs_recent TO authenticated;
GRANT SELECT ON stuck_jobs TO authenticated;
-- Grant execution of functions
GRANT EXECUTE ON FUNCTION enqueue_job TO authenticated;
GRANT EXECUTE ON FUNCTION claim_next_job TO service_role;
GRANT EXECUTE ON FUNCTION complete_job TO service_role;
-- ============================================================================
-- 9. ADD COMMENT DOCUMENTATION
-- ============================================================================
COMMENT ON TABLE job_queue IS 'Async job queue for background processing';
COMMENT ON COLUMN job_queue.job_type IS 'Type of job to process';
COMMENT ON COLUMN job_queue.payload IS 'Job data as JSON';
COMMENT ON COLUMN job_queue.status IS 'Current job status';
COMMENT ON COLUMN job_queue.attempts IS 'Number of processing attempts';
COMMENT ON COLUMN job_queue.max_attempts IS 'Maximum retry attempts before failure';
COMMENT ON COLUMN job_queue.priority IS 'Job priority (higher = more important)';
COMMENT ON FUNCTION claim_next_job IS 'Atomically claim next available job with locking';
COMMENT ON FUNCTION enqueue_job IS 'Add a new job to the queue';
COMMENT ON FUNCTION complete_job IS 'Mark job as complete or retry if failed';
-- ============================================================================
-- MIGRATION COMPLETE
-- ============================================================================
-- Insert a test job to verify setup
DO $$
DECLARE
v_test_job_id UUID;
BEGIN
SELECT enqueue_job(
'generate-image',
'{"test": true, "prompt": "Migration test"}'::JSONB,
0
) INTO v_test_job_id;
RAISE NOTICE 'Job queue system installed successfully! Test job ID: %', v_test_job_id;
-- Clean up test job
DELETE FROM job_queue WHERE id = v_test_job_id;
END $$;

View file

@ -3,21 +3,21 @@ import { Tag } from '~/store/tagStore';
export type Creator = {
id: string;
username: string | null;
avatar_url: string | null;
avatarUrl: string | null;
};
export type ExploreImageItem = {
id: string;
public_url: string | null;
publicUrl: string | null;
prompt: string;
created_at: string;
is_favorite: boolean;
user_id: string;
createdAt: string;
isFavorite: boolean;
userId: string;
model?: string;
tags?: Tag[];
creator?: Creator;
likes_count?: number;
user_has_liked?: boolean;
likesCount?: number;
userHasLiked?: boolean;
blurhash?: string | null;
};

View file

@ -2,10 +2,10 @@ import { Tag } from '~/store/tagStore';
export type ImageItem = {
id: string;
public_url: string | null;
publicUrl: string | null;
prompt: string;
created_at: string;
is_favorite: boolean;
createdAt: string;
isFavorite: boolean;
model?: string;
tags?: Tag[];
blurhash?: string | null;

View file

@ -1,30 +0,0 @@
#!/bin/bash
# Script to set Replicate API Key in Supabase Edge Functions
echo "Setting Replicate API Key for Supabase Edge Functions..."
echo ""
echo "Please enter your Replicate API Key (starts with r8_):"
read -s REPLICATE_KEY
echo ""
if [[ ! $REPLICATE_KEY == r8_* ]]; then
echo "Error: API Key should start with 'r8_'"
exit 1
fi
echo "Setting the key in Supabase..."
# Set using Supabase CLI
npx supabase secrets set REPLICATE_API_KEY=$REPLICATE_KEY --project-ref mjuvnnjxwfwlmxjsgkqu
echo ""
echo "Waiting for secrets to sync (20 seconds)..."
sleep 20
echo ""
echo "Done! The key has been set. Please test the image generation now."
echo ""
echo "To verify, you can check in Supabase Dashboard:"
echo "1. Go to Edge Functions → Secrets"
echo "2. You should see REPLICATE_API_KEY listed there"

View file

@ -1,66 +0,0 @@
-- ============================================================================
-- SETUP pg_cron WORKER for Job Queue Processing
-- ============================================================================
--
-- Führe dieses SQL-Statement im Supabase Dashboard SQL Editor aus:
-- https://supabase.com/dashboard/project/mjuvnnjxwfwlmxjsgkqu/sql
--
-- Dieses Statement erstellt einen Cron-Job, der jede Minute die
-- process-jobs Edge Function aufruft, um Jobs aus der Queue zu verarbeiten.
-- ============================================================================
-- Schedule process-jobs to run every minute
SELECT cron.schedule(
'process-job-queue',
'* * * * *', -- Every minute
$$
SELECT net.http_post(
url := 'https://mjuvnnjxwfwlmxjsgkqu.supabase.co/functions/v1/process-jobs',
headers := jsonb_build_object(
'Content-Type', 'application/json',
'Authorization', 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im1qdXZubmp4d2Z3bG14anNna3F1Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc1NjI1ODk1NSwiZXhwIjoyMDcxODM0OTU1fQ.c_30KdU1wD94r-w9Y_Vgg_FYRHJiPT8Peiv3SQJbhZg'
),
body := '{}'::jsonb
);
$$
);
-- ============================================================================
-- VERIFICATION
-- ============================================================================
-- Nach dem Ausführen, prüfe ob der Cron-Job erfolgreich erstellt wurde:
-- 1. Check if cron job exists
SELECT
jobid,
jobname,
schedule,
active,
nodename
FROM cron.job
WHERE jobname = 'process-job-queue';
-- 2. Wait 1-2 minutes, then check execution history
SELECT
jobid,
runid,
job_pid,
status,
return_message,
start_time,
end_time
FROM cron.job_run_details
WHERE jobid = (SELECT jobid FROM cron.job WHERE jobname = 'process-job-queue')
ORDER BY start_time DESC
LIMIT 10;
-- ============================================================================
-- TROUBLESHOOTING
-- ============================================================================
-- If you need to delete and recreate the cron job:
-- SELECT cron.unschedule('process-job-queue');
-- Then run the schedule command again
-- If the job is failing, check the Edge Function logs:
-- https://supabase.com/dashboard/project/mjuvnnjxwfwlmxjsgkqu/logs/edge-functions

View file

@ -1,105 +0,0 @@
-- VERIFICATION SCRIPT
-- Run this in Supabase SQL Editor to verify the migration was successful
-- ============================================================================
-- 1. CHECK TABLES
-- ============================================================================
SELECT 'job_queue table' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_tables WHERE schemaname = 'public' AND tablename = 'job_queue'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status;
-- ============================================================================
-- 2. CHECK FUNCTIONS
-- ============================================================================
SELECT 'enqueue_job function' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public' AND p.proname = 'enqueue_job'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status
UNION ALL
SELECT 'claim_next_job function' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public' AND p.proname = 'claim_next_job'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status
UNION ALL
SELECT 'complete_job function' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public' AND p.proname = 'complete_job'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status;
-- ============================================================================
-- 3. CHECK VIEWS
-- ============================================================================
SELECT 'queue_health view' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_views WHERE schemaname = 'public' AND viewname = 'queue_health'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status
UNION ALL
SELECT 'failed_jobs_recent view' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_views WHERE schemaname = 'public' AND viewname = 'failed_jobs_recent'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status
UNION ALL
SELECT 'stuck_jobs view' as check_name,
CASE WHEN EXISTS (
SELECT FROM pg_views WHERE schemaname = 'public' AND viewname = 'stuck_jobs'
) THEN '✅ EXISTS' ELSE '❌ MISSING' END as status;
-- ============================================================================
-- 4. CHECK INDEXES
-- ============================================================================
SELECT
'Indexes on job_queue' as check_name,
COUNT(*)::text || ' indexes created' as status
FROM pg_indexes
WHERE schemaname = 'public' AND tablename = 'job_queue';
-- ============================================================================
-- 5. TEST ENQUEUE FUNCTION
-- ============================================================================
-- Create a test job
DO $$
DECLARE
v_job_id UUID;
BEGIN
SELECT enqueue_job(
'generate-image',
'{"test": true, "prompt": "Database verification test"}'::JSONB,
0
) INTO v_job_id;
RAISE NOTICE '✅ Test job created: %', v_job_id;
-- Clean up test job
DELETE FROM job_queue WHERE id = v_job_id;
RAISE NOTICE '✅ Test job cleaned up';
END $$;
-- ============================================================================
-- 6. FINAL STATUS
-- ============================================================================
SELECT
'🎉 DATABASE SETUP COMPLETE!' as message,
(SELECT COUNT(*) FROM pg_tables WHERE schemaname = 'public' AND tablename = 'job_queue') as tables_created,
(SELECT COUNT(*) FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid WHERE n.nspname = 'public' AND p.proname IN ('enqueue_job', 'claim_next_job', 'complete_job')) as functions_created,
(SELECT COUNT(*) FROM pg_views WHERE schemaname = 'public' AND viewname IN ('queue_health', 'failed_jobs_recent', 'stuck_jobs')) as views_created;