chore(memoro): import legacy backend, mobile, and landing apps

Adds the original NestJS backends (backend, audio-backend), Expo mobile app,
and Astro landing page as-is from the standalone memoro repo. These are
not yet migrated to monorepo standards (migration tracked in memory/CLAUDE.md).

Also adds eslint.config.mjs ignore for apps/*/apps/audio-backend/**
and .prettierignore entries for legacy memoro dirs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
Till JS 2026-03-31 17:08:45 +02:00
parent 09d5576f2a
commit d8a2b37126
1377 changed files with 280653 additions and 2 deletions


@@ -88,3 +88,9 @@ apps/picture/apps/landing/src/components/promptTemplates/CategoryGrid.astro
**/*QUICK*.md
**/*QUICKSTART*.md
# Legacy memoro apps (not yet migrated to monorepo standards, have their own tooling)
apps/memoro/apps/backend/**
apps/memoro/apps/audio-backend/**
apps/memoro/apps/mobile/**
apps/memoro/apps/landing/**

apps/memoro/.gitignore vendored Normal file

@@ -0,0 +1 @@
*.env.deploy

apps/memoro/CLAUDE.md Normal file

@@ -0,0 +1,459 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
Memoro is a monorepo containing an AI-powered voice recording and memo management application with two apps:
- **Mobile App** (`apps/mobile/`): React Native + Expo cross-platform app (iOS, Android, Web)
- **Web App** (`apps/web/`): SvelteKit companion web application
Both apps share the same Supabase backend.
## Development Commands
### Mobile App (`apps/mobile/`)
```bash
# Development
npm start # Start Expo dev server
npm run start:dev # Start with dev environment
npm run start:prod # Start with prod environment
npm run ios # Run on iOS simulator
npm run android # Run on Android emulator
npm run web # Run web version
npm run web:dev # Run web with dev environment
# Code Quality
npm run lint # Run ESLint and Prettier check
npm run lint:fix # Auto-fix linting issues
npm run lint:unused # Find unused imports/vars
npm run format # Format code with ESLint + Prettier
# Build & Deploy
npm run prebuild # Generate native projects
npm run rebuild # Clean rebuild (removes node_modules, ios/, android/)
npm run web:build # Build for web deployment
eas build --profile development # Development build
eas build --profile preview # Preview build
eas build --profile production # Production build
```
### Web App (`apps/web/`)
```bash
npm run dev # Start development server
npm run build # Build for production
npm run preview # Preview production build
npm run check # Run svelte-check
npm run check:watch # Watch mode for svelte-check
```
## Architecture
### Mobile App Architecture
**Framework Stack:**
- React Native 0.83.2 + Expo SDK 55
- Expo Router (file-based routing)
- TypeScript
- NativeWind (Tailwind CSS for React Native)
- Zustand (state management)
**Key Design Patterns:**
1. **Feature-Based Architecture** (`features/`):
- Each feature is self-contained with its own services, hooks, components, and stores
- Features: auth, audioRecordingV2, memos, spaces, credits, subscription, i18n, theme, etc.
- 33 feature modules in total
2. **Atomic Design System** (`components/`):
- `atoms/`: Basic UI components (Button, Input, Text, Icon, etc.)
- `molecules/`: Composite components (MemoPreview, RecordingBar, TagSelector, etc.)
- `organisms/`: Complex components (AudioRecorder, Memory, TranscriptDisplay, etc.)
- `statistics/`: Specialized analytics components
3. **Route Structure** (`app/`):
- `(public)/`: Unauthenticated routes (login, register)
- `(protected)/`: Authenticated routes with auth guard
- `(tabs)/`: Main tab navigation (home, memos, spaces)
- `(memo)/[id]`: Dynamic memo detail pages
- `(space)/[id]`: Dynamic space detail pages
- Uses Expo Router's file-based routing with typed routes enabled
### Authentication System
Uses a **middleware-based authentication bridge** between the app and Supabase:
```
Mobile App → Middleware Auth Service → Supabase
```
**Key Points:**
- Middleware issues three tokens: `manaToken`, `appToken` (Supabase-compatible JWT), `refreshToken`
- Tokens stored securely via platform-specific `safeStorage` utility
- Auth state managed via `AuthContext` provider
- Supabase client configured to use JWT from middleware
- Row Level Security (RLS) policies use JWT claims (`sub`, `role`, `app_id`)
- Supports email/password, Google Sign-In, and Apple Sign-In
- Automatic token refresh mechanism
See `apps/mobile/features/auth/README.md` for detailed authentication flow.
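The refresh decision described above can be sketched as follows. This is an illustrative sketch only: the token-set shape, the `expiresAt` field, and the 60-second safety margin are assumptions, not the app's actual types.

```typescript
// Illustrative sketch (not the app's real implementation): deciding when to
// refresh the middleware-issued appToken before it expires.
interface TokenSet {
  manaToken: string;
  appToken: string; // Supabase-compatible JWT
  refreshToken: string;
  expiresAt: number; // assumed: epoch ms when appToken expires
}

// Refresh when less than `marginMs` of validity remains.
function shouldRefresh(tokens: TokenSet, nowMs: number, marginMs = 60_000): boolean {
  return tokens.expiresAt - nowMs < marginMs;
}
```

In practice a check like this would run on app foreground and before authenticated requests, falling back to the `refreshToken` flow when it returns `true`.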
### Audio Recording System
**AudioRecordingV2** is the current audio recording implementation:
- Uses `expo-audio` (migrated from deprecated `expo-av`)
- Platform-specific services: `IOSRecordingService`, `AndroidRecordingService`
- Zustand store for state management (`recordingStore`)
- Comprehensive error handling with retry strategies
- Android: Foreground service with wake locks
- iOS: Background audio capability with `mixWithOthers` mode
- Real-time status updates via polling
- Prevents zero-byte recordings with validation
- **Background recording works correctly** - continues when app is backgrounded or locked
**iOS Background Recording:**
- Uses `interruptionMode: 'mixWithOthers'` for background recording support
- Recording continues when pressing home button, switching apps, or locking device
- Audio session automatically restored when returning to foreground
- JavaScript timers suspended in background, but native recording continues
- Handles real interruptions (phone calls, Siri) automatically
**Recording Options:**
- High quality: M4A format with AAC encoding (MONO for compatibility)
- Presets: HIGH_QUALITY, MEDIUM_QUALITY, LOW_QUALITY, VOICE_MEMO
- Max duration and size limits
- Pause/resume support
- Audio level metering for waveform visualization
- Optimized for voice (MONO, 96 quality) to prevent FFmpeg 'chnl' box errors
**Key Technical Details:**
- MONO recording prevents iOS spatial audio metadata issues
- Audio session verification on cold start prevents first-recording failures
- Status polling restarts when app returns from background
- Full duration captured (foreground + background time)
See `apps/mobile/features/audioRecordingV2/README.md` for full details.
See `apps/mobile/features/audioRecordingV2/TROUBLESHOOTING.md` for bug fixes and solutions.
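A hedged sketch of what a preset such as VOICE_MEMO might look like. Only M4A, AAC, and MONO are confirmed by this document; the field names and numeric values below are illustrative assumptions.

```typescript
// Hypothetical preset shape; only M4A/AAC and MONO are confirmed by this doc.
interface RecordingPreset {
  extension: string; // container format
  codec: string;     // audio encoder
  channels: number;  // 1 = MONO, avoids iOS spatial-audio 'chnl' metadata issues
  sampleRate: number; // Hz (illustrative value)
  bitRate: number;    // bits/s (illustrative value)
}

const VOICE_MEMO: RecordingPreset = {
  extension: '.m4a',
  codec: 'aac',
  channels: 1,
  sampleRate: 44_100,
  bitRate: 96_000,
};
```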
### AI Processing System
**Blueprints:**
- Reusable AI analysis patterns for different use cases
- Examples: Text Analysis, Creative Writing, Meeting Notes
- Each blueprint has localized advice tips (32 languages)
- Stored in Supabase with public/private visibility
**Prompts:**
- Specific AI tasks for content transformation
- Examples: Summary, To-Do extraction, Translation, Q&A
- Associated with blueprints via `blueprint_prompts` join table
- Multi-language support (German/English minimum)
**Content Organization:**
- 8 categories: Coaching, Crafts, Healthcare, Journal, Journalism, Office, Sales, University
- Categories provide contextual grouping for blueprints/prompts
See `apps/mobile/docs/blueprints_and_prompts.md` for full documentation.
### Theme System
**Multi-Theme Support:**
- 4 theme variants: Lume (gold), Nature (green), Stone (slate), Ocean (blue)
- Each theme has light and dark mode variants
- 13 semantic color tokens per theme (primary, secondary, borders, backgrounds, text)
- Theme state managed via `ThemeProvider` context
- Dark mode detection + manual override
- All colors defined in `tailwind.config.js`
**Markdown Rendering:**
- Full Markdown support in memo display
- Theme-aware styles adapt to light/dark mode
- Centralized styles in `features/theme/markdownStyles.ts`
- Hybrid rendering with auto-detection
### Spaces (Collaboration)
**Team Workspaces:**
- Create unlimited collaborative spaces
- Role-based permissions (owner, member)
- Memo sharing within spaces
- Email-based invitation system
- Credit pools shared among team members
- Real-time sync via Supabase Realtime
**Backend Integration:**
- RESTful API for space management
- RLS policies for access control
- Space-specific memo filtering
See `apps/mobile/docs/SPACES.md` for implementation details.
### Subscription & Credits
**Mana Credit System:**
- Backend-driven transparent pricing
- Real-time credit validation before operations
- Usage tracking and analytics
- Credit sharing in team spaces
- Free tier: 150 Mana + 5 daily Mana
**RevenueCat Integration:**
- Cross-platform (iOS, Android, Web)
- Subscription lifecycle management
- User identification tied to auth
- Purchase restoration across devices
- 4 individual plans: Stream (€5.99), River (€14.99), Lake (€29.99), Ocean (€49.99)
- Team and Enterprise plans available
### Internationalization
**32 Languages Supported:**
- Arabic, Bengali, Bulgarian, Chinese, Czech, Danish, Dutch, English, Estonian, Finnish, French, Gaelic, German, Greek, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Lithuanian, Latvian, Maltese, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Turkish, Ukrainian, Urdu, Vietnamese
**Implementation:**
- `react-i18next` for translations
- Automatic device language detection
- Persistent user preference storage
- RTL support for Arabic/Hebrew
- Translation files in `features/i18n/translations/`
### Real-Time Features
**Supabase Realtime:**
- Live memo updates (INSERT, UPDATE, DELETE)
- Real-time collaboration in spaces
- `MemoRealtimeProvider` context for subscriptions
- Automatic reconnection handling
- RLS-aware subscriptions
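The INSERT/UPDATE/DELETE handling above can be sketched as a pure reducer over the local memo list. The `Memo` shape and function names here are assumptions for illustration; the real logic lives inside `MemoRealtimeProvider`.

```typescript
// Illustrative reducer applying realtime events to a local memo list.
interface Memo { id: string; title: string }

type MemoEvent =
  | { type: 'INSERT'; memo: Memo }
  | { type: 'UPDATE'; memo: Memo }
  | { type: 'DELETE'; id: string };

function applyMemoEvent(memos: Memo[], event: MemoEvent): Memo[] {
  switch (event.type) {
    case 'INSERT':
      return [...memos, event.memo];
    case 'UPDATE':
      return memos.map((m) => (m.id === event.memo.id ? event.memo : m));
    case 'DELETE':
      return memos.filter((m) => m.id !== event.id);
  }
}
```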
### Platform-Specific Notes
**Web Platform:**
- Uses `.web.ts` file extensions for web-specific implementations
- `safeStorage.web.ts` uses localStorage (vs AsyncStorage on native)
- Web Audio API for recording (vs expo-audio)
- Some features unavailable: push notifications, haptics, native gestures
**iOS:**
- Background audio capability required
- Audio session management
- Apple Sign-In integration
- RevenueCat StoreKit 2
**Android:**
- Foreground service for recording
- Wake lock to prevent sleep
- Android 16+ requires foreground to start recording
- Google Sign-In integration
## Environment Configuration
The mobile app uses environment-specific `.env` files:
- `.env.dev`: Development environment (copy from `.env.dev.example`)
- `.env.prod`: Production environment (copy from `.env.prod.example`)
- `.env.local`: Active environment (auto-generated by npm scripts)
**Key Environment Variables:**
- `EXPO_PUBLIC_SUPABASE_URL`: Supabase project URL
- `EXPO_PUBLIC_SUPABASE_ANON_KEY`: Supabase anon key
- `EXPO_PUBLIC_MIDDLEWARE_API_URL`: Middleware auth service URL
- `EXPO_PUBLIC_APPID`: Application ID for middleware
- RevenueCat keys for iOS/Android
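A minimal sketch of validating the `EXPO_PUBLIC_*` variables listed above at startup. Expo inlines `process.env.EXPO_PUBLIC_*` at build time; the helper name here is an assumption, not an existing utility in the app.

```typescript
// Hypothetical startup guard: fail fast when a required variable is missing.
function requireEnv(name: string, value: string | undefined): string {
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Usage would look like `requireEnv('EXPO_PUBLIC_SUPABASE_URL', process.env.EXPO_PUBLIC_SUPABASE_URL)` in app initialization.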
## Code Quality
**Linting:**
- ESLint with TypeScript plugin
- React/React Native rules
- Unused imports auto-removal
- Configuration in `eslint.config.js`
**Formatting:**
- Prettier with Tailwind plugin
- Auto-format on save recommended
**TypeScript:**
- Strict mode enabled
- Typed routes from Expo Router
- Type definitions in `types/` and feature-specific types
## Migration Notes
**Expo SDK 55 (Current):**
- React Native 0.83.2, React 19.2
- Native `allowsBackgroundRecording` support in expo-audio (no more workarounds needed)
- All Expo packages use `^55.x.x` version scheme
- New Architecture is the default (Legacy Architecture dropped)
- Android compileSdkVersion/targetSdkVersion 36
**Expo SDK 54 Migration (Historical):**
- Migrated from `expo-av` to `expo-audio`
- New audio recording API (`AudioModule.AudioRecorder`)
- Status polling instead of callbacks
- See `EXPO_54_AUDIO_RECORDING_MIGRATION.md`
**SvelteKit Web App:**
- Separate web app being built as companion
- Shares Supabase backend with mobile app
- See `SVELTEKIT_MIGRATION_ANALYSIS.md` for migration plan
## Testing Strategy
**Manual Testing:**
- Test on both iOS and Android before commits
- Verify web platform compatibility
- Check dark mode and all theme variants
- Test with different languages
**Platform Matrix:**
- iOS (simulator + device)
- Android (emulator + device)
- Web (Chrome, Safari, Firefox)
## Common Patterns
### Creating a New Feature
1. Create feature directory in `features/`
2. Add subdirectories: `components/`, `hooks/`, `services/`, `store/`, `types/`
3. Export public API via `index.ts`
4. Add feature-specific README if complex
5. Update this CLAUDE.md if architectural
### Adding a New Route
1. Add file in `app/` directory following Expo Router conventions
2. Use `(protected)/` group if authentication required
3. Use `[id]` for dynamic routes
4. Enable typed routes in `app.json` (already enabled)
5. Import route types from `expo-router`
### Working with Zustand Stores
```typescript
// Create store
export const useMyStore = create<MyState>((set, get) => ({
  // state
  data: null,
  // actions
  setData: (data) => set({ data }),
  // computed/derived
  getData: () => get().data,
}));
```
Stores are located in:
- Global: `store/store.ts`
- Feature-specific: `features/[feature]/store/`
### Platform-Specific Code
Use file extensions for platform-specific implementations:
- `file.ts`: Default (mobile)
- `file.web.ts`: Web platform
- `file.ios.ts`: iOS only
- `file.android.ts`: Android only
Metro bundler automatically resolves based on platform.
### Error Handling
1. Use feature-specific error types
2. Provide user-friendly messages
3. Include retry mechanisms where appropriate
4. Log errors to console for debugging
5. Consider Sentry integration for production
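The retry guidance above can be sketched as a small helper. The `RecordingError` type and `withRetry` name are hypothetical; the app defines its own feature-specific error types.

```typescript
// Illustrative feature-specific error carrying a retryability flag.
class RecordingError extends Error {
  constructor(message: string, public readonly retryable: boolean) {
    super(message);
  }
}

// Retry an operation up to maxAttempts, but bail out immediately on
// errors that are explicitly marked non-retryable.
async function withRetry<T>(op: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (err instanceof RecordingError && !err.retryable) throw err;
    }
  }
  throw lastError;
}
```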
## Build and Deployment
**EAS Build Profiles:**
- `development`: Dev client with debugging
- `preview`: Internal distribution (TestFlight/Google Play Internal)
- `simulator`: iOS simulator build
- `production`: Auto-increment version, store-ready
**Environment Selection:**
EAS profiles automatically load correct environment via `EXPO_PUBLIC_USE_ENV_FILE` in `eas.json`.
**Version Management:**
- iOS: `buildNumber` in `app.json`
- Android: `versionCode` in `app.json`
- Production profile auto-increments both
## Important Files
- `app.json`: Expo configuration, plugins, permissions
- `eas.json`: EAS Build configuration
- `package.json`: Dependencies and scripts
- `tailwind.config.js`: Theme colors and styling
- `eslint.config.js`: Linting rules
- `babel.config.js`: Babel configuration
- `metro.config.js`: Metro bundler configuration (if present)
- `types/supabase.ts`: Auto-generated Supabase types
## Database Schema
The app uses Supabase with the following key tables:
- `memos`: Audio recordings and transcriptions
- `memories`: AI-generated insights from memos
- `blueprints`: AI analysis templates
- `prompts`: AI task templates
- `blueprint_prompts`: Many-to-many join table
- `categories`: Organization categories
- `tags`: User-defined tags
- `memo_tags`: Many-to-many join table
- `spaces`: Collaborative workspaces
- `space_members`: User-space relationships
- `profiles`: User profiles and settings
All tables use RLS policies based on JWT claims.
## Auto-Delete Audio Files (30-Day Retention)
When users enable `autoDeleteAudiosAfter30Days` in their settings, audio files older than 30 days are automatically deleted while preserving memo records (transcripts, metadata).
**Setting Location:** `app_settings.memoro.autoDeleteAudiosAfter30Days` (default: `false`)
**Two Cleanup Mechanisms:**
1. **Cloud Storage Cleanup** (memoro-service):
- Daily cron job at 3 AM UTC via Google Cloud Scheduler
- Queries `storage.objects` table for files older than 30 days
- Deletes from Supabase Storage bucket `user-uploads`
- Updates memo `source` field: `{ audio_path: null, audio_deleted: true, audio_deleted_at: timestamp }`
2. **Local Device Cleanup** (mobile app):
- Runs on app launch after successful authentication
- Throttled to once per 24 hours
- Uses `fileStorageService.cleanupOldFiles()` with 30-day retention
- Implementation: `features/storage/services/localAudioCleanup.ts`
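The 24-hour throttle described above can be sketched as a pure check; the function name is illustrative, and the real implementation lives in `features/storage/services/localAudioCleanup.ts`.

```typescript
// Illustrative once-per-24h throttle for the local cleanup run.
const CLEANUP_INTERVAL_MS = 24 * 60 * 60 * 1000;

function isCleanupDue(lastRunMs: number | null, nowMs: number): boolean {
  if (lastRunMs === null) return true; // never ran before
  return nowMs - lastRunMs >= CLEANUP_INTERVAL_MS;
}
```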
**Key Files:**
- `memoro-service/src/cleanup/` - Cloud cleanup service
- `mana-core-middleware/src/modules/users/services/user-settings.service.ts` - User settings query
- `apps/mobile/features/storage/services/localAudioCleanup.ts` - Local device cleanup
- `apps/mobile/features/auth/contexts/AuthContext.tsx` - Cleanup trigger after auth
## Known Issues
1. **Android 16+ Recording**: Must be in foreground to start recording
2. **Zero-byte Recordings**: Occasional issue on some Android devices (retry mechanism in place)
3. **Token Refresh**: Email may not be in refreshed token (stored separately as workaround)
4. **Web Platform**: Limited functionality vs native (no push notifications, haptics, etc.)
## Additional Documentation
- `apps/mobile/README.md`: Full mobile app documentation
- `apps/web/README.md`: Web app documentation
- `features/auth/README.md`: Authentication system details
- `features/audioRecordingV2/README.md`: Audio recording implementation
- `docs/blueprints_and_prompts.md`: AI processing system
- `docs/SPACES.md`: Collaboration features
- `SVELTEKIT_MIGRATION_ANALYSIS.md`: Web app migration plan

apps/memoro/README.md Normal file

@@ -0,0 +1,373 @@
# Memoro
**AI-powered voice recording and memo management platform** that transforms audio recordings into structured, searchable content using artificial intelligence.
![Platform](https://img.shields.io/badge/platform-iOS%20%7C%20Android%20%7C%20Web-blue)
![React Native](https://img.shields.io/badge/React%20Native-0.81.4-61dafb)
![Expo](https://img.shields.io/badge/Expo-54.0.0-000020)
![SvelteKit](https://img.shields.io/badge/SvelteKit-2.x-ff3e00)
![TypeScript](https://img.shields.io/badge/TypeScript-5.x-3178c6)
## 📱 What is Memoro?
Memoro is a cross-platform application that combines voice recording, AI processing, and collaborative features to help individuals and teams capture, organize, and analyze spoken content. Record meetings, interviews, lectures, or personal notes, and let AI transform them into structured, actionable insights.
### Key Features
🎙️ **High-Quality Audio Recording** - Background recording with pause/resume support
🤖 **AI-Powered Analysis** - Transform recordings using customizable Blueprints and Prompts
👥 **Collaborative Spaces** - Share and organize memos within team workspaces
🌍 **32 Languages** - Full internationalization with automatic language detection
🎨 **4 Theme Variants** - Light/dark mode with Nature, Ocean, Stone, and Lume themes
💰 **Credit System** - Transparent Mana-based pricing for AI operations
🔒 **Enterprise Security** - Row-level security with JWT authentication
📊 **Rich Analytics** - Track usage, productivity, and team insights
## 🏗 Monorepo Structure
```
memoro_app/
├── apps/
│ ├── mobile/ # React Native + Expo app (iOS & Android native)
│ └── web/ # SvelteKit web application
├── CLAUDE.md # Development guidance for Claude Code
└── README.md # This file
```
Both applications share the same Supabase backend for seamless data synchronization.
## 🚀 Quick Start
### Prerequisites
- **Node.js** 18 or higher
- **npm** or **pnpm**
- **Expo CLI** (for mobile development)
- **iOS Simulator** (macOS only) or **Android Emulator**
- **Supabase Account** (for backend services)
### Installation
```bash
# Clone the repository
git clone <repository-url>
cd memoro_app
# Install mobile app dependencies
cd apps/mobile
npm install
# Install web app dependencies
cd ../web
npm install
```
### Environment Setup
Both apps require environment variables. Copy the example files and fill in your credentials:
```bash
# Mobile app
cd apps/mobile
cp .env.dev.example .env.dev
cp .env.prod.example .env.prod
# Edit .env.dev and .env.prod with your Supabase and API credentials
# Web app
cd apps/web
cp .env.example .env
# Edit .env with your Supabase credentials
```
**Required Environment Variables:**
- `EXPO_PUBLIC_SUPABASE_URL` - Your Supabase project URL
- `EXPO_PUBLIC_SUPABASE_ANON_KEY` - Your Supabase anonymous key
- `EXPO_PUBLIC_MIDDLEWARE_API_URL` - Middleware authentication service URL
- `EXPO_PUBLIC_APPID` - Application ID for middleware
### Running the Apps
**Mobile App (iOS & Android):**
```bash
cd apps/mobile
# Start development server
npm start
# Run on iOS
npm run ios
# Run on Android
npm run android
# Run with specific environment
npm run start:dev # Development environment
npm run start:prod # Production environment
```
**Web App:**
```bash
cd apps/web
# Start development server
npm run dev
# Build for production
npm run build
npm run preview
```
## 📖 Documentation
### Comprehensive Guides
- **[CLAUDE.md](./CLAUDE.md)** - Complete architectural overview and development guidelines
- **[Mobile App README](./apps/mobile/README.md)** - Detailed mobile app documentation
- **[Web App README](./apps/web/README.md)** - SvelteKit web app guide
### Feature Documentation
- **[Authentication System](./apps/mobile/features/auth/README.md)** - Middleware-based auth with JWT
- **[Audio Recording](./apps/mobile/features/audioRecordingV2/README.md)** - AudioRecordingV2 implementation
- **[Blueprints & Prompts](./apps/mobile/docs/blueprints_and_prompts.md)** - AI processing system
- **[Spaces](./apps/mobile/docs/SPACES.md)** - Collaborative workspaces
- **[SvelteKit Migration](./SVELTEKIT_MIGRATION_ANALYSIS.md)** - Web app migration analysis
## 🛠 Technology Stack
### Mobile App (`apps/mobile/`)
| Category | Technologies |
|----------|-------------|
| **Framework** | React Native 0.81.4, Expo SDK 54 |
| **Language** | TypeScript 5.x |
| **Navigation** | Expo Router (file-based) |
| **Styling** | NativeWind (Tailwind CSS) |
| **State** | Zustand, React Context |
| **Backend** | Supabase (PostgreSQL, Storage, Realtime) |
| **Audio** | expo-audio, Azure Speech Services |
| **Payments** | RevenueCat (iOS, Android) |
| **Analytics** | PostHog, Sentry |
| **i18n** | react-i18next (32 languages) |
### Web App (`apps/web/`)
| Category | Technologies |
|----------|-------------|
| **Framework** | SvelteKit 2.x |
| **Language** | TypeScript 5.x |
| **Styling** | TailwindCSS 3.x |
| **State** | Svelte Stores |
| **Backend** | Supabase (shared with mobile) |
| **i18n** | svelte-i18n |
## 🏛 Architecture Highlights
### Feature-Based Structure
The mobile app uses a feature-based architecture with **33 self-contained modules** (auth, audioRecordingV2, memos, spaces, credits, subscription, i18n, theme, etc.), each with its own services, hooks, components, and stores.
### Atomic Design System
Components are organized using atomic design principles:
- **Atoms**: Button, Input, Text, Icon (16 components)
- **Molecules**: MemoPreview, RecordingBar, TagSelector (21 components)
- **Organisms**: AudioRecorder, Memory, TranscriptDisplay (9 components)
- **Statistics**: Analytics components (14 components)
### Middleware Authentication
Uses a custom middleware service as a bridge between the app and Supabase:
```
Mobile/Web App → Middleware Auth → Supabase (with JWT + RLS)
```
- Three token types: `manaToken`, `appToken`, `refreshToken`
- Platform-specific secure storage
- Automatic token refresh
- Supports email/password, Google, and Apple Sign-In
### AI Processing Pipeline
- **Blueprints**: Reusable analysis patterns (Text Analysis, Creative Writing, Meeting Notes)
- **Prompts**: Specific AI tasks (Summary, To-Do, Translation, Q&A)
- **Categories**: 8 organizational categories (Office, Healthcare, University, etc.)
- Multi-language support with localized advice
## 🎯 Key Features Deep Dive
### Audio Recording System (V2)
- High-quality M4A/AAC recording
- Background recording with foreground service (Android)
- Pause/resume support
- Real-time audio metering
- Platform-specific optimizations (iOS/Android)
- Crash recovery with automatic segmentation
- Zero-byte recording prevention
### Collaborative Spaces
- Create unlimited team workspaces
- Role-based permissions (owner, member)
- Email-based invitation system
- Shared credit pools
- Real-time collaboration via Supabase Realtime
### Theme System
4 complete theme variants with light/dark modes:
- **Lume**: Modern gold & dark
- **Nature**: Soothing green
- **Stone**: Elegant slate
- **Ocean**: Tranquil blue
Each theme includes 13 semantic color tokens for consistent UI.
### Internationalization
**32 supported languages** with:
- Automatic device language detection
- Persistent user preferences
- RTL support (Arabic, Hebrew)
- Complete UI translations
Languages: Arabic, Bengali, Bulgarian, Chinese, Czech, Danish, Dutch, English, Estonian, Finnish, French, Gaelic, German, Greek, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Lithuanian, Latvian, Maltese, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Turkish, Ukrainian, Urdu, Vietnamese.
## 💻 Development
### Code Quality Tools
```bash
# Mobile app linting
cd apps/mobile
npm run lint # Check code quality
npm run lint:fix # Auto-fix issues
npm run lint:unused # Find unused imports/vars
npm run format # Format with Prettier + ESLint
# Web app checking
cd apps/web
npm run check # Type check
npm run check:watch # Watch mode
```
### Building for Production
**Mobile App (EAS Build):**
```bash
cd apps/mobile
# Development build (with dev client)
eas build --profile development
# Preview build (internal testing)
eas build --profile preview
# Production build (store submission)
eas build --profile production
```
**Web App:**
```bash
cd apps/web
# Build static site
npm run build
# Preview production build
npm run preview
```
## 📊 Project Statistics
- **~10,890** TypeScript/JavaScript files in mobile app
- **33** feature modules
- **60+** reusable components
- **32** language translations
- **4** theme variants (8 including dark modes)
- **2** platforms (mobile + web)
- **1** shared Supabase backend
## 🔒 Security
- **Row Level Security (RLS)** on all Supabase tables
- **JWT-based authentication** with middleware
- **Secure token storage** (platform-specific)
- **Automatic token rotation**
- **Environment variable protection**
- **Sensitive file exclusion** (.gitignore)
## 🤝 Contributing
1. Read the [CLAUDE.md](./CLAUDE.md) for architectural guidance
2. Follow the atomic design system for components
3. Use feature-based organization for new features
4. Test on both iOS and Android before committing
5. Run linting and formatting before pushing
6. Update documentation for significant changes
## 📝 Common Development Tasks
### Adding a New Feature
```bash
# 1. Create feature directory in mobile app
mkdir -p apps/mobile/features/my-feature/{components,hooks,services,store,types}
# 2. Create index.ts for public API
touch apps/mobile/features/my-feature/index.ts
# 3. Add feature-specific README if complex
touch apps/mobile/features/my-feature/README.md
# 4. Update CLAUDE.md if architecturally significant
```
### Adding a New Route (Mobile)
```bash
# File-based routing with Expo Router
# Protected route:
touch apps/mobile/app/\(protected\)/my-route.tsx
# Public route:
touch apps/mobile/app/\(public\)/my-route.tsx
```
### Platform-Specific Code (Mobile App Only)
```bash
# Create platform variants for iOS/Android differences
touch apps/mobile/features/my-feature/myService.ts # Default/shared
touch apps/mobile/features/my-feature/myService.ios.ts # iOS-specific
touch apps/mobile/features/my-feature/myService.android.ts # Android-specific
# Metro bundler automatically resolves the correct file based on platform
# Note: .web.ts variants are no longer used - use apps/web/ for web features
```
### Adding a New Route (Web App)
```bash
# SvelteKit file-based routing
# Protected route:
mkdir -p apps/web/src/routes/\(protected\)/my-route
touch apps/web/src/routes/\(protected\)/my-route/+page.svelte
# Public route:
mkdir -p apps/web/src/routes/my-route
touch apps/web/src/routes/my-route/+page.svelte
```
## 🐛 Known Issues
1. **Android 16+**: Must be in foreground to start recording (platform restriction)
2. **Zero-byte recordings**: Occasional issue on some Android devices (retry mechanism implemented)
3. **Token refresh**: Email may not be in refreshed JWT (stored separately as workaround)
## 📄 License
Proprietary - All rights reserved
---
## 🔗 Quick Links
- **Documentation**: [CLAUDE.md](./CLAUDE.md)
- **Mobile App**: [apps/mobile/README.md](./apps/mobile/README.md)
- **Web App**: [apps/web/README.md](./apps/web/README.md)
- **Architecture**: See CLAUDE.md for detailed architecture
- **Issue Tracking**: (Add your issue tracker link)
- **Support**: (Add your support contact)
---
**Built with ❤️ using React Native, Expo, SvelteKit, and Supabase**


@@ -0,0 +1,15 @@
# Server Configuration
PORT=1337
# Azure Speech Service
AZURE_SPEECH_KEY=your-azure-speech-key
AZURE_SPEECH_REGION=swedencentral
# Azure Storage Account
AZURE_STORAGE_ACCOUNT_NAME=your-storage-account
AZURE_STORAGE_ACCOUNT_KEY=your-storage-key
# Supabase Configuration
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your-service-key
SUPABASE_ANON_KEY=your-anon-key


@@ -0,0 +1,13 @@
.gcloudignore
.git
.gitignore
node_modules/
npm-debug.log
.env
.env.local
.env.*.local
uploads/
dist/
*.log
README.md
.dockerignore


@@ -0,0 +1,6 @@
/node_modules
/dist
pubsub-service-account-key.json
# Deployment secrets
.env.deploy
DEPLOY.md


@@ -0,0 +1,26 @@
# Audio Microservice Changelog
## [Unreleased]
### Added
- Service-to-service authentication using Supabase service role keys
- Support for `MEMORO_SUPABASE_SERVICE_KEY` environment variable
- UserId parameter in batch metadata updates for ownership validation
### Changed
- All memoro service callbacks now use dedicated `/service/` endpoints
- Authentication uses service role key instead of user JWT tokens
- Updated callback methods:
- `notifyTranscriptionComplete`: Now calls `/memoro/service/transcription-completed`
- `notifyAppendTranscriptionComplete`: Now calls `/memoro/service/append-transcription-completed`
- `storeBatchJobMetadata`: Now calls `/memoro/service/update-batch-metadata`
### Fixed
- 401 authentication errors when calling memoro service
- Callbacks no longer fail due to expired user tokens
- Service-to-service communication is now independent of user sessions
### Security
- Service role keys are never exposed to clients
- All service-to-service communication uses HTTPS
- Environment variables store sensitive credentials


@@ -0,0 +1,42 @@
FROM node:20-alpine
# Install FFmpeg 8.x from Alpine edge repository
# - Native support for iOS spatial audio 'chnl' v1 metadata box
# - Fixes: "Unsupported 'chnl' box with version 1" error
# - Install mpg123-libs from edge to avoid symbol conflicts
RUN apk add --no-cache \
--repository=https://dl-cdn.alpinelinux.org/alpine/edge/main \
--repository=https://dl-cdn.alpinelinux.org/alpine/edge/community \
ffmpeg \
mpg123-libs
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (including dev dependencies for build)
RUN npm ci
# Copy source code
COPY . .
# Build the application
RUN npm run build
# Remove dev dependencies to reduce image size
RUN npm prune --production
# Create uploads directory
RUN mkdir -p uploads
# Cloud Run uses PORT environment variable
EXPOSE ${PORT:-1337}
# Use non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nestjs -u 1001
USER nestjs
# Start the application
CMD ["npm", "run", "start:prod"]


@@ -0,0 +1,265 @@
# Enhanced Audio & Video Transcription Microservice
NestJS microservice for advanced audio and video processing with transcription. Features dual routing: fast real-time processing and enhanced Azure Batch transcription for long files.
## 🎯 What It Does
### Audio Processing
- **Receives audio file** uploads (MP3, WAV, M4A, AAC, OGG, WebM, FLAC)
- **Validates format** and file size (50MB max)
- **Converts to Azure-compatible WAV format** using FFmpeg
- **Enhanced diarization** with up to 10 speaker detection
- **Multi-language support** with automatic language identification and smart fallback
- **Uploads to Azure Blob Storage** with SAS tokens
- **Starts Azure Batch transcription** with advanced speaker processing
- **Recovery tracking** via memo metadata storage
- **Returns job ID** for tracking and recovery
### Video Processing (NEW)
- **Extracts audio from video files** (MP4, MOV, AVI, MKV, WEBM, FLV, WMV)
- **Automatic video-to-audio conversion** using FFmpeg
- **High-quality audio extraction** optimized for speech recognition
- **Supports all video formats** with audio tracks
- **Smart routing** (fast <115 min, batch ≥115 min) based on extracted audio duration
- **Full transcription pipeline** with speaker diarization
- **Progress tracking** and error handling
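The duration-based routing described above can be sketched as follows. The 115-minute threshold is taken from this README; the helper name is illustrative, not part of the service's public API:

```javascript
// Decide the transcription route based on extracted audio duration.
// Files under 115 minutes go through the fast real-time route;
// longer files go to Azure Batch transcription.
const FAST_ROUTE_LIMIT_SECONDS = 115 * 60;

function chooseRoute(durationSeconds) {
  if (!Number.isFinite(durationSeconds) || durationSeconds <= 0) {
    throw new Error(`Invalid audio duration: ${durationSeconds}`);
  }
  return durationSeconds < FAST_ROUTE_LIMIT_SECONDS ? 'fast' : 'batch';
}
```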
## 🚀 Quick Start
```bash
# Install dependencies
npm install
# Configure environment
cp .env.example .env
# Edit .env with your Azure credentials
# Start development server
npm run start:dev
# Service runs on port 1337
```
## 📡 API Endpoints
### Process Video File (NEW)
```bash
POST /audio/process-video
Content-Type: application/json
Authorization: Bearer <token>
curl -X POST http://localhost:1337/audio/process-video \
-H "Authorization: Bearer your-jwt-token" \
-H "Content-Type: application/json" \
-d '{
"videoPath": "user123/memo456/video.mp4",
"memoId": "memo456",
"userId": "user123",
"spaceId": "space789",
"recordingLanguages": ["en-US", "de-DE"],
"enableDiarization": true
}'
```
**Supported formats:** MP4, MOV, AVI, MKV, WEBM, FLV, WMV, MPEG
**Required Authentication:** Bearer JWT token
**Fields:**
- `videoPath` (required) - Supabase storage path to video file
- `memoId` (required) - Memo identifier
- `userId` (required) - User identifier
- `spaceId` (optional) - Space identifier
- `recordingLanguages` (optional) - Array of language codes
- `enableDiarization` (optional) - Enable speaker detection (default: true)
**Response:**
```json
{
"success": true,
"route": "fast",
"source": "video",
"memoId": "memo456",
"message": "Video processed and transcribed successfully via fast route"
}
```
### Upload Audio for Batch Transcription
```bash
POST /audio/transcribe
Content-Type: multipart/form-data
curl -X POST http://localhost:1337/audio/transcribe \
-F "audio=@your-audio-file.m4a" \
-F "userId=user123" \
-F "spaceId=space456"
```
**Supported formats:** MP3, WAV, M4A, AAC, OGG, WebM, FLAC
**Max file size:** 50MB
**Fields:**
- `audio` (required) - Audio file
- `userId` (optional) - User identifier
- `spaceId` (optional) - Space identifier
### Convert and Transcribe (with Supabase Integration)
```bash
POST /audio/convert-and-transcribe
Content-Type: multipart/form-data
Authorization: Bearer <token>
curl -X POST http://localhost:1337/audio/convert-and-transcribe \
-H "Authorization: Bearer your-jwt-token" \
-F "audio=@your-audio-file.m4a" \
-F "audioPath=user123/memo456/audio.m4a" \
-F "memoId=memo456" \
-F "recordingLanguages=en-US,es-ES"
```
**Required Authentication:** Bearer JWT token
**Fields:**
- `audio` (required) - Audio file
- `audioPath` (required) - Supabase storage path
- `memoId` (required) - Memo identifier
- `recordingLanguages` (optional) - Comma-separated language codes (if not provided, auto-detects from 10 common languages)
## 📊 Response Examples
### Success Response
```json
{
"status": "processing",
"type": "batch",
"jobId": "azure-batch-job-123",
"userId": "user123",
"spaceId": "space456",
"duration": 3600.5,
"message": "Batch transcription started. Webhook will notify when complete."
}
```
### Error Response
```json
{
"status": "failed",
"message": "Azure Storage credentials not configured",
"type": "batch",
"jobId": null,
"userId": "user123",
"spaceId": "space456"
}
```
## ⚙️ Configuration
Required environment variables:
```env
# Azure Configuration
AZURE_SPEECH_KEY=your-azure-speech-key
AZURE_SPEECH_REGION=swedencentral
AZURE_STORAGE_ACCOUNT_NAME=your-storage-account
AZURE_STORAGE_ACCOUNT_KEY=your-storage-key
# Supabase Configuration
SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co
SUPABASE_SERVICE_KEY=your-service-key
SUPABASE_ANON_KEY=your-anon-key
# Memoro Service Integration
MEMORO_SERVICE_URL=https://memoro-service-111768794939.europe-west3.run.app
# Server Configuration
PORT=1337
```
## 🐳 Docker
```bash
# Build image
docker build -t audio-microservice .
# Run container
docker run -p 1337:1337 --env-file .env audio-microservice
```
## 🔄 How It Works
### Enhanced Batch Transcription Route (`/audio/transcribe-from-storage`)
1. **Storage Download** → Download audio file from Supabase Storage
2. **Duration Analysis** → Calculate audio length using FFmpeg
3. **Convert** → FFmpeg converts to Azure-compatible WAV (PCM 16-bit LE, 16kHz mono)
4. **Upload** → Store in Azure Blob Storage with 6-hour SAS token
5. **Enhanced Batch Job** → Create Azure Speech batch transcription job with:
- **Advanced diarization** (up to 10 speakers)
- **Smart language identification** with fallback to 10 common languages when auto mode is used
- **Word-level timestamps**
- **Webhook callback configuration**
6. **Metadata Storage** → Store jobId in memo metadata for recovery tracking
7. **Response** → Return job ID and processing status
### Fast Transcription Route (`/audio/convert-and-transcribe-from-storage`)
1. **Authentication** → Validate Bearer JWT token
2. **Storage Download** → Download audio from Supabase Storage
3. **Duration Analysis** → Calculate audio length using FFmpeg
4. **Convert** → Convert to WAV format if needed
5. **Supabase Upload** → Store converted audio in Supabase Storage (overwrite original)
6. **Edge Function** → Call Supabase transcribe function for real-time processing
7. **Response** → Return transcription results or processing status
### Recovery System
- **Metadata Tracking** → Each batch job stores jobId in memo metadata using direct memo ID lookup (improved 2025-06-08)
- **Authentication Fixed** → Proper JWT token handling for metadata storage (fixed 2025-06-08)
- **Webhook Failure Recovery** → Planned cron job system for stuck transcriptions
- **Status Monitoring** → Integration with memoro-service for batch job tracking
## 🌍 Language Detection
The service supports intelligent language detection with two modes:
### Specific Language Mode
When `recordingLanguages` is provided, Azure will attempt to identify the language from the specified list:
```bash
# Example: Detect Spanish or English
-F "recordingLanguages=es-ES,en-US"
```
### Auto Mode (Smart Fallback)
When no `recordingLanguages` are provided, the service automatically uses a curated list of 10 common languages:
- `de-DE` (German)
- `en-GB` (English - UK)
- `fr-FR` (French)
- `it-IT` (Italian)
- `es-ES` (Spanish)
- `sv-SE` (Swedish)
- `ru-RU` (Russian)
- `nl-NL` (Dutch)
- `tr-TR` (Turkish)
- `pt-PT` (Portuguese)
This ensures reliable language detection even when the frontend is in auto mode, improving transcription accuracy across different languages.
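The fallback selection amounts to a simple precedence rule: caller-provided languages win, otherwise the curated list applies. A minimal sketch (the function name is illustrative):

```javascript
// Candidate languages Azure should consider when the caller does not
// pin recordingLanguages (auto mode). Mirrors the curated list above.
const AUTO_MODE_LANGUAGES = [
  'de-DE', 'en-GB', 'fr-FR', 'it-IT', 'es-ES',
  'sv-SE', 'ru-RU', 'nl-NL', 'tr-TR', 'pt-PT',
];

// Resolve the language candidate list for a transcription job:
// caller-provided languages win; otherwise fall back to the curated list.
function resolveCandidateLanguages(recordingLanguages) {
  if (Array.isArray(recordingLanguages) && recordingLanguages.length > 0) {
    return recordingLanguages;
  }
  return AUTO_MODE_LANGUAGES;
}
```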
## 🔧 Integration Example
```javascript
// Call from another microservice
const formData = new FormData();
formData.append('audio', audioFileBuffer);
formData.append('userId', 'user123');
formData.append('spaceId', 'space456');
const response = await fetch('http://localhost:1337/audio/transcribe', {
method: 'POST',
body: formData
});
const result = await response.json();
console.log('Job ID:', result.jobId);
```
Optimized for long audio files with Azure Batch transcription! 🎵
example response: {"status":"processing","type":"batch","jobId":"287e93a0-3065-487d-9a22-36c3cfb5e1dc","userId":"test-user","duration":2407.119819,"message":"Batch transcription started. Webhook will notify when complete."}
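Since batch jobs complete asynchronously, a caller without a webhook endpoint can poll the documented `GET /audio/batch-status/:jobId` endpoint instead. A minimal polling sketch, assuming Node 18+ (global `fetch`); the base URL, interval, and attempt limit are illustrative:

```javascript
// Poll the batch-status endpoint until the job leaves the running states.
// Status values mirror BatchStatusResponseDto: NotStarted/Running/Succeeded/Failed.
async function waitForBatchResult(baseUrl, jobId, { intervalMs = 30000, maxAttempts = 120 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`${baseUrl}/audio/batch-status/${jobId}`);
    const status = await res.json();
    if (status.status === 'Succeeded') return status.transcription;
    if (status.status === 'Failed') throw new Error(status.error || 'Batch job failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Batch job ${jobId} did not finish in time`);
}
```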
Service URL: https://audio-microservice-111768794939.europe-west3.run.app

View file

@ -0,0 +1,46 @@
#!/bin/bash
# Load environment variables from .env.deploy and deploy to Google Cloud Run
# Extract environment variables from .env.deploy (ignoring quotes and comments)
ENV_VARS=""
while IFS= read -r line || [[ -n "$line" ]]; do
# Skip empty lines and comments
if [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]]; then
continue
fi
# Extract key:value pairs, removing quotes and extra spaces
if [[ "$line" =~ ^[[:space:]]*([^:]+):[[:space:]]*\"?([^\"]*)\"?[[:space:]]*$ ]]; then
key="${BASH_REMATCH[1]// /}"
value="${BASH_REMATCH[2]}"
# Add to ENV_VARS string
if [[ -n "$ENV_VARS" ]]; then
ENV_VARS="$ENV_VARS,$key=$value"
else
ENV_VARS="$key=$value"
fi
fi
done < .env.deploy
# Default PORT if not present in .env.deploy (matches --port below)
if [[ ! "$ENV_VARS" =~ PORT= ]]; then
  ENV_VARS="$ENV_VARS,PORT=1337"
fi
echo "Deploying with environment variables..."
echo "ENV_VARS: $ENV_VARS"
# Deploy to Google Cloud Run
gcloud run deploy audio-microservice \
--source . \
--platform managed \
--region europe-west3 \
--allow-unauthenticated \
--port 1337 \
--memory 2Gi \
--cpu 2 \
--timeout 900 \
--max-instances 10 \
--set-env-vars "$ENV_VARS"

View file

@ -0,0 +1,459 @@
# API Exposure Roadmap
## Overview
This plan describes all steps required to offer the audio middleware service as a professional, public API.
**Status**: The service already IS a REST API, but still needs production-ready features for public use.
---
## What is still missing for professional API exposure?
### 🔐 1. Authentication & Authorization
**Current state**: Only a simple Bearer token check without validation
```typescript
// Current implementation in audio.controller.ts:44-46
if (!authHeader || !authHeader.startsWith('Bearer ')) {
throw new BadRequestException('Authorization token is required');
}
```
**Missing:**
- API key management system (generate, rotate, revoke API keys)
- JWT token validation (currently the token is only forwarded, not validated)
- OAuth 2.0 / OpenID Connect integration
- Different permission levels (read/write/admin)
- Service-to-service authentication
**Technologies:**
- `@nestjs/passport`
- `@nestjs/jwt`
- `passport-jwt`
---
### 📚 2. API Documentation (OpenAPI/Swagger)
**Missing:**
- `@nestjs/swagger` integration
- Automatic API docs at `/api-docs`
- DTOs with decorators for automatic validation
- Request/response examples
- Interactive API playground
**Example implementation:**
```typescript
// Currently missing:
@ApiTags('audio')
@ApiBearerAuth()
export class AudioController {
@ApiOperation({ summary: 'Process video file and transcribe' })
@ApiResponse({ status: 200, description: 'Success', type: ProcessVideoResponse })
@Post('process-video')
async processVideo(@Body() body: ProcessVideoDto) { ... }
}
```
**Technologies:**
- `@nestjs/swagger`
- `swagger-ui-express`
---
### 🛡️ 3. Rate Limiting & Throttling
**Missing:**
- Request limits per API key (e.g. 100 requests/minute)
- Throttling for resource-intensive endpoints
- `@nestjs/throttler` package
- Different limits per tier level (Free/Pro/Enterprise)
**Example:**
```typescript
@Throttle({ default: { limit: 10, ttl: 60000 } }) // 10 requests per 60 seconds
@Post('process-video')
async processVideo() { ... }
```
**Technologies:**
- `@nestjs/throttler`
- Redis for distributed rate limiting
---
### ✅ 4. Input Validation with DTOs
**Current state**: Manual validation
```typescript
if (!body.audioPath) {
throw new BadRequestException('Audio path is required');
}
```
**Better: class-validator DTOs:**
```typescript
class ProcessVideoDto {
@IsString()
@IsNotEmpty()
videoPath: string;
@IsString()
@IsNotEmpty()
memoId: string;
@IsArray()
@IsOptional()
recordingLanguages?: string[];
@IsString()
@IsOptional()
callbackUrl?: string;
}
```
**Technologies:**
- `class-validator`
- `class-transformer`
---
### 📊 5. Monitoring, Logging & Analytics
**Missing:**
- Structured request/response logging
- API usage statistics per API key
- Performance metrics (latency, success rate)
- Error tracking (e.g. Sentry integration)
- Dashboard for API health monitoring
**Features:**
- Structured JSON logging
- Request-ID tracking across all services
- Performance metrics (P50, P95, P99 latency)
- Error-rate monitoring
- API usage analytics
**Technologies:**
- `winston` or `pino` for logging
- Sentry for error tracking
- Prometheus + Grafana for metrics
- Google Cloud Monitoring
---
### 🔢 6. API Versioning
**Missing:**
```typescript
// Example:
@Controller('v1/audio') // Version 1
@Controller('v2/audio') // Version 2 with breaking changes
```
**Best practices:**
- URL-based versioning (`/v1/audio`, `/v2/audio`)
- Sunset headers for deprecated endpoints
- Migration guide between versions
---
### 💰 7. Quotas & Billing
**Missing:**
- Usage limits (transcription minutes per month)
- Cost calculation based on usage
- Billing integration (Stripe, etc.)
- Quota monitoring and alerts
- Usage-based pricing models
**Features:**
- Free tier: 100 minutes/month
- Pro tier: 1000 minutes/month
- Enterprise: custom limits
- Real-time usage monitoring
**Technologies:**
- Stripe for billing
- Redis/PostgreSQL for quota tracking
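The tier limits proposed here reduce to a simple quota check before starting a job. A minimal sketch with illustrative names (not existing service code):

```javascript
// Monthly transcription-minute quotas per tier, as proposed above.
const TIER_LIMITS = { free: 100, pro: 1000 };

// Check whether a user may start another transcription.
// Enterprise tiers carry a custom limit instead of a fixed one.
function checkQuota(tier, usedMinutes, requestedMinutes, customLimit) {
  const limit = tier === 'enterprise' ? customLimit : TIER_LIMITS[tier];
  if (limit === undefined) throw new Error(`Unknown tier: ${tier}`);
  const remaining = limit - usedMinutes;
  return {
    allowed: requestedMinutes <= remaining,
    remainingMinutes: Math.max(0, remaining),
  };
}
```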
---
### 🔄 8. Webhook Management
**Current state**: Webhooks are sent, but:
- No interface for registering/managing webhooks
- No webhook retry logic with exponential backoff
- No webhook event log
- No webhook signature validation
**Missing:**
- Webhook registration API
- Retry mechanism (3 retries with backoff)
- Webhook event history
- HMAC signature for security
- Webhook testing tools
---
### 📦 9. SDKs & Client Libraries
**Missing:**
- JavaScript/TypeScript SDK
- Python SDK
- Java SDK
- Go SDK
- Code examples for various languages
**Example TypeScript SDK:**
```typescript
import { AudioAPI } from '@memo/audio-api';
const client = new AudioAPI({ apiKey: 'your-api-key' });
const result = await client.processVideo({
videoPath: 'gs://bucket/video.mp4',
memoId: 'memo-123',
recordingLanguages: ['de-DE']
});
```
---
### 🌐 10. Developer Portal
**Missing:**
- Self-service API key generation
- Interactive API documentation
- Code examples and tutorials
- Usage statistics dashboard
- Support/ticketing system
- Changelog and release notes
**Features:**
- User registration and login
- API key management (create, rotate, delete)
- Live API testing playground
- Usage dashboard with charts
- Billing overview
---
### 🔒 11. Security Headers & CORS
**Current state**: `app.enableCors()` (too permissive)
**Better:**
```typescript
app.enableCors({
origin: process.env.ALLOWED_ORIGINS?.split(','),
methods: ['POST', 'GET'],
credentials: true,
maxAge: 3600
});
// Helmet.js for security headers
app.use(helmet({
contentSecurityPolicy: true,
hsts: true,
noSniff: true
}));
```
**Additional security:**
- HTTPS only
- API key encryption at rest
- Request signing for sensitive operations
- IP whitelisting (optional)
**Technologies:**
- `helmet`
- `@nestjs/cors`
---
## 📋 Prioritized Implementation Roadmap
### Phase 1: Baseline Hardening
**Goal**: Minimal production-ready API
**Tasks:**
1. ✅ Implement DTOs with class-validator
   - ProcessVideoDto
   - TranscribeDto
   - ConvertAndTranscribeDto
   - Response DTOs
2. ✅ API key authentication
   - API key generation
   - API key validation
   - Database schema for keys
3. ✅ Rate limiting
   - `@nestjs/throttler` setup
   - Per-endpoint limits
   - Redis integration for distributed limiting
4. ✅ Swagger documentation
   - `@nestjs/swagger` setup
   - Controller decorators
   - DTO documentation
   - API docs at `/api-docs`
**Estimated effort**: 1-2 weeks
---
### Phase 2: Professional Features
**Goal**: Production-grade monitoring & security
**Tasks:**
5. ✅ API versioning
   - v1/audio endpoints
   - Document the versioning strategy
6. ✅ Structured logging
   - Winston/Pino integration
   - Request-ID tracking
   - Structured log formats
7. ✅ Error tracking
   - Sentry integration
   - Error categorization
   - Alert configuration
8. ✅ CORS configuration
   - Environment-based origin list
   - Helmet.js integration
9. ✅ Webhook retry logic
   - Exponential backoff
   - Retry limits
   - Event logging
**Estimated effort**: 2-3 weeks
---
### Phase 3: Enterprise Features
**Goal**: Complete API product
**Tasks:**
10. ✅ Developer portal
    - Frontend development
    - User management
    - API key management UI
    - Usage dashboard
11. ✅ SDKs
    - TypeScript SDK
    - Python SDK
    - Code generators
12. ✅ Quotas & billing
    - Quota system
    - Stripe integration
    - Usage metering
13. ✅ Webhook management API
    - Registration
    - Testing tools
    - Event history
14. ✅ Performance monitoring
    - Prometheus metrics
    - Grafana dashboards
    - Alerting
**Estimated effort**: 4+ weeks
---
## 🎯 Quick Wins (immediately actionable)
These features can be implemented quickly and deliver immediate value:
1. **Swagger documentation** (1-2 days)
   - Quick overview for developers
   - Interactive testing
2. **DTOs with validation** (2-3 days)
   - Better error messages
   - Automatic validation
3. **Rate limiting** (1 day)
   - Protection against abuse
   - Simple to implement
4. **Structured logging** (1-2 days)
   - Better debugging
   - Production monitoring
---
## 📚 Additional Recommendations
### Performance optimizations
- Response caching for frequent requests
- Database connection pooling
- Background job queue for long-running processes
### Testing
- Unit tests for all services
- Integration tests for API endpoints
- Load testing for performance validation
### Documentation
- API reference documentation
- Getting-started guide
- Code examples for all endpoints
- Troubleshooting guide
### Compliance
- GDPR compliance (audio data)
- Data deletion policies
- Audit logs for compliance
---
## 🔧 Required Dependencies (Phase 1)
```json
{
"dependencies": {
"@nestjs/swagger": "^7.1.0",
"@nestjs/throttler": "^5.0.0",
"@nestjs/passport": "^10.0.0",
"@nestjs/jwt": "^10.1.0",
"class-validator": "^0.14.0",
"class-transformer": "^0.5.1",
"helmet": "^7.0.0",
"passport-jwt": "^4.0.1",
"bcrypt": "^5.1.1"
},
"devDependencies": {
"@types/passport-jwt": "^3.0.9",
"@types/bcrypt": "^5.0.0"
}
}
```
---
## 💡 Next Steps
Which aspect should be implemented first?
**Recommendation**: Start with Phase 1, tasks 1-4 (baseline hardening)
1. DTOs & validation
2. Swagger documentation
3. Rate limiting
4. API key authentication
This creates a solid foundation for all further features.

View file

@ -0,0 +1,5 @@
{
"$schema": "https://json.schemastore.org/nest-cli",
"collection": "@nestjs/schematics",
"sourceRoot": "src"
}

View file

@ -0,0 +1,37 @@
{
"name": "@memoro/audio-backend",
"version": "1.0.0",
"description": "Simple microservice for audio transcription with batch routing",
"main": "dist/main.js",
"scripts": {
"build": "nest build",
"start": "nest start",
"start:dev": "nest start --watch",
"start:prod": "node dist/main"
},
"dependencies": {
"@azure/storage-blob": "^12.17.0",
"@nestjs/common": "^10.0.0",
"@nestjs/config": "^3.0.0",
"@nestjs/core": "^10.0.0",
"@nestjs/platform-express": "^10.0.0",
"@nestjs/swagger": "^7.4.2",
"@nestjs/throttler": "^5.2.0",
"@supabase/supabase-js": "^2.41.0",
"class-transformer": "^0.5.1",
"class-validator": "^0.14.3",
"fluent-ffmpeg": "^2.1.2",
"helmet": "^8.1.0",
"multer": "^1.4.5-lts.1",
"reflect-metadata": "^0.1.13",
"rxjs": "^7.8.1",
"swagger-ui-express": "^5.0.1"
},
"devDependencies": {
"@nestjs/cli": "^10.0.0",
"@types/fluent-ffmpeg": "^2.1.21",
"@types/multer": "^1.4.7",
"@types/node": "^20.3.1",
"typescript": "^5.1.3"
}
}

View file

@ -0,0 +1,43 @@
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { MulterModule } from '@nestjs/platform-express';
import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';
import { APP_GUARD } from '@nestjs/core';
import { AudioController } from './audio.controller';
import { AudioService } from './audio.service';
@Module({
imports: [
ConfigModule.forRoot({ isGlobal: true }),
MulterModule.register({
dest: './uploads',
limits: { fileSize: 500 * 1024 * 1024 }, // 500MB
}),
ThrottlerModule.forRoot([
{
name: 'short',
ttl: 1000, // 1 second
limit: 3, // 3 requests per second
},
{
name: 'medium',
ttl: 60000, // 1 minute
limit: 20, // 20 requests per minute
},
{
name: 'long',
ttl: 3600000, // 1 hour
limit: 100, // 100 requests per hour
},
]),
],
controllers: [AudioController],
providers: [
AudioService,
{
provide: APP_GUARD,
useClass: ThrottlerGuard,
},
],
})
export class AppModule {}

View file

@ -0,0 +1,205 @@
import {
Controller,
Post,
Get,
Body,
Param,
BadRequestException,
Logger,
Headers,
} from '@nestjs/common';
import { ApiTags, ApiOperation, ApiResponse, ApiBearerAuth, ApiParam } from '@nestjs/swagger';
import { AudioService } from './audio.service';
import {
TranscribeRealtimeDto,
TranscribeFromStorageDto,
ProcessVideoDto,
TranscriptionResponseDto,
BatchStatusResponseDto,
} from './dto';
@ApiTags('Audio Transcription')
@ApiBearerAuth()
@Controller('audio')
export class AudioController {
private readonly logger = new Logger(AudioController.name);
constructor(private readonly audioService: AudioService) {}
@Post('transcribe-realtime')
@ApiOperation({
summary: 'Transcribe audio file in real-time',
description:
'Process and transcribe audio files using real-time transcription with automatic fallback to batch processing for longer files (>115 minutes). Supports speaker diarization and multi-language detection.',
})
@ApiResponse({
status: 200,
description: 'Transcription completed successfully',
type: TranscriptionResponseDto,
})
@ApiResponse({
status: 400,
description: 'Bad request - invalid input parameters',
})
async transcribeRealtime(
@Body() body: TranscribeRealtimeDto,
@Headers('authorization') authHeader?: string
) {
if (!authHeader || !authHeader.startsWith('Bearer ')) {
throw new BadRequestException('Authorization token is required');
}
const token = authHeader.replace('Bearer ', '');
this.logger.log(`Starting fast transcription: ${body.audioPath} for memo ${body.memoId}`);
try {
const result = await this.audioService.transcribeRealtimeWithFallback(
body.audioPath,
body.memoId,
body.userId,
body.spaceId,
body.recordingLanguages || [],
token,
body.enableDiarization,
body.isAppend,
body.recordingIndex
);
return result;
} catch (error) {
this.logger.error('Error in transcribe-realtime with fallback:', error);
throw new BadRequestException(
`Transcription failed after all fallback attempts: ${error.message}`
);
}
}
@Post('transcribe-from-storage')
@ApiOperation({
summary: 'Transcribe audio from cloud storage',
description:
'Process audio files directly from cloud storage paths. Supports both Google Cloud Storage (gs://) and Supabase storage paths.',
})
@ApiResponse({
status: 200,
description: 'Transcription completed successfully',
type: TranscriptionResponseDto,
})
@ApiResponse({
status: 400,
description: 'Bad request - invalid input parameters',
})
async transcribeFromStorage(
@Body() body: TranscribeFromStorageDto,
@Headers('authorization') authHeader?: string
) {
if (!authHeader || !authHeader.startsWith('Bearer ')) {
throw new BadRequestException('Authorization token is required');
}
const token = authHeader.replace('Bearer ', '');
this.logger.log(`Processing audio from storage: ${body.audioPath}`);
try {
// Process audio using storage path
const result = await this.audioService.processAudioFromStorage(
body.audioPath,
body.userId,
body.spaceId,
body.recordingLanguages,
token,
body.memoId,
body.enableDiarization
);
return result;
} catch (error) {
this.logger.error('Error in transcribe-from-storage:', error);
throw new BadRequestException(`Transcription failed: ${error.message}`);
}
}
@Get('batch-status/:jobId')
@ApiOperation({
summary: 'Check batch transcription job status',
description:
'Check the status and retrieve results of a batch transcription job. Used for long audio files that are processed asynchronously.',
})
@ApiParam({
name: 'jobId',
description: 'Batch transcription job ID',
example: 'batch-job-12345',
})
@ApiResponse({
status: 200,
description: 'Job status retrieved successfully',
type: BatchStatusResponseDto,
})
@ApiResponse({
status: 400,
description: 'Bad request - invalid job ID',
})
async checkBatchStatus(
@Param('jobId') jobId: string,
@Headers('authorization') authHeader?: string
) {
if (!jobId) {
throw new BadRequestException('Job ID is required');
}
this.logger.log(`Checking batch transcription status for job: ${jobId}`);
try {
const result = await this.audioService.checkBatchTranscriptionStatus(jobId);
return result;
} catch (error) {
this.logger.error('Error checking batch status:', error);
throw new BadRequestException(`Status check failed: ${error.message}`);
}
}
@Post('process-video')
@ApiOperation({
summary: 'Process video file and transcribe audio',
description:
'Extract audio from video files and transcribe automatically. Supports multiple video formats (MP4, MOV, AVI, MKV, WEBM, FLV, WMV) with automatic format detection and conversion.',
})
@ApiResponse({
status: 200,
description: 'Video processing and transcription completed successfully',
type: TranscriptionResponseDto,
})
@ApiResponse({
status: 400,
description: 'Bad request - invalid input parameters',
})
async processVideo(@Body() body: ProcessVideoDto, @Headers('authorization') authHeader?: string) {
if (!authHeader || !authHeader.startsWith('Bearer ')) {
throw new BadRequestException('Authorization token is required');
}
const token = authHeader.replace('Bearer ', '');
this.logger.log(`Processing video file: ${body.videoPath} for memo ${body.memoId}`);
try {
const result = await this.audioService.processVideoFile(
body.videoPath,
body.memoId,
body.userId,
body.spaceId,
body.recordingLanguages || [],
token,
body.enableDiarization,
body.isAppend,
body.recordingIndex
);
return result;
} catch (error) {
this.logger.error('Error processing video:', error);
throw new BadRequestException(`Video processing failed: ${error.message}`);
}
}
}

File diff suppressed because it is too large

View file

@ -0,0 +1,4 @@
export * from './transcribe-realtime.dto';
export * from './transcribe-from-storage.dto';
export * from './process-video.dto';
export * from './transcription-response.dto';

View file

@ -0,0 +1,71 @@
import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean, IsNumber } from 'class-validator';
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
export class ProcessVideoDto {
@ApiProperty({
description: 'Path to the video file in cloud storage (gs:// or supabase path)',
example: 'gs://bucket-name/videos/recording.mp4',
})
@IsString()
@IsNotEmpty()
videoPath: string;
@ApiProperty({
description: 'Unique identifier for the memo',
example: '123e4567-e89b-12d3-a456-426614174000',
})
@IsString()
@IsNotEmpty()
memoId: string;
@ApiProperty({
description: 'User ID who owns this transcription',
example: 'user-123',
})
@IsString()
@IsNotEmpty()
userId: string;
@ApiPropertyOptional({
description: 'Space/workspace ID for organization',
example: 'space-456',
})
@IsString()
@IsOptional()
spaceId?: string;
@ApiPropertyOptional({
description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])',
example: ['de-DE', 'en-US'],
type: [String],
})
@IsArray()
@IsOptional()
recordingLanguages?: string[];
@ApiPropertyOptional({
description: 'Enable speaker diarization (speaker separation)',
example: true,
default: false,
})
@IsBoolean()
@IsOptional()
enableDiarization?: boolean;
@ApiPropertyOptional({
description: 'Append to existing transcription instead of replacing',
example: false,
default: false,
})
@IsBoolean()
@IsOptional()
isAppend?: boolean;
@ApiPropertyOptional({
description: 'Index of the recording in a multi-recording session',
example: 0,
})
@IsNumber()
@IsOptional()
recordingIndex?: number;
}

View file

@ -0,0 +1,54 @@
import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean } from 'class-validator';
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
export class TranscribeFromStorageDto {
@ApiProperty({
description: 'Path to the audio file in cloud storage',
example: 'gs://bucket-name/audio/recording.mp3',
})
@IsString()
@IsNotEmpty()
audioPath: string;
@ApiProperty({
description: 'User ID who owns this transcription',
example: 'user-123',
})
@IsString()
@IsNotEmpty()
userId: string;
@ApiPropertyOptional({
description: 'Space/workspace ID for organization',
example: 'space-456',
})
@IsString()
@IsOptional()
spaceId?: string;
@ApiPropertyOptional({
description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])',
example: ['de-DE', 'en-US'],
type: [String],
})
@IsArray()
@IsOptional()
recordingLanguages?: string[];
@ApiPropertyOptional({
description: 'Unique identifier for the memo (optional for this endpoint)',
example: '123e4567-e89b-12d3-a456-426614174000',
})
@IsString()
@IsOptional()
memoId?: string;
@ApiPropertyOptional({
description: 'Enable speaker diarization (speaker separation)',
example: true,
default: false,
})
@IsBoolean()
@IsOptional()
enableDiarization?: boolean;
}

View file

@ -0,0 +1,71 @@
import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean, IsNumber } from 'class-validator';
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
export class TranscribeRealtimeDto {
@ApiProperty({
description: 'Path to the audio file in cloud storage (gs:// or supabase path)',
example: 'gs://bucket-name/audio/recording.mp3',
})
@IsString()
@IsNotEmpty()
audioPath: string;
@ApiProperty({
description: 'Unique identifier for the memo',
example: '123e4567-e89b-12d3-a456-426614174000',
})
@IsString()
@IsNotEmpty()
memoId: string;
@ApiProperty({
description: 'User ID who owns this transcription',
example: 'user-123',
})
@IsString()
@IsNotEmpty()
userId: string;
@ApiPropertyOptional({
description: 'Space/workspace ID for organization',
example: 'space-456',
})
@IsString()
@IsOptional()
spaceId?: string;
@ApiPropertyOptional({
description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])',
example: ['de-DE', 'en-US'],
type: [String],
})
@IsArray()
@IsOptional()
recordingLanguages?: string[];
@ApiPropertyOptional({
description: 'Enable speaker diarization (speaker separation)',
example: true,
default: false,
})
@IsBoolean()
@IsOptional()
enableDiarization?: boolean;
@ApiPropertyOptional({
description: 'Append to existing transcription instead of replacing',
example: false,
default: false,
})
@IsBoolean()
@IsOptional()
isAppend?: boolean;
@ApiPropertyOptional({
description: 'Index of the recording in a multi-recording session',
example: 0,
})
@IsNumber()
@IsOptional()
recordingIndex?: number;
}

View file

@ -0,0 +1,105 @@
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
export class TranscriptionSegment {
@ApiProperty({
description: 'Text content of the segment',
example: 'Hello, this is a test recording.',
})
text: string;
@ApiPropertyOptional({
description: 'Start time of the segment in seconds',
example: 0.5,
})
start?: number;
@ApiPropertyOptional({
description: 'End time of the segment in seconds',
example: 3.2,
})
end?: number;
@ApiPropertyOptional({
description: 'Speaker identifier (when diarization is enabled)',
example: 'Speaker 1',
})
speaker?: string;
@ApiPropertyOptional({
description: 'Confidence score of the transcription',
example: 0.95,
})
confidence?: number;
}
export class TranscriptionResponseDto {
@ApiProperty({
description: 'Full transcription text',
example: 'Hello, this is a test recording. How are you today?',
})
text: string;
@ApiPropertyOptional({
description: 'Individual transcription segments with timing',
type: [TranscriptionSegment],
})
segments?: TranscriptionSegment[];
@ApiPropertyOptional({
description: 'Detected language of the audio',
example: 'de-DE',
})
language?: string;
@ApiPropertyOptional({
description: 'Duration of the audio in seconds',
example: 125.5,
})
duration?: number;
@ApiProperty({
description: 'Status of the transcription',
example: 'success',
enum: ['success', 'processing', 'failed'],
})
status: string;
@ApiPropertyOptional({
description: 'Job ID for batch transcriptions (for long audio files)',
example: 'batch-job-12345',
})
jobId?: string;
@ApiPropertyOptional({
description: 'Error message if transcription failed',
example: 'Audio file not found',
})
error?: string;
}
export class BatchStatusResponseDto {
@ApiProperty({
description: 'Current status of the batch job',
example: 'Succeeded',
enum: ['NotStarted', 'Running', 'Succeeded', 'Failed'],
})
status: string;
@ApiPropertyOptional({
description: 'Transcription result (available when status is Succeeded)',
type: TranscriptionResponseDto,
})
transcription?: TranscriptionResponseDto;
@ApiPropertyOptional({
description: 'Error details if the job failed',
example: 'Transcription service timeout',
})
error?: string;
@ApiPropertyOptional({
description: 'Progress percentage (0-100)',
example: 75,
})
progress?: number;
}

View file

@ -0,0 +1,98 @@
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { json, urlencoded } from 'express';
import { Logger, ValidationPipe } from '@nestjs/common';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import helmet from 'helmet';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
const logger = new Logger('Bootstrap');
// Add security headers with Helmet
app.use(
helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"], // For Swagger UI
scriptSrc: ["'self'", "'unsafe-inline'"], // For Swagger UI
imgSrc: ["'self'", 'data:', 'https:'], // For Swagger UI
},
},
crossOriginEmbedderPolicy: false, // Disable for Swagger UI compatibility
})
);
// Add request size logging middleware
app.use((req, res, next) => {
const contentLength = req.headers['content-length'];
if (contentLength && parseInt(contentLength) > 100000) {
// Log requests > 100KB
logger.log(`Large request detected: ${contentLength} bytes to ${req.url}`);
}
next();
});
// Configure body parser limits for large JSON payloads
app.use(
json({
limit: '50mb',
verify: (req, res, buf, encoding) => {
if (buf.length > 50 * 1024 * 1024) {
logger.error(`JSON payload too large: ${buf.length} bytes`);
throw new Error('Payload too large');
}
},
})
);
app.use(urlencoded({ extended: true, limit: '50mb' }));
// Enable CORS
app.enableCors({
origin: process.env.ALLOWED_ORIGINS?.split(',') || '*',
methods: ['GET', 'POST'],
credentials: true,
});
// Enable global validation pipe
app.useGlobalPipes(
new ValidationPipe({
whitelist: true, // Strip properties that don't have decorators
forbidNonWhitelisted: true, // Throw error if non-whitelisted properties are present
transform: true, // Automatically transform payloads to DTO instances
transformOptions: {
enableImplicitConversion: true, // Allow automatic type conversion
},
})
);
// Swagger API Documentation
const config = new DocumentBuilder()
.setTitle('Audio Transcription API')
.setDescription(
'Professional API for audio and video transcription with Azure Speech Services. Supports real-time and batch processing, speaker diarization, and multi-language detection.'
)
.setVersion('1.0')
.addBearerAuth({
type: 'http',
scheme: 'bearer',
bearerFormat: 'JWT',
description: 'Enter your Bearer token',
})
.addTag('Audio Transcription', 'Endpoints for audio and video transcription')
.build();
const document = SwaggerModule.createDocument(app, config);
SwaggerModule.setup('api-docs', app, document, {
customSiteTitle: 'Audio Transcription API - Documentation',
customCss: '.swagger-ui .topbar { display: none }',
});
const port = process.env.PORT || 1337;
await app.listen(port, '0.0.0.0');
console.log(`🎵 Audio Transcription Microservice running on port ${port}`);
}
bootstrap();

View file

@ -0,0 +1,9 @@
-- Storage policy to allow service role to download audio files for processing
-- This is needed for the audio microservice to access user-uploaded files
-- Allow service role to SELECT (download) files from user-uploads bucket
CREATE POLICY "Service role can download files for processing"
ON storage.objects
FOR SELECT
TO service_role
USING (bucket_id = 'user-uploads');

View file

@ -0,0 +1,23 @@
{
"compilerOptions": {
"module": "commonjs",
"declaration": true,
"removeComments": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"allowSyntheticDefaultImports": true,
"target": "ES2021",
"sourceMap": true,
"outDir": "./dist",
"baseUrl": "./",
"incremental": true,
"skipLibCheck": true,
"strictNullChecks": false,
"noImplicitAny": false,
"strictBindCallApply": false,
"forceConsistentCasingInFileNames": false,
"noFallthroughCasesInSwitch": false
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}

View file

@ -0,0 +1,11 @@
#!/bin/bash
# Update audio-microservice environment variables with correct Supabase credentials
echo "🔧 Updating audio-microservice environment variables..."
gcloud run services update audio-microservice \
--region=europe-west3 \
--set-env-vars=MEMORO_SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co,MEMORO_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTMxODA4MTcsImV4cCI6MjAyODc1NjgxN30.xfBwgNLkgwW0aJkUCIQM9FBwbqWE8K7ynI-zUY0oOr8,MEMORO_SERVICE_URL=https://memoro-service-111768794939.europe-west3.run.app
echo "✅ Environment variables updated!"
echo "🚀 Audio microservice should now be able to access Supabase Storage"

View file

@ -0,0 +1,39 @@
# Dependencies
node_modules
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Environment files - these should come from Cloud Run secrets
.env
.env.*
env.example
# Test files
*.spec.ts
*.spec.js
test
jest.config.js
# Development files
.git
.gitignore
README.md
*.md
# Build artifacts
dist
# IDE files
.vscode
.idea
*.swp
*.swo
# OS files
.DS_Store
Thumbs.db
# Temporary files
*.tmp
*.temp

View file

@ -0,0 +1,24 @@
# Server Configuration
PORT=3001
NODE_ENV=development
# Service URLs
#MANA_SERVICE_URL=https://mana-core-middleware-111768794939.europe-west3.run.app
MANA_SERVICE_URL=http://localhost:3000
BATCH_TRANSCRIPTION_SERVICE_URL=http://localhost:1337
AUDIO_MICROSERVICE_URL=http://localhost:1337
# App Configuration
MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8
# Memoro Supabase Configuration
MEMORO_SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co
MEMORO_SUPABASE_ANON_KEY=sb_publishable_HlAZpB4BxXaMcfOCNx6VJA_-64NTxu4
MEMORO_SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0NTg1MTQxNiwiZXhwIjoyMDYxNDI3NDE2fQ.-6hArOVoEgGwIwdjclLQCTOAu13BFYnp9hPxQks4JPM
# Also accept SUPABASE_SERVICE_KEY for compatibility with audio microservice
SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0NTg1MTQxNiwiZXhwIjoyMDYxNDI3NDE2fQ.-6hArOVoEgGwIwdjclLQCTOAu13BFYnp9hPxQks4JPM
# Mana Core service key for service-to-service auth
MANA_SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InNtZW51ZWx6c2twaG5waGFhZXRwIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0MjA3NzYwMiwiZXhwIjoyMDU3NjUzNjAyfQ.guxCZQNZo4jM8M9kDA2MxDc1o78VSOuCLmVULnDCVnQ

9
apps/memoro/apps/backend/.gitignore vendored Normal file
View file

@ -0,0 +1,9 @@
node_modules
dist
.env
# Testing
coverage
.nyc_output
*.lcov

View file

@ -0,0 +1,143 @@
# Memoro Service - Branding Configuration
**Updated**: 2025-11-05
---
## Hardcoded Memoro Branding
The Memoro service has **hardcoded branding** that is automatically applied to all signup confirmation emails. This ensures consistent branding across all Memoro signups without needing environment variables.
### Branding Details
**Location**: `src/auth-proxy/auth-proxy.service.ts:113-123`
```typescript
const memoroBranding: BrandingConfig = {
appName: 'Memoro',
logoUrl: 'memoro-logo.png',
primaryColor: '#F8D62B',
secondaryColor: '#f5c500',
websiteUrl: 'https://memoro.ai',
taglineDe: 'Sprechen statt Tippen',
taglineEn: 'Speak Instead of Type',
copyright: '© 2025 Memoro · Made with 💛 in Germany'
};
```
### Logo
**File**: `memoro-logo.png`
**Storage URL**: https://smenuelzskphnphaaetp.supabase.co/storage/v1/object/public/satellites-logos/memoro-logo.png
**Note**: PNG format is required for email compatibility. Gmail and most email clients block SVG images for security reasons.
The logo is stored in Supabase Storage and referenced by filename only. Mana Core automatically builds the full URL.
### Redirect URL
**URL**: https://app.manacore.ai/welcome?appName=memoro
After email confirmation, users are redirected to the centralized welcome page with Memoro-specific branding (blue theme, voice recording features).
---
## How It Works
1. **Every signup** automatically includes Memoro branding
2. **No configuration needed** - branding is built into the code
3. **Can be overridden** - If needed, pass `metadata.branding` in signup payload
4. **Merges with custom** - If partial branding provided, merges with defaults
### Merging Behavior
```typescript
// Standard signup - uses all Memoro defaults
POST /auth/signup
{ email, password, deviceInfo }
→ Email has full Memoro branding
// Partial override - merges with defaults
POST /auth/signup
{
email, password, deviceInfo,
metadata: { branding: { logoUrl: 'special-logo.svg' } }
}
→ Email has special logo, but keeps Memoro colors, taglines, etc.
// Full override - replaces all branding
POST /auth/signup
{
email, password, deviceInfo,
metadata: { branding: { /* complete custom branding */ } }
}
→ Email uses completely custom branding
```
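The shallow-merge semantics above can be sketched as a spread over the hardcoded defaults. This is a minimal illustration, not the service's actual code: `BrandingConfig` is trimmed to three fields, and `mergeBranding` is an assumed helper name.

```typescript
// Sketch of the merging behavior: fields in metadata.branding
// override the hardcoded Memoro defaults field by field.
interface BrandingConfig {
  appName: string;
  logoUrl: string;
  primaryColor: string;
}

const memoroDefaults: BrandingConfig = {
  appName: 'Memoro',
  logoUrl: 'memoro-logo.png',
  primaryColor: '#F8D62B',
};

function mergeBranding(override?: Partial<BrandingConfig>): BrandingConfig {
  // Spread order: defaults first, then any provided overrides win
  return { ...memoroDefaults, ...(override ?? {}) };
}
```

A partial override (e.g. just `logoUrl`) therefore keeps the Memoro name and colors, matching the "Partial override" case above.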
---
## Why Hardcoded?
**Consistency** - All Memoro signups look the same
**Simplicity** - No environment variables to manage
**Reliability** - Can't accidentally break branding with config errors
**Version Control** - Branding changes are tracked in git
---
## To Change Branding
If you need to update Memoro branding:
1. **Edit the file**: `src/auth-proxy/auth-proxy.service.ts`
2. **Update the values**: Lines 113-123
3. **Rebuild and deploy**: `npm run build && deploy`
**Example**:
```typescript
// Update copyright year
copyright: '© 2026 Memoro · Made with 💛 in Germany'
// Update colors
primaryColor: '#FF5733',
secondaryColor: '#C70039',
```
---
## Testing
To test branding locally:
```bash
# Start services
cd mana-core-middleware && npm run start:dev # Port 3003
cd memoro-service && npm run start:dev # Port 3001
# Test signup
curl -X POST 'http://localhost:3001/auth/signup' \
-H 'Content-Type: application/json' \
-d '{
"email": "test@example.com",
"password": "SecurePass123!",
"deviceInfo": {
"deviceId": "test-1",
"deviceName": "Test",
"deviceType": "web"
}
}'
# Check confirmation email for Memoro branding
```
See `LOCAL_SIGNUP_TEST_GUIDE.md` for detailed testing instructions.
---
## Related Files
- **Branding Interface**: `src/auth-proxy/interfaces/branding.interface.ts`
- **Auth Service**: `src/auth-proxy/auth-proxy.service.ts`
- **Auth Controller**: `src/auth-proxy/auth-proxy.controller.ts`
- **Documentation**: `SIGNUP_BRANDING.md`
- **Test Guide**: `../LOCAL_SIGNUP_TEST_GUIDE.md`

View file

@ -0,0 +1,331 @@
# Memoro Service - Claude Development Notes
## Enhanced Audio Processing Architecture
### Direct Storage Upload Strategy
- Audio files are uploaded directly to Supabase Storage from the frontend
- This bypasses Cloud Run's 32MB file size limit
- Memoro service then processes the uploaded file via `POST /memoro/process-uploaded-audio`
### Dual-Path Transcription System
**Smart Routing based on duration:**
- **Fast Transcription** (<115 minutes): Real-time Azure Speech Service
- **Batch Transcription** (≥115 minutes): Azure Speech Service with enhanced processing
### Enhanced Audio Format Fallback Strategy
The service implements a robust 4-tier fallback strategy with comprehensive error handling:
1. **Fast Transcribe (Primary)** - Direct transcription via Azure Speech Service
2. **Format Conversion + Retry** - Auto-detects format errors and converts via audio-microservice
3. **Batch Processing Fallback** - Uses enhanced batch processing if conversion fails
4. **Intelligent Error Detection** - Automatically identifies Azure Speech format issues
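The smart routing above reduces to a threshold check on duration. The 115-minute cutoff is from this section; the function and type names below are illustrative, not the service's actual API:

```typescript
// Duration-based routing sketch: <115 minutes → fast (real-time)
// transcription, otherwise batch processing.
const FAST_TRANSCRIBE_LIMIT_MINUTES = 115;

type TranscriptionPath = 'fast' | 'batch';

function chooseTranscriptionPath(durationSeconds: number): TranscriptionPath {
  return durationSeconds / 60 < FAST_TRANSCRIBE_LIMIT_MINUTES ? 'fast' : 'batch';
}
```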
### Speaker Diarization Fix (2025-06-09)
**Critical Issue Resolved:**
- **Problem**: Azure Fast Transcription API diarization configuration was incorrect, causing 0/149 phrases to have speaker data
- **Root Cause**: Used incorrect `diarization.speakers.maxCount` instead of `diarization.maxSpeakers`
- **Solution**: Updated to correct Azure API format: `diarization: { enabled: true, maxSpeakers: 10 }`
- **Result**: Now 216/216 phrases have proper speaker data with complete utterances, speakers, and speakerMap
- **Request Size Fix**: Increased body parser limit to 200MB to handle very large transcriptions with extensive speaker data (fixed 413 errors)
### Batch Transcription Enhancements (NEW)
**Advanced Features:**
- **Enhanced Diarization**: Up to 10 speakers (vs 2 in basic mode)
- **Multi-language Detection**: Automatic identification from user preferences
- **Complete Speaker Data**: Same structure as fast transcription (utterances, speakers, speakerMap)
- **Recovery Tracking**: Stores Azure jobId for webhook failure recovery
- **Language Consistency**: Primary language detection and multi-language support
**Recovery System Foundation:**
- **Metadata Storage**: Each batch job stores jobId in memo metadata via `/update-batch-metadata`
- **Memo ID Based Lookup**: Direct memo ID lookup for reliable metadata updates (fixed 2025-06-08)
- **Authentication Fixed**: Proper JWT token passing between services (fixed 2025-06-08)
- **Recovery Ready**: Infrastructure for cron-based recovery system
- **Webhook Failure Handling**: Planned automatic recovery for stuck transcriptions
### Error Detection Patterns
The system detects audio format errors by checking for:
- "audio format", "audio stream could not be decoded"
- "InvalidAudioFormat", "UnprocessableEntity"
- "audio/x-m4a", "422" status codes
- Azure Speech Service specific error messages
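A hedged sketch of that substring matching, using the pattern strings listed above verbatim (the function name and case-insensitive matching are assumptions, not the real implementation):

```typescript
// Detects Azure Speech format errors by substring match against
// the known error patterns from this section.
const FORMAT_ERROR_PATTERNS = [
  'audio format',
  'audio stream could not be decoded',
  'InvalidAudioFormat',
  'UnprocessableEntity',
  'audio/x-m4a',
  '422',
];

function isAudioFormatError(message: string): boolean {
  const normalized = message.toLowerCase();
  return FORMAT_ERROR_PATTERNS.some((p) => normalized.includes(p.toLowerCase()));
}
```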
### Processing Routes
- `fast_transcribe` - Direct success
- `fast_transcribe_converted` - Success after format conversion
- `batch_transcribe` - Enhanced batch processing for long files (NEW)
- `batch_transcribe_fallback` - Success via batch processing fallback
## Memo Creation Flow (Updated 2025-06-26)
### Enhanced Memo Response
The `createMemoFromUploadedFile` method now returns the complete memo object:
```typescript
{
memo: { /* full memo object */ },
memoId: string,
audioPath: string
}
```
### Recording Time Preservation
- **recordingStartedAt** is stored in memo metadata
- Frontend uses this for accurate timestamp display
- Preserved through all real-time updates
### Processing State Management
Memo metadata structure:
```typescript
metadata: {
processing: {
transcription: { status: 'pending' | 'processing' | 'completed' | 'failed' },
headline_and_intro: { status: 'pending' | 'processing' | 'completed' | 'failed' }
},
recordingStartedAt?: string, // ISO timestamp of actual recording start
location?: any
}
```
## Authentication Proxy Architecture (NEW - 2025-01-07)
### Purpose
The auth-proxy module routes all authentication requests through memoro-service to hide mana-core-middleware from the frontend. This provides a single entry point for all backend services.
### Auth Proxy Endpoints
All endpoints mirror the mana-core-middleware auth endpoints:
- `POST /auth/signin` - Email/password sign-in
- `POST /auth/signup` - User registration
- `POST /auth/google-signin` - Google OAuth sign-in
- `POST /auth/apple-signin` - Apple OAuth sign-in
- `POST /auth/refresh` - Token refresh
- `POST /auth/logout` - User logout
- `POST /auth/forgot-password` - Password reset
- `POST /auth/validate` - Token validation
- `GET /auth/credits` - Get user credits (proxies `/users/credits` from mana-core)
- `GET /auth/devices` - Get user devices
### Implementation Details
- **Module**: `auth-proxy` module separate from existing auth module
- **No OAuth Redirects**: Social sign-ins use token exchange, not redirects
- **Error Preservation**: Original error responses passed through
- **App ID Injection**: Automatically adds `appId` query parameter
- **Header Forwarding**: Authorization headers passed through for authenticated endpoints
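The App ID injection and header forwarding can be sketched roughly as below. This is a simplified illustration under assumptions: the helper name, URL shape, and header handling are hypothetical, not the proxy's real code.

```typescript
// Builds a proxied request to mana-core-middleware: appends the appId
// query parameter and forwards the Authorization header when present.
function buildProxyRequest(
  baseUrl: string,
  path: string,
  appId: string,
  authHeader?: string,
): { url: string; headers: Record<string, string> } {
  const url = `${baseUrl}${path}?appId=${encodeURIComponent(appId)}`;
  const headers: Record<string, string> = {};
  if (authHeader) {
    headers['Authorization'] = authHeader;
  }
  return { url, headers };
}
```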
## Append Transcription Feature (NEW - 2025-01-07)
### Purpose
Allows adding additional audio recordings to existing memos and transcribing them, storing results in the `source.additional_recordings` array.
### Endpoint
`POST /memoro/append-transcription`
### Request Body
```typescript
{
memoId: string; // ID of existing memo
filePath: string; // Audio file path in storage
duration: number; // Duration in seconds
recordingIndex?: number; // Optional: index to update specific recording
recordingLanguages?: string[];
enableDiarization?: boolean;
}
```
### Features
- **Smart Routing**: Uses same fast (<115min) vs batch (≥115min) logic as main transcription
- **Credit Management**: Validates and consumes credits like main transcription
- **Access Control**: Validates user owns memo or has access through space
- **Preserves Original**: Keeps original transcript intact, only appends to additional_recordings
- **Speaker Diarization**: Full support for speaker detection in appended audio
- **Error Handling**: Comprehensive fallback strategy matching main transcription flow
### Additional Recordings Structure
```typescript
source: {
// Original transcript and speaker data preserved
transcript: string;
speakers: {...};
utterances: [...];
// Appended recordings array
additional_recordings: [
{
path: string;
transcript: string;
languages: string[];
primary_language: string;
speakers: object;
speakerMap: object;
utterances: array;
status: 'completed' | 'processing' | 'error';
timestamp: string;
updated_at: string;
}
]
}
```
## Audio Cleanup System (Auto-Delete Old Audio Files)
### Overview
Automatically deletes audio files older than 30 days for users who have opted in. This helps users manage storage and comply with data retention preferences.
### How It Works
1. **GCP Cloud Scheduler** triggers `POST /cleanup/trigger-from-cron` daily at 3:00 AM UTC
2. **memoro-service** calls mana-core-middleware to get users with cleanup enabled
3. For each user, queries Supabase storage for files older than 30 days
4. Deletes files in batches (100 files per batch, 200ms delay between batches)
5. Updates memo `source` field to mark audio as deleted
6. Logs results to `audio_cleanup_logs` table
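Step 4 above (100 files per batch, 200ms between batches) can be sketched as a simple paced loop. `deleteBatch` below is a stand-in for the Supabase Storage remove call; this is not the service's actual implementation:

```typescript
// Paced batch deletion: 100 paths per call, 200ms pause between calls.
const BATCH_SIZE = 100;
const BATCH_DELAY_MS = 200;

async function deleteInBatches(
  paths: string[],
  deleteBatch: (batch: string[]) => Promise<void>,
): Promise<number> {
  let deleted = 0;
  for (let i = 0; i < paths.length; i += BATCH_SIZE) {
    const batch = paths.slice(i, i + BATCH_SIZE);
    await deleteBatch(batch);
    deleted += batch.length;
    // Pause between batches to avoid hammering the storage API
    if (i + BATCH_SIZE < paths.length) {
      await new Promise((resolve) => setTimeout(resolve, BATCH_DELAY_MS));
    }
  }
  return deleted;
}
```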
### Architecture
```
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ GCP Cloud │ │ memoro-service │ │ mana-core- │
│ Scheduler │────>│ /cleanup/ │────>│ middleware │
│ (3:00 AM UTC) │ │ trigger-from-cron │ │ /internal/users/ │
└─────────────────────┘ └─────────────────────┘ │ audio-cleanup- │
│ │ enabled │
│ └─────────────────────┘
v
┌─────────────────────┐
│ Supabase Storage │
│ (user-uploads) │
│ - Delete old files │
│ - Update memos │
└─────────────────────┘
```
### Enabling Auto-Delete for a User
Add `autoDeleteAudiosAfter30Days: true` to the user's `app_settings.memoro` object in the `users` table:
```json
{
"memoro": {
"autoDeleteAudiosAfter30Days": true,
"dataUsageAcceptance": true,
"emailNewsletterOptIn": false
}
}
```
### SQL Query to Enable for a User
```sql
UPDATE users
SET app_settings = jsonb_set(
COALESCE(app_settings, '{}'::jsonb),
'{memoro,autoDeleteAudiosAfter30Days}',
'true'
)
WHERE id = 'USER_UUID_HERE';
```
### SQL Query to Enable for Multiple Users (by email)
```sql
WITH user_emails AS (
SELECT unnest(ARRAY[
'user1@example.com',
'user2@example.com',
'user3@example.com'
]::text[]) AS email
)
UPDATE users u
SET app_settings = jsonb_set(
jsonb_set(
COALESCE(u.app_settings, '{}'::jsonb),
'{memoro}',
COALESCE(u.app_settings->'memoro', '{}'::jsonb)
),
'{memoro,autoDeleteAudiosAfter30Days}',
'true'
)
FROM user_emails
WHERE u.email = user_emails.email;
```
### SQL Query to Check Users with Cleanup Enabled
```sql
SELECT id, email, app_settings->'memoro'->'autoDeleteAudiosAfter30Days'
FROM users
WHERE app_settings->'memoro'->>'autoDeleteAudiosAfter30Days' = 'true';
```
### Configuration
| Setting | Value | Location |
|---------|-------|----------|
| Retention period | 30 days | `audio-cleanup.service.ts` |
| Batch size | 100 files | `audio-cleanup.service.ts` |
| Batch delay | 200ms | `audio-cleanup.service.ts` |
| Storage bucket | `user-uploads` | `audio-cleanup.service.ts` |
| Schedule | `0 3 * * *` (daily 3 AM UTC) | GCP Cloud Scheduler |
| Timeout | 1800s (30 min) | GCP Cloud Scheduler |
### GCP Cloud Scheduler Jobs
**Dev:**
```bash
gcloud scheduler jobs describe audio-cleanup-daily --project=mana-core-dev --location=europe-west3
```
**Prod:**
```bash
gcloud scheduler jobs describe audio-cleanup-daily --project=mana-core-prod --location=europe-west3
```
### Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/cleanup/trigger-from-cron` | POST | Called by Cloud Scheduler |
| `/cleanup/trigger-manual` | POST | Manual trigger for testing |
| `/cleanup/process-old-audios` | POST | Process specific user IDs |
All endpoints require `X-Internal-API-Key` header.
### What Happens When Audio is Deleted
1. Audio file removed from Supabase Storage
2. Memo `source` field updated:
```json
{
"audio_path": null,
"audio_deleted": true,
"audio_deleted_at": "2026-01-26T06:47:02.000Z",
"transcript": "...",
"utterances": [...]
}
```
3. Transcript and other data remain intact
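The `source` update is a non-destructive transformation, which could look roughly like this sketch (the helper name is an assumption; the field names match the JSON above):

```typescript
// Marks the audio as deleted while leaving transcript/utterances intact.
function markAudioDeleted(source: Record<string, unknown>): Record<string, unknown> {
  return {
    ...source,
    audio_path: null,
    audio_deleted: true,
    audio_deleted_at: new Date().toISOString(),
  };
}
```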
### Monitoring
Check cleanup logs:
```sql
SELECT * FROM audio_cleanup_logs ORDER BY started_at DESC LIMIT 10;
```
### Files
- `memoro_middleware/src/cleanup/audio-cleanup.service.ts` - Main cleanup logic
- `memoro_middleware/src/cleanup/audio-cleanup.controller.ts` - HTTP endpoints
- `memoro_middleware/src/cleanup/cleanup.module.ts` - NestJS module
- `mana-core-token-middleware/src/modules/users/controllers/user-cleanup.controller.ts` - User query endpoint
- `mana-core-token-middleware/src/modules/users/services/user-settings.service.ts` - User settings queries
## Development Commands
- `npm run start:dev` - Development server with hot reload
- `npm run build` - Production build
- `npm run start:prod` - Production server
## Key Implementation Details
- Audio format conversion handled via audio-microservice
- Credit validation before processing
- Automatic fallback without user intervention
- Detailed logging for debugging each fallback step
- Full memo object returned on creation for immediate frontend sync
- Auth proxy provides single backend entry point for frontend

View file

@ -0,0 +1,208 @@
# Memoro Service Deployment Manual
## Prerequisites
1. **Google Cloud SDK** installed and authenticated:
```bash
gcloud auth login
gcloud config set project memo-2c4c4
```
2. **Docker** installed (for local testing)
3. **Access to** `memo-2c4c4` project with Cloud Build and Cloud Run permissions
## Step-by-Step Deployment Process
### Step 1: Prepare for Deployment
Navigate to the memoro-service directory:
```bash
cd memoro-service
```
Check current version in `cloudbuild-memoro.yaml`:
```bash
cat cloudbuild-memoro.yaml
```
### Step 2: Update Version (Optional)
If you want to increment the version, update the tag in `cloudbuild-memoro.yaml`:
```yaml
# Change v4.0.0 to v4.1.0 (or next version)
args: ['build', '-t', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.4.4', '.']
```
### Step 3: Build and Push Docker Image
Run the Cloud Build process:
```bash
gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml .
```
**Expected output:**
- ✅ Source uploaded to Cloud Storage
- ✅ Docker build steps execute
- ✅ Image pushed to Artifact Registry
- ✅ Build completes with "SUCCESS" status
### Step 4: Deploy to Cloud Run
Use the image version from the build output:
```bash
gcloud run deploy memoro-service \
--project=memo-2c4c4 \
--image europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.6 \
--platform managed \
--region europe-west3 \
--allow-unauthenticated \
--memory 1Gi
```
**Deployment will prompt:**
- Service configuration questions (usually accept defaults)
- Traffic allocation (usually 100% to new revision)
### Step 5: Verify Deployment
1. **Get service URL:**
```bash
SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)')
echo "Service URL: $SERVICE_URL"
```
2. **Test health endpoint:**
```bash
curl $SERVICE_URL/health
```
3. **Test with authentication (optional):**
```bash
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" $SERVICE_URL/memoro/spaces
```
## Environment Variables & Secrets
The deployment preserves existing environment variables and secrets. Current secrets include:
- `MEMORO_SUPABASE_URL`
- `MEMORO_SUPABASE_ANON_KEY`
- `MEMORO_SUPABASE_SERVICE_KEY`
- `MANA_SERVICE_URL`
- `BATCH_TRANSCRIPTION_SERVICE_URL`
- `MEMORO_APP_ID`
To update environment variables:
```bash
gcloud run services update memoro-service \
--region europe-west3 \
--set-env-vars="NEW_VAR=value"
```
## Troubleshooting
### Build Issues
1. **Authentication errors:**
```bash
gcloud auth login
gcloud auth configure-docker europe-west3-docker.pkg.dev
```
2. **Project access issues:**
```bash
gcloud config set project memo-2c4c4
gcloud projects get-iam-policy memo-2c4c4
```
### Deployment Issues
1. **Check service logs:**
```bash
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10
```
2. **Check service status:**
```bash
gcloud run services describe memoro-service --region europe-west3
```
3. **Memory issues (increase if needed):**
```bash
gcloud run services update memoro-service \
--region europe-west3 \
--memory 1Gi
```
### Runtime Issues
1. **Test specific endpoints:**
```bash
# Health check
curl $SERVICE_URL/health
# Batch upload (requires valid JWT and audio file)
curl -X POST \
-H "Authorization: Bearer $JWT_TOKEN" \
-F "file=@test-audio.mp3" \
$SERVICE_URL/memoro/upload-audio
```
2. **Check environment variables:**
```bash
gcloud run services describe memoro-service \
--region europe-west3 \
--format="export" | grep env
```
## Quick Reference Commands
```bash
# Build only
gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml .
# Deploy latest version
gcloud run deploy memoro-service \
--image europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.4.0 \
--region europe-west3
# Get service URL
gcloud run services describe memoro-service --region europe-west3 --format 'value(status.url)'
# View logs
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10
# Update environment variable
gcloud run services update memoro-service --region europe-west3 --set-env-vars="VAR=value"
```
## File Structure Reference
```
memoro-service/
├── cloudbuild-memoro.yaml # Build configuration
├── Dockerfile # Container definition
├── package.json # Dependencies
├── src/ # Source code
│ ├── memoro/
│ │ ├── memoro.controller.ts # Updated with batch jobId storage
│ │ └── memoro.service.ts # Updated with batch logic
│ └── ...
└── DEPLOY_MANUAL.md # This file
```
## Recent Updates
**v4.0.0 includes:**
- ✅ Fixed batch upload jobId storage in memo metadata
- ✅ Updated duration threshold to 1h55m for batch processing
- ✅ Added `updateMemoWithJobId` method for webhook callback support
- ✅ Improved error handling for batch transcription flow
---
**Last Updated:** $(date)
**Current Version:** v4.0.0
**Deployment Region:** europe-west3
**Project:** memo-2c4c4

View file

@ -0,0 +1,13 @@
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3001
CMD ["npm", "run", "start:prod"]

View file

@ -0,0 +1,21 @@
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Debug: Check what files are present before build
RUN ls -la
# Run build with verbose output
RUN npm run build
# Debug: Check if dist was created
RUN ls -la dist/
EXPOSE 3001
CMD ["npm", "run", "start:prod"]

View file

@ -0,0 +1,153 @@
# Memoro Microservice
This is a standalone microservice for the Memoro component of the Mana Core system. It was extracted from the monolithic mana-core-middleware to enable independent scaling and deployment.
## Architecture
This microservice:
- Handles all Memoro-specific functionality
- Communicates with Auth service for authentication/authorization
- Communicates with Spaces service for space management
- Connects directly to the Memoro Supabase instance
- Implements mana cost validation for AI operations
## Mana Cost System
The service implements a backend-driven credit validation system:
- **Transcription**: 120 credits per hour / 2 credits per minute (base cost: 10 credits minimum)
- **Question Processing**: 5 mana per question asked to memos
- **Memo Combination**: 5 mana per memo when combining multiple memos
- **Headline Generation**: 10 credits for title/summary generation
- **Memory Creation**: 10 credits for AI-generated memories
- **Blueprint Processing**: 5 credits for blueprint application
- **Memo Sharing**: 1 credit for sharing operations
- **Space Operations**: 2 credits for space-related operations
- **Early Validation**: Credits are checked before expensive AI operations
- **Real-time Updates**: Frontend mana counter updates immediately after operations
All AI processing endpoints validate sufficient mana credits before processing and consume credits upon successful completion.
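As a back-of-envelope check, the transcription pricing above (2 credits per minute, i.e. 120 per hour, with a 10-credit minimum) works out as follows. Rounding up to whole minutes is an assumption; the real service may round differently:

```typescript
// Transcription cost sketch: 2 credits/minute with a 10-credit floor.
function transcriptionCost(durationMinutes: number): number {
  const MINIMUM_CREDITS = 10;
  const CREDITS_PER_MINUTE = 2;
  return Math.max(MINIMUM_CREDITS, Math.ceil(durationMinutes) * CREDITS_PER_MINUTE);
}
```

So a one-hour recording costs 120 credits, while a 3-minute memo is billed at the 10-credit minimum.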
## API Endpoints
### Core Memoro Endpoints
- `GET /memoro/spaces` - Get all Memoro spaces for the authenticated user
- `POST /memoro/spaces` - Create a new Memoro space
- `GET /memoro/spaces/:id` - Get details for a specific Memoro space
- `DELETE /memoro/spaces/:id` - Delete a Memoro space
- `POST /memoro/link-memo` - Link a memo to a space
- `POST /memoro/unlink-memo` - Unlink a memo from a space
- `GET /memoro/spaces/:id/memos` - Get all memos for a specific space
- `POST /memoro/spaces/:id/leave` - Leave a space
### Space Invitation Management
- `GET /memoro/spaces/:id/invites` - Get space invitations
- `POST /memoro/spaces/:id/invite` - Invite user to space
- `POST /memoro/spaces/invites/:inviteId/resend` - Resend invitation
- `DELETE /memoro/spaces/invites/:inviteId` - Cancel invitation
- `GET /memoro/invites/pending` - Get user's pending invites
- `POST /memoro/spaces/invites/accept` - Accept invitation
- `POST /memoro/spaces/invites/decline` - Decline invitation
### Audio Processing
- `POST /memoro/process-uploaded-audio` - Process uploaded audio with intelligent fallback strategy and credit validation
- `POST /memoro/update-batch-metadata` - Update batch transcription metadata for recovery tracking (improved with memo ID lookup, 2025-06-08)
- `POST /memoro/retry-transcription` - Retry failed transcription
- `POST /memoro/retry-headline` - Retry failed headline generation
#### Enhanced Audio Processing System
The service implements a sophisticated dual-path transcription system with comprehensive fallback strategies:
**Transcription Paths:**
1. **Fast Transcription** (<115 minutes) - Real-time processing via Supabase Edge Function
2. **Batch Transcription** (≥115 minutes) - Azure Speech Service batch processing with webhook callbacks
**Enhanced Fallback Strategy:**
1. **Fast Transcribe** - Attempts fast transcription via edge function
2. **Format Conversion + Retry** - If audio format error detected, converts file via audio-microservice and retries
3. **Batch Processing Fallback** - Falls back to batch processing if conversion fails
4. **Intelligent Error Detection** - Automatically detects Azure Speech Service format compatibility issues
**Batch Transcription Enhancements:**
- **Advanced Diarization**: Supports up to 10 speakers (vs 2 in basic mode)
- **Multi-language Detection**: Automatic language identification from user preferences
- **Complete Data Consistency**: Same speaker data structure as fast transcription
- **Recovery Tracking**: Stores Azure jobId for webhook failure recovery
- **Graceful Degradation**: Falls back to text-only if speaker processing fails
**Supported Processing Routes:**
- `fast_transcribe` - Direct fast transcription success
- `fast_transcribe_converted` - Success after format conversion
- `batch_transcribe` - Regular batch processing for long files
- `batch_transcribe_fallback` - Success via batch processing fallback
**Data Structure Consistency:**
Both fast and batch transcription now save identical data:
- `transcript` - Transcribed text
- `primary_language` - Detected primary language
- `languages` - All detected languages
- `utterances` - Speaker segments with timestamps
- `speakers` - Speaker labels
- `speakerMap` - Speaker-grouped utterances
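Sketched as TypeScript types for reference — only the top-level field names are taken from the list above; the exact shapes of utterances and speaker entries are assumptions:

```typescript
// Hypothetical utterance shape; the real fields may differ.
interface Utterance {
  speaker: string;
  text: string;
  startMs: number;
  endMs: number;
}

// Data persisted identically by both fast and batch transcription.
interface TranscriptionData {
  transcript: string;
  primary_language: string;
  languages: string[];
  utterances: Utterance[];
  speakers: string[];
  speakerMap: Record<string, Utterance[]>;
}
```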
### AI Processing Endpoints (with Credit Validation)
- `POST /memoro/question-memo` - Ask questions about memos (5 mana cost)
- `POST /memoro/combine-memos` - Combine multiple memos with AI processing (5 mana per memo)
### Credit Management
- `POST /memoro/credits/check-transcription` - Check credits before transcription
- `POST /memoro/credits/consume-transcription` - Consume transcription credits
- `POST /memoro/credits/consume-operation` - Consume operation credits
### User Settings Management
- `GET /settings` - Get all user settings
- `GET /settings/memoro` - Get Memoro-specific settings
- `PATCH /settings/memoro` - Update Memoro settings
- `PATCH /settings/memoro/data-usage` - Update data usage acceptance
- `PATCH /settings/memoro/email-newsletter` - Update email newsletter opt-in
- `PATCH /settings/profile` - Update user profile (firstName, lastName, avatarUrl)
## Environment Variables
Required environment variables:
```env
# Server Configuration
PORT=3001
# Service URLs
MANA_SERVICE_URL=http://localhost:3000
AUDIO_MICROSERVICE_URL=https://audio-microservice-111768794939.europe-west3.run.app
# Supabase Configuration
MEMORO_SUPABASE_URL=https://your-memoro-project.supabase.co
MEMORO_SUPABASE_ANON_KEY=your-memoro-anon-key
MEMORO_SUPABASE_SERVICE_KEY=your-memoro-service-key
# App Configuration
MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8
```
## Development
```bash
# Install dependencies
npm install
# Run in development mode
npm run start:dev
# Build for production
npm run build
# Run in production mode
npm run start:prod
```
## Deployment
For Cloud Run deployment instructions, see `cloud-run-deploy.md`.
Testing prod deployment, 30 July 2025, 03:30.

# Memoro Service - Signup Branding Support
**Updated**: 2025-11-05
---
## Overview
The signup endpoint automatically applies **Memoro branding** to all confirmation emails. The branding is hardcoded in the service and includes:
- **App Name**: Memoro
- **Logo**: memoro-logo.png
- **Primary Color**: #F8D62B (Yellow)
- **Secondary Color**: #f5c500 (Golden Yellow)
- **Tagline DE**: "Sprechen statt Tippen"
- **Tagline EN**: "Speak Instead of Type"
- **Website**: https://memoro.ai
- **Redirect URL**: https://app.manacore.ai/welcome?appName=memoro
- **Copyright**: "© 2025 Memoro · Made with 💛 in Germany"
You can optionally override specific branding fields per signup if needed.
## Simple Usage
### Standard Signup (Automatic Memoro Branding)
```bash
POST /auth/signup
{
"email": "user@memoro.ai",
"password": "SecurePass123!",
"deviceInfo": {
"deviceId": "web-123",
"deviceName": "Chrome",
"deviceType": "web"
}
}
```
**Result**: Email automatically uses Memoro branding (yellow colors, Memoro logo, German/English taglines).
---
### Custom Branding (Optional)
```bash
POST /auth/signup
{
"email": "user@example.com",
"password": "SecurePass123!",
"deviceInfo": {
"deviceId": "web-123",
"deviceName": "Chrome",
"deviceType": "web"
},
"metadata": {
"branding": {
"logoUrl": "custom-logo.svg",
"primaryColor": "#FF5733"
}
}
}
```
**Result**: Email uses custom logo and color, other fields use Memoro defaults.
---
### Full Custom Branding
```bash
POST /auth/signup
{
"email": "user@example.com",
"password": "SecurePass123!",
"deviceInfo": {...},
"metadata": {
"branding": {
"appName": "Custom App",
"logoUrl": "custom-logo.svg",
"primaryColor": "#2C3E50",
"secondaryColor": "#34495E",
"websiteUrl": "https://custom-app.com",
"taglineDe": "Ihre Lösung",
"taglineEn": "Your Solution",
"copyright": "© 2025 Custom App"
}
}
}
```
---
## Branding Fields
All fields are **optional**:
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `appName` | string | App display name | `"My App"` |
| `logoUrl` | string | Logo filename (from Supabase Storage) | `"app-logo.png"` |
| `primaryColor` | string | Primary color (hex) | `"#F8D62B"` |
| `secondaryColor` | string | Secondary color (hex) | `"#f5c500"` |
| `websiteUrl` | string | Website URL | `"https://app.com"` |
| `taglineDe` | string | German tagline | `"Sprechen statt Tippen"` |
| `taglineEn` | string | English tagline | `"Speak Instead of Type"` |
| `copyright` | string | Footer text | `"© 2025 My App"` |
---
## TypeScript Types
```typescript
import { BrandingConfig } from './auth-proxy/interfaces/branding.interface';
// Example
const branding: BrandingConfig = {
logoUrl: 'custom-logo.svg',
primaryColor: '#FF5733'
};
await authProxy.signup({
email: 'user@example.com',
password: 'pass123',
deviceInfo: {...},
metadata: { branding }
});
```
---
## How It Works
1. **No metadata** → Mana Core uses default branding for your app
2. **With metadata.branding** → Mana Core merges your branding with defaults
3. **Any missing fields** → Filled in by Mana Core defaults
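That merge behaviour can be sketched as a plain spread over defaults. The default values below are the Memoro branding listed in this document; the function itself is illustrative, not Mana Core's actual implementation:

```typescript
interface BrandingConfig {
  appName?: string;
  logoUrl?: string;
  primaryColor?: string;
  secondaryColor?: string;
  websiteUrl?: string;
  taglineDe?: string;
  taglineEn?: string;
  copyright?: string;
}

// Memoro defaults as documented above.
const MEMORO_DEFAULTS: BrandingConfig = {
  appName: 'Memoro',
  logoUrl: 'memoro-logo.png',
  primaryColor: '#F8D62B',
  secondaryColor: '#f5c500',
  websiteUrl: 'https://memoro.ai',
  taglineDe: 'Sprechen statt Tippen',
  taglineEn: 'Speak Instead of Type',
  copyright: '© 2025 Memoro · Made with 💛 in Germany',
};

function mergeBranding(override: BrandingConfig = {}): BrandingConfig {
  // Any field present in the override wins; missing fields fall back to defaults.
  return { ...MEMORO_DEFAULTS, ...override };
}
```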
---
## That's It!
- ✅ Backward compatible - existing signups work unchanged
- ✅ Simple - just add `metadata.branding` when you want custom branding
- ✅ Flexible - override any or all branding fields
- ✅ No new endpoints - just use `POST /auth/signup`

# Memoro Microservice Cloud Run Deployment Guide
## 1. Set up environment secrets
```bash
# Step 1: Authenticate with Google Cloud if needed
gcloud auth login
# Step 2: Set your project ID
gcloud config set project memo-2c4c4
# Step 3: Create or update GCP Secret Manager secrets for Memoro service
# If you're using existing secrets from the main service, you can reference those
# Otherwise, create new secrets for Memoro-specific configuration
gcloud secrets create MEMORO_SUPABASE_URL --data-file=/path/to/secret/value.txt
gcloud secrets create MEMORO_SUPABASE_ANON_KEY --data-file=/path/to/secret/value.txt
gcloud secrets create MANA_SERVICE_URL --data-file=/path/to/secret/value.txt
gcloud secrets create MEMORO_APP_ID --data-file=/path/to/secret/value.txt
```
## 2. Build and push Docker image
```bash
# Navigate to the Memoro service directory
cd memoro-service
gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml .
```

## 3. Deploy to Cloud Run
```bash
gcloud run deploy memoro-service \
--image europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:v1.0.0 \
--platform managed \
--region europe-west3 \
--allow-unauthenticated \
--memory 512Mi \
--set-secrets=MEMORO_SUPABASE_URL=MEMORO_SUPABASE_URL:latest,MEMORO_SUPABASE_ANON_KEY=MEMORO_SUPABASE_ANON_KEY:latest,MANA_SERVICE_URL=MANA_SERVICE_URL:latest,MEMORO_APP_ID=MEMORO_APP_ID:latest
```
Alternatively, deploy directly from source instead of a prebuilt image:

```bash
gcloud run deploy memoro-service \
  --source . \
  --platform managed \
  --region europe-west3 \
  --allow-unauthenticated \
  --memory 512Mi \
  --set-secrets=MEMORO_SUPABASE_URL=MEMORO_SUPABASE_URL:latest,MEMORO_SUPABASE_ANON_KEY=MEMORO_SUPABASE_ANON_KEY:latest,MANA_SERVICE_URL=MANA_SERVICE_URL:latest,MEMORO_APP_ID=MEMORO_APP_ID:latest
```
## 4. Update Main Middleware Environment Variables
After deploying the Memoro microservice, you need to update the main middleware service's environment to point to the new Memoro service URL.
```bash
# Get the Memoro service URL
MEMORO_SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)')
# Update the main middleware's MEMORO_SERVICE_URL environment variable
gcloud run services update mana-core-middleware-dev \
--region europe-west3 \
--platform managed \
--set-env-vars=MEMORO_SERVICE_URL=$MEMORO_SERVICE_URL
```
## 5. Testing the deployment
```bash
# Get the service URL
SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)')
# Test the API (requires authentication)
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" $SERVICE_URL/memoro/spaces
```
## 6. Monitoring and Logging
After deployment, you can monitor your service through:
- **Cloud Run Dashboard**: For service health, traffic, and resource usage
- **Cloud Logging**: For application logs
- **Cloud Monitoring**: For setting up alerts and dashboards
```bash
# View logs
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10
```
## 7. Troubleshooting
If you encounter issues with your deployment:
1. Check application logs in Cloud Logging
2. Verify that all environment secrets are correctly set
3. Ensure that your service has sufficient memory and CPU
4. Check that the service account has the necessary permissions
5. Verify that the service can communicate with Auth and Spaces services
6. Check for CORS issues if calling from frontend applications
## 8. Continuous Deployment (optional)
You can set up continuous deployment using Cloud Build:
```bash
# Create a Cloud Build trigger
gcloud builds triggers create github \
--repo-name=your-repo-name \
--branch-pattern=main \
--build-config=cloudbuild.yaml
```
Example `cloudbuild.yaml`:
```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args:
- 'run'
- 'deploy'
- 'memoro-service'
- '--image'
- 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA'
- '--region'
- 'europe-west3'
- '--platform'
- 'managed'
images:
- 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA'
```

# cloudbuild-memoro.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8', '.'] # Assumes Dockerfile is in ./memoro-service
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8']
images:
- 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8'

# Service-to-Service Authentication Implementation
## Overview
This document describes the implementation of service role key authentication between the audio microservice and memoro service, replacing the previous user JWT token passthrough approach.
## Problem Statement
The audio microservice was experiencing 401 authentication errors when calling back to the memoro service because:
- User JWT tokens were expiring during long-running transcription processes
- The audio service needed to make callbacks even after the user's session ended
- Service-to-service communication should not depend on user authentication
## Solution Architecture
### 1. Service Authentication Guard
Created `src/guards/service-auth.guard.ts` that:
- Validates requests using Supabase service role keys
- Accepts both `MEMORO_SUPABASE_SERVICE_KEY` and `SUPABASE_SERVICE_KEY` for compatibility
- Marks authenticated requests with `isServiceAuth` flag
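The key comparison at the heart of the guard can be shown framework-free. The real implementation is a NestJS guard; this helper is only a sketch of the check it performs:

```typescript
function isValidServiceKey(
  authHeader: string | undefined,
  validKeys: Array<string | undefined>,
): boolean {
  if (!authHeader?.startsWith('Bearer ')) return false;
  const presented = authHeader.slice('Bearer '.length);
  // Accepts either MEMORO_SUPABASE_SERVICE_KEY or SUPABASE_SERVICE_KEY,
  // skipping any key that is unset in the environment.
  return validKeys.some((key) => key !== undefined && key === presented);
}
```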
### 2. Dedicated Service Endpoints
Created `src/memoro/memoro-service.controller.ts` with service-specific endpoints:
- `/memoro/service/transcription-completed`
- `/memoro/service/append-transcription-completed`
- `/memoro/service/update-batch-metadata`
These endpoints:
- Use `ServiceAuthGuard` instead of regular `AuthGuard`
- Call existing service methods with `token: null`
- Pass userId for ownership validation
### 3. Ownership Validation
Updated service methods to validate memo ownership when using service auth:
- `handleTranscriptionCompleted`: Validates memo.user_id matches provided userId
- `handleAppendTranscriptionCompleted`: Validates memo.user_id matches provided userId
- `updateBatchMetadataByMemoId`: Validates memo.user_id matches provided userId (when userId provided)
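An illustrative version of that ownership check — the memo shape and error type are assumptions; the real methods run this against the database row:

```typescript
interface MemoRow {
  id: string;
  user_id: string;
}

function assertMemoOwnership(memo: MemoRow, userId: string | undefined): void {
  // With service auth there is no user JWT, so the caller supplies the userId
  // and the service verifies it matches the memo owner before updating.
  if (userId !== undefined && memo.user_id !== userId) {
    throw new Error('Memo ownership validation failed');
  }
}
```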
### 4. Supabase Client Configuration
Fixed JWT parsing errors by conditionally creating Supabase clients:
```typescript
const authClient = isServiceAuth
? createClient(this.memoroUrl, this.memoroServiceKey)
: createClient(this.memoroUrl, this.memoroServiceKey, {
global: { headers: { Authorization: `Bearer ${token}` } }
});
```
## Audio Microservice Changes
### 1. Updated Callback URLs
All callbacks now use `/service/` endpoints:
- `notifyTranscriptionComplete`: Uses `/memoro/service/transcription-completed`
- `notifyAppendTranscriptionComplete`: Uses `/memoro/service/append-transcription-completed`
- `storeBatchJobMetadata`: Uses `/memoro/service/update-batch-metadata`
### 2. Service Key Authentication
Updated to use service role key instead of user tokens:
```typescript
const serviceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY') ||
this.configService.get('SUPABASE_SERVICE_KEY');
```
### 3. UserId Parameter
Added a userId parameter to batch metadata updates for ownership validation.
## Environment Variables
### Memoro Service
```bash
# Primary service key
MEMORO_SUPABASE_SERVICE_KEY=<service-role-key>
# Also accepts for compatibility
SUPABASE_SERVICE_KEY=<service-role-key>
```
### Audio Microservice
```bash
# Primary service key (for memoro callbacks)
MEMORO_SUPABASE_SERVICE_KEY=<service-role-key>
# Original service key (for Supabase operations)
SUPABASE_SERVICE_KEY=<service-role-key>
```
## Deployment Steps
### 1. Deploy Memoro Service
```bash
# Add environment variable
gcloud run services update memoro-service \
--project=memo-2c4c4 \
--region=europe-west3 \
--update-env-vars="SUPABASE_SERVICE_KEY=<service-role-key>"
# Build and deploy new code
gcloud builds submit --config=cloudbuild-memoro.yaml
gcloud run deploy memoro-service \
--project=memo-2c4c4 \
--image=europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.6 \
--platform=managed \
--region=europe-west3 \
--allow-unauthenticated \
--memory=1Gi
```
### 2. Deploy Audio Microservice
```bash
# Add environment variable
gcloud run services update audio-microservice \
--project=memo-2c4c4 \
--region=europe-west3 \
--update-env-vars="MEMORO_SUPABASE_SERVICE_KEY=<service-role-key>"
# Build and deploy new code
# (Follow standard audio microservice deployment process)
```
## Security Considerations
1. **Service Role Key Protection**: Service role keys bypass RLS, so they must be:
- Stored as environment variables only
- Never exposed to clients
- Rotated periodically
2. **Ownership Validation**: Even with service auth, the system validates:
- User owns the memo being updated
- Prevents unauthorized access across users
3. **Network Security**: Both services run on Google Cloud Run with:
- HTTPS encryption in transit
- Network isolation
- IAM-based access control
## Benefits
1. **Reliability**: No more 401 errors from expired user tokens
2. **Consistency**: Service-to-service auth independent of user sessions
3. **Performance**: Direct service authentication without token validation overhead
4. **Maintainability**: Clear separation between user and service endpoints
## Future Improvements
1. **mTLS**: Implement mutual TLS between services
2. **Service Accounts**: Use Google Cloud service accounts instead of API keys
3. **Rate Limiting**: Add rate limiting to service endpoints
4. **Audit Logging**: Enhanced logging for service-to-service calls

# Memoro Service - New Signup Implementation Plan
## Overview
This plan outlines the steps to integrate the Memoro backend service with the new Mana Core authentication system that includes dynamic email branding and enhanced device tracking.
## Current State Analysis
### Existing Implementation
- **Location**: `src/auth-proxy/auth-proxy.service.ts`
- **Current App ID**: `973da0c1-b479-4dac-a1b0-ed09c72caca8` (in .env)
- **Mana Core App ID**: `edde080c-3882-46bd-9867-72bdf3cbd99c` (in mana-core config)
- **Current Flow**: Simple proxy to Mana Core with redirect URL override
### Current Signup Code (Line 111-118)
```typescript
async signup(payload: any) {
// Add custom redirect URL for Memoro
const enhancedPayload = {
...payload,
redirectUrl: 'https://memoro.ai/de/welcome/'
};
return this.proxyPost('/auth/signup', enhancedPayload);
}
```
### Issues to Address
1. ❌ No TypeScript types/interfaces (uses `any`)
2. ❌ App ID mismatch between .env and mana-core config
3. ❌ Missing logo metadata for custom branding
4. ❌ No validation of required fields (deviceInfo)
5. ❌ No DTO classes for request/response
---
## Implementation Plan
### Phase 1: Create TypeScript Interfaces & DTOs
#### 1.1 Device Info Interface
**File**: `src/auth-proxy/dto/device-info.dto.ts`
```typescript
import { IsString, IsEnum, IsOptional } from 'class-validator';
export enum DeviceType {
WEB = 'web',
IOS = 'ios',
ANDROID = 'android',
DESKTOP = 'desktop',
}
export class DeviceInfoDto {
@IsString()
deviceId: string;
@IsString()
deviceName: string;
@IsEnum(DeviceType)
deviceType: DeviceType;
@IsOptional()
@IsString()
userAgent?: string;
}
```
#### 1.2 Signup Request DTO
**File**: `src/auth-proxy/dto/signup-request.dto.ts`
```typescript
import { IsEmail, IsString, MinLength, ValidateNested, IsOptional } from 'class-validator';
import { Type } from 'class-transformer';
import { DeviceInfoDto } from './device-info.dto';
export class SignupRequestDto {
@IsEmail()
email: string;
@IsString()
@MinLength(8)
password: string;
@ValidateNested()
@Type(() => DeviceInfoDto)
deviceInfo: DeviceInfoDto;
@IsOptional()
metadata?: {
[key: string]: any;
};
@IsOptional()
@IsString()
redirectUrl?: string;
}
```
#### 1.3 Signup Response Interface
**File**: `src/auth-proxy/interfaces/signup-response.interface.ts`
```typescript
export interface SignupResponse {
message: string;
confirmationRequired: boolean;
manaToken?: string;
appToken?: string;
refreshToken?: string;
deviceId?: string;
user: {
id: string;
email: string;
created_at?: string;
};
}
```
#### 1.4 Auth Metadata Interface
**File**: `src/auth-proxy/interfaces/auth-metadata.interface.ts`
```typescript
export interface AuthMetadata {
logoUrl?: string;
userName?: string;
[key: string]: any;
}
```
---
### Phase 2: Update Environment Configuration
#### 2.1 Verify App ID
**Action**: Check which App ID is correct
- Option A: Update `.env` to use `edde080c-3882-46bd-9867-72bdf3cbd99c` (from mana-core)
- Option B: Update mana-core config to use `973da0c1-b479-4dac-a1b0-ed09c72caca8`
**Recommendation**: Use the App ID that's configured in mana-core (`edde080c-3882-46bd-9867-72bdf3cbd99c`)
#### 2.2 Add Logo Configuration
**File**: `.env`
```bash
# Add to .env
MEMORO_LOGO_FILENAME=memoro-logo.svg
```
**File**: `env.example`
```bash
# Add to env.example
MEMORO_LOGO_FILENAME=memoro-logo.svg
```
---
### Phase 3: Update Auth Proxy Service
#### 3.1 Enhanced Signup Method
**File**: `src/auth-proxy/auth-proxy.service.ts`
```typescript
import { SignupRequestDto } from './dto/signup-request.dto';
import { SignupResponse } from './interfaces/signup-response.interface';
import { AuthMetadata } from './interfaces/auth-metadata.interface';
export class AuthProxyService {
private memoroLogoFilename: string;
constructor(
private httpService: HttpService,
private configService: ConfigService,
) {
this.manaServiceUrl = this.configService.get<string>('MANA_SERVICE_URL', 'http://localhost:3000');
this.memoroAppId = this.configService.get<string>('MEMORO_APP_ID');
this.memoroLogoFilename = this.configService.get<string>('MEMORO_LOGO_FILENAME', 'memoro-logo.svg');
}
async signup(payload: SignupRequestDto): Promise<SignupResponse> {
// Validate device info is present
if (!payload.deviceInfo) {
throw new HttpException(
'Device information is required for signup',
HttpStatus.BAD_REQUEST
);
}
// Prepare metadata with logo for custom email branding
const metadata: AuthMetadata = {
...payload.metadata,
logoUrl: this.memoroLogoFilename, // Just the filename
};
// Enhanced payload with Memoro-specific branding
const enhancedPayload = {
email: payload.email,
password: payload.password,
deviceInfo: payload.deviceInfo,
metadata,
redirectUrl: payload.redirectUrl || 'https://memoro.ai/de/welcome/',
};
console.log('[AuthProxy] Signup with enhanced payload:', {
email: enhancedPayload.email,
hasDeviceInfo: !!enhancedPayload.deviceInfo,
logoUrl: metadata.logoUrl,
redirectUrl: enhancedPayload.redirectUrl,
});
return this.proxyPost('/auth/signup', enhancedPayload);
}
}
```
---
### Phase 4: Update Auth Proxy Controller
#### 4.1 Add Validation Pipe
**File**: `src/auth-proxy/auth-proxy.controller.ts`
```typescript
import {
Controller,
Post,
Get,
Body,
Headers,
HttpCode,
HttpException,
HttpStatus,
UsePipes,
ValidationPipe
} from '@nestjs/common';
import { SignupRequestDto } from './dto/signup-request.dto';
import { SignupResponse } from './interfaces/signup-response.interface';
@Controller('auth')
export class AuthProxyController {
constructor(private readonly authProxyService: AuthProxyService) {}
@Post('signup')
@UsePipes(new ValidationPipe({
whitelist: true,
forbidNonWhitelisted: true,
transform: true
}))
async signup(@Body() payload: SignupRequestDto): Promise<SignupResponse> {
return this.authProxyService.signup(payload);
}
// Other methods remain similar but can be typed
@Post('signin')
async signin(@Body() payload: any) {
// Validate device info
if (!payload.deviceInfo) {
throw new HttpException(
'Device information is required for signin',
HttpStatus.BAD_REQUEST
);
}
return this.authProxyService.signin(payload);
}
}
```
---
### Phase 5: Install Required Dependencies
```bash
cd memoro-service
npm install class-validator class-transformer
```
---
### Phase 6: Testing
#### 6.1 Unit Tests
**File**: `src/auth-proxy/auth-proxy.service.spec.ts`
Add tests for:
- ✅ Signup with valid deviceInfo
- ✅ Signup includes logo metadata
- ✅ Signup includes redirect URL
- ✅ Error when deviceInfo is missing
#### 6.2 Integration Tests
**Test 1: Signup with All Fields**
```bash
curl -X POST http://localhost:3001/auth/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test@memoro.ai",
"password": "Test123456!",
"deviceInfo": {
"deviceId": "web-test-device-1",
"deviceName": "Chrome on MacBook",
"deviceType": "web",
"userAgent": "Mozilla/5.0..."
}
}'
```
**Expected Response:**
```json
{
"message": "Sign up successful. Please check your email to confirm your account.",
"confirmationRequired": true,
"user": {
"id": "...",
"email": "test@memoro.ai"
}
}
```
**Test 2: Check Email Branding**
- Email should show Memoro logo
- Email should use yellow color scheme (#F8D62B)
- Email should show German/English taglines
- Email should include Memoro features
**Test 3: Missing DeviceInfo (Should Fail)**
```bash
curl -X POST http://localhost:3001/auth/signup \
-H 'Content-Type: application/json' \
-d '{
"email": "test@memoro.ai",
"password": "Test123456!"
}'
```
**Expected:** 400 Bad Request with validation error
---
### Phase 7: Documentation
#### 7.1 Update README
**File**: `README.md`
Add section:
```markdown
## Authentication
Memoro uses the Mana Core authentication system with custom branding.
### Signup Flow
When users sign up via Memoro:
1. Frontend calls `/auth/signup` with email, password, and device info
2. Memoro backend adds Memoro logo metadata
3. Mana Core creates account and sends branded email
4. User confirms email and can log in
See [docs/AUTH_INTEGRATION.md](./docs/AUTH_INTEGRATION.md) for details.
```
#### 7.2 Create Integration Doc
**File**: `docs/AUTH_INTEGRATION.md`
Document:
- How Memoro integrates with Mana Core
- Required environment variables
- Device info requirements
- Custom branding flow
- Error handling
---
## Migration Checklist
### Pre-Deployment
- [ ] Verify App ID is correct in both services
- [ ] Upload `memoro-logo.svg` to Mana Core Supabase bucket
- [ ] Update `.env` with correct `MEMORO_APP_ID`
- [ ] Add `MEMORO_LOGO_FILENAME=memoro-logo.svg` to `.env`
- [ ] Install dependencies: `class-validator`, `class-transformer`
- [ ] Run tests locally
### Code Changes
- [ ] Create DTOs in `src/auth-proxy/dto/`
- [ ] Create interfaces in `src/auth-proxy/interfaces/`
- [ ] Update `auth-proxy.service.ts` with new signup method
- [ ] Update `auth-proxy.controller.ts` with validation
- [ ] Add unit tests
- [ ] Update documentation
### Deployment
- [ ] Deploy to staging environment
- [ ] Test signup flow end-to-end
- [ ] Verify email branding looks correct
- [ ] Check device tracking works
- [ ] Deploy to production
- [ ] Monitor for errors
### Post-Deployment
- [ ] Verify production signup emails show Memoro branding
- [ ] Test all auth flows (signin, google, apple)
- [ ] Update frontend to include deviceInfo if not already
- [ ] Document any issues/learnings
---
## Timeline Estimate
- **Phase 1-2** (Types & Config): 1 hour
- **Phase 3-4** (Service & Controller): 2 hours
- **Phase 5** (Dependencies): 15 minutes
- **Phase 6** (Testing): 2 hours
- **Phase 7** (Documentation): 1 hour
**Total**: ~6-7 hours
---
## Risk Assessment
### Low Risk
✅ Adding types/interfaces (backward compatible)
✅ Adding logo metadata (optional field)
✅ Documentation updates
### Medium Risk
⚠️ Changing App ID (requires coordination)
⚠️ Adding validation (could break existing clients)
### Mitigation
- Test thoroughly in staging
- Deploy during low-traffic period
- Have rollback plan ready
- Monitor error rates after deployment
---
## Questions to Resolve
1. **App ID**: Which App ID should be used?
- Current in memoro-service: `973da0c1-b479-4dac-a1b0-ed09c72caca8`
- Current in mana-core: `edde080c-3882-46bd-9867-72bdf3cbd99c`
2. **Breaking Changes**: Should we enforce validation immediately or phase it in?
- Option A: Enforce now (could break old clients)
- Option B: Log warnings first, enforce later
3. **Logo Location**: Is `memoro-logo.svg` already uploaded to satellites-logos bucket?
---
## Success Criteria
✅ Signup creates account successfully
✅ Email shows Memoro branding (yellow, logo, features)
✅ DeviceInfo is properly tracked
✅ All auth tests pass
✅ No breaking changes to existing clients
✅ Documentation is complete
✅ Production deployment successful

# Append Transcription Usage Example
## Overview
The append-transcription endpoint allows you to add additional audio recordings to an existing memo and have them transcribed. This is useful when users want to add follow-up thoughts or additional content to a memo without creating a new one.
## Frontend Integration Example
```typescript
// Example: Adding an additional recording to an existing memo
async function appendAudioToMemo(
memoId: string,
audioFile: File,
recordingDuration: number
) {
try {
// 1. Upload audio file to Supabase storage (similar to main recording)
const filePath = `${userId}/recordings/${Date.now()}_append.webm`;
const { error: uploadError } = await supabase.storage
.from('user-uploads')
.upload(filePath, audioFile);
if (uploadError) {
throw uploadError;
}
// 2. Call the append-transcription endpoint
const response = await fetch(`${MEMORO_SERVICE_URL}/memoro/append-transcription`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${token}`
},
body: JSON.stringify({
memoId: memoId,
filePath: filePath,
duration: recordingDuration,
recordingLanguages: ['de-DE', 'en-US'], // Optional: user's selected languages
enableDiarization: true // Optional: enable speaker detection
})
});
if (!response.ok) {
const error = await response.json();
throw new Error(error.message || 'Failed to append transcription');
}
const result = await response.json();
console.log('Append transcription started:', result);
// The memo will be updated asynchronously
// You can listen to real-time updates or poll for status
return result;
} catch (error) {
console.error('Error appending audio to memo:', error);
throw error;
}
}
```
## Response Format
### Success Response
```json
{
"success": true,
"memoId": "uuid-here",
"filePath": "userId/recordings/timestamp_append.webm",
"status": "processing",
"estimatedDuration": 5,
"message": "Append transcription in progress.",
"estimatedCredits": 10
}
```
### Error Responses
#### Insufficient Credits
```json
{
"statusCode": 403,
"message": "Insufficient credits for transcription. Required: 10, Available: 5 (user credits)"
}
```
#### Memo Not Found
```json
{
"statusCode": 404,
"message": "Memo not found or access denied"
}
```
## Accessing Appended Recordings
Once transcription is complete, the additional recordings will be available in the memo's source:
```typescript
// Fetch updated memo
const { data: memo } = await supabase
.from('memos')
.select('*')
.eq('id', memoId)
.single();
// Access additional recordings
const additionalRecordings = memo.source.additional_recordings || [];
additionalRecordings.forEach((recording, index) => {
console.log(`Recording ${index + 1}:`);
console.log(`- Transcript: ${recording.transcript}`);
console.log(`- Language: ${recording.primary_language}`);
console.log(`- Speakers: ${Object.keys(recording.speakers || {}).length}`);
console.log(`- Status: ${recording.status}`);
});
```
## Real-time Updates
You can subscribe to memo updates to know when the transcription is complete:
```typescript
const subscription = supabase
.channel(`memo-${memoId}`)
.on('postgres_changes',
{
event: 'UPDATE',
schema: 'public',
table: 'memos',
filter: `id=eq.${memoId}`
},
(payload) => {
const updatedMemo = payload.new;
// Check if the last additional recording is now completed
const recordings = updatedMemo.source?.additional_recordings || [];
const lastRecording = recordings[recordings.length - 1];
if (lastRecording?.status === 'completed') {
console.log('Transcription completed!', lastRecording);
// Update UI with new transcription
}
}
)
.subscribe();
```
## Notes
1. **Credit Requirements**: Append transcription consumes credits the same way as main transcription (2 mana per minute, minimum 10 mana)
2. **Access Control**: Users can only append to memos they own or have access to through spaces
3. **Smart Routing**: Short recordings (<115 min) use fast transcription, longer ones use batch processing
4. **Recording Index**: You can optionally specify a `recordingIndex` to update a specific recording instead of appending a new one
5. **Error Handling**: The service includes comprehensive error handling and fallback strategies matching the main transcription flow
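The credit rule in note 1 can be sketched as a small helper (2 mana per minute with a 10-mana minimum; rounding partial minutes up is an assumption):

```typescript
function estimateTranscriptionCredits(durationSeconds: number): number {
  // Round partial minutes up, charge 2 mana per minute, minimum 10 mana.
  const minutes = Math.ceil(durationSeconds / 60);
  return Math.max(10, minutes * 2);
}
```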

# Auth Proxy Grace Period Implementation Notes
## Overview
The auth-proxy module in memoro-service acts as a pass-through to mana-core-middleware. With the new grace period implementation, the proxy doesn't need significant changes but should be aware of the new behavior.
## Current Implementation Status
The auth proxy already:
- ✅ Validates device info is present for refresh requests
- ✅ Forwards all requests to mana-core-middleware
- ✅ Preserves error responses from the backend
- ✅ Logs requests for debugging
## Grace Period Behavior
When a refresh request is made:
1. **Normal Case**: New tokens are returned
2. **Grace Period Case**: If the same old token is used within 5 minutes:
- Backend returns the previously generated new token
- Response includes `gracePeriodUsed: true` flag
- This is NOT an error - it's a successful response
## No Changes Required
The auth proxy doesn't need modifications because:
- It already forwards all responses transparently
- Error handling is done by the backend
- Retry logic should be implemented in the frontend
## Logging Recommendations
Consider adding logs for grace period usage:
```typescript
async refresh(payload: any) {
const response = await this.proxyPost('/auth/refresh', payload);
// Optional: Log grace period usage for monitoring
if (response.gracePeriodUsed) {
console.log('[AuthProxy] Refresh used grace period for device:', payload.deviceInfo?.deviceId);
}
return response;
}
```
## Monitoring
Track these metrics to understand grace period effectiveness:
- How often grace period is used
- Which devices/users trigger grace period most
- Correlation with network conditions
## Frontend Integration
The frontend calling memoro-service should:
1. Always save the returned refresh token
2. Implement retry logic with exponential backoff
3. Handle both success and error responses appropriately
4. Not treat grace period usage as an error
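The retry-and-grace-period handling above can be sketched as follows. This is a minimal illustration, not the actual frontend code: the `RefreshResponse` shape and the injected `doRefresh` callback are assumptions, with only the `gracePeriodUsed` flag taken from the behavior described above.

```typescript
// Hypothetical response shape; only `gracePeriodUsed` comes from the doc above.
interface RefreshResponse {
  accessToken: string;
  refreshToken: string;
  gracePeriodUsed?: boolean;
}

// Retry a token refresh with exponential backoff. The HTTP call is injected
// via `doRefresh` so the retry policy itself stays testable.
async function refreshWithBackoff(
  doRefresh: () => Promise<RefreshResponse>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<RefreshResponse> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await doRefresh();
      // A grace-period response is a SUCCESS, not an error:
      // persist the returned refresh token exactly as in the normal case.
      if (res.gracePeriodUsed) {
        console.log('Refresh served from grace period');
      }
      return res;
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Because `doRefresh` is injected, the same policy works against memoro-service in production and a mock in tests.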


@ -0,0 +1,196 @@
# Broadcast Trigger Payload Size Fix - July 2025
## Timeline of Events
### Background
- **Before July 5, 2025**: Transcription updates worked perfectly
- **July 5, 2025**: New broadcast triggers added to enhance real-time updates
- **July 8, 2025**: "Payload string too long" errors started occurring during transcription completion
## The Error
### Symptoms
```
Error: Failed to update memo: payload string too long
PostgreSQL Error Code: 22023
```
### Affected Operations
- Transcription completion updates failing for memos with:
- Text length: 46,465 characters
- Utterances: 377 items
- Request payload sizes: 55KB - 121KB
### Error Logs
From memoro-service:
```
[handleTranscriptionCompleted] Error updating memo: {
code: '22023',
details: null,
hint: null,
message: 'payload string too long'
}
```
From Supabase API Gateway:
```json
{
"event_message": "PATCH | 400 | ... | https://npgifbrwhftlbrbaglmi.supabase.co/rest/v1/memos",
"content_length": "121057",
"status_code": 400
}
```
## Initial (Wrong) Assumptions
### Assumption 1: Supabase Realtime NOTIFY Limit
**What we thought**: The existing replica identity fix from the `realtime-payload-limit-fix.md` wasn't working properly.
**Why this seemed logical**:
- Same error code (22023)
- Same error message ("payload string too long")
- PostgreSQL NOTIFY has an 8KB limit
- We had fixed this exact issue before
**Why we were wrong**: The replica identity was correctly set and working. The issue was elsewhere.
### Assumption 2: Database Column Limits
**What we thought**: Maybe the jsonb/text columns had size constraints.
**Why this seemed possible**:
- Large payloads were being stored
- Error occurred during UPDATE operations
**Why we were wrong**: PostgreSQL jsonb and text columns can store much larger data (up to 1GB).
### Assumption 3: HTTP Request Size Limits
**What we thought**: The Supabase REST API might have payload limits.
**Why we considered this**:
- Request sizes were 55KB-121KB
- Error happened during HTTP PATCH requests
**Why we were wrong**: Supabase supports payloads up to 1GB via HTTP.
## The Real Problem
### Discovery Process
1. Checked replica identity: ✓ Correctly set to INDEX (only sends ID)
2. Investigated table triggers: Found new broadcast triggers added July 5
3. Examined trigger function: Found the culprit!
### Root Cause
The `broadcast_memo_changes()` trigger function added on July 5, 2025 was using:
```sql
PERFORM pg_notify(
'realtime:broadcast',
json_build_object(
'payload', json_build_object(
'new', row_to_json(NEW), -- ENTIRE row data!
'old', row_to_json(OLD), -- ENTIRE row data!
...
)
)::text
);
```
This trigger was attempting to send the ENTIRE memo data (including large transcripts and utterances) through PostgreSQL's NOTIFY mechanism, which has a hard 8KB limit.
### Why It Wasn't Caught Earlier
- The trigger was added recently (July 5)
- Initial testing likely used smaller memos
- The error only occurs with transcriptions > ~6KB total size
## The Fix
### Solution Applied
Modified the `broadcast_memo_changes()` function to send minimal data:
```sql
CREATE OR REPLACE FUNCTION public.broadcast_memo_changes()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
-- Broadcast only essential information to avoid payload size limits
PERFORM pg_notify(
'realtime:broadcast',
json_build_object(
'type', 'broadcast',
'event', 'postgres_changes',
'payload', json_build_object(
'event', TG_OP,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'id', CASE
WHEN TG_OP = 'DELETE' THEN OLD.id
ELSE NEW.id
END,
'eventTs', to_char(current_timestamp, 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"')
)
)::text
);
RETURN NEW;
END;
$$;
```
### What Changed
- **Before**: Sent entire row data (`row_to_json(NEW/OLD)`)
- **After**: Sends only the memo ID
- **Result**: Payload size reduced from 55KB+ to < 200 bytes
### Impact on Frontend
- Frontend still receives real-time notifications
- Must fetch full memo data using the provided ID
- No breaking changes to the notification structure
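The fetch-on-notify pattern the frontend needs can be sketched as below. The payload fields mirror what the fixed trigger above broadcasts; the injected `fetchMemoById` helper is a placeholder for whatever data-access call the app actually uses.

```typescript
// Payload shape follows the fixed broadcast_memo_changes() trigger above,
// which sends only minimal identifiers, never row data.
interface BroadcastPayload {
  event: 'INSERT' | 'UPDATE' | 'DELETE';
  schema: string;
  table: string;
  id: string;
  eventTs: string;
}

// On INSERT/UPDATE the notification carries only the id, so the client must
// fetch the full memo separately; on DELETE there is nothing to fetch.
async function handleMemoBroadcast(
  payload: BroadcastPayload,
  fetchMemoById: (id: string) => Promise<unknown>,
): Promise<unknown> {
  if (payload.event === 'DELETE') {
    // Remove the memo from local state instead of fetching.
    return null;
  }
  return fetchMemoById(payload.id);
}
```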
## Key Learnings
### 1. Multiple Systems Can Hit NOTIFY Limits
- **Supabase Realtime**: Uses replica identity (already fixed)
- **Custom Triggers**: Can also use pg_notify (new issue)
- Both must respect the 8KB NOTIFY limit
### 2. Error Messages Can Be Misleading
- Same error (22023) can have different causes
- Important to check ALL uses of NOTIFY, not just Supabase Realtime
### 3. Trigger Side Effects
- New triggers can break existing functionality
- Always consider payload sizes when using pg_notify
- Test with realistic data sizes, not just small test cases
### 4. Debugging Approach
1. Check recent changes (migrations, triggers)
2. Examine all NOTIFY usage, not just obvious ones
3. Use Supabase API logs to see actual request sizes
4. Don't assume the first similar fix applies
## Prevention Guidelines
### For Future Triggers
1. **Never send full row data through NOTIFY**
2. **Always send minimal identifiers only**
3. **Test with large, realistic payloads**
4. **Document payload size considerations**
### For Broadcast Mechanisms
1. **Use ID-only patterns**: Send identifiers, let clients fetch data
2. **Consider payload sizes**: NOTIFY limit is 8000 bytes total
3. **Monitor for 22023 errors**: Set up alerts for this specific error
4. **Review all NOTIFY usage**: Both Supabase and custom triggers
## Resolution Timeline
- **Issue Reported**: July 8, 2025, 14:59 CEST
- **Investigation Started**: July 8, 2025, 15:00 CEST
- **Root Cause Found**: Broadcast trigger sending full row data
- **Fix Applied**: Modified trigger to send ID only
- **Resolution Confirmed**: Transcriptions now complete successfully
## Related Documentation
- [Realtime Payload Limit Fix](./realtime-payload-limit-fix.md) - Original NOTIFY limit issue
- [PostgreSQL NOTIFY Documentation](https://www.postgresql.org/docs/current/sql-notify.html)
- Migration: `20250705022315_add_memo_update_broadcast_trigger`


@ -0,0 +1,178 @@
# Memoro Space Sharing Fix
This document describes the implementation of space-based memo sharing in the Memoro application, including the solution to the "infinite recursion" issue that was occurring with Row-Level Security (RLS) policies.
## Problem Description
Users were unable to directly access memos created by other users in shared spaces, receiving the following error:
```
Error fetching memo: infinite recursion detected in policy for relation "memos"
```
This happened because:
1. The RLS policies required complex joins between multiple tables
2. PostgreSQL couldn't efficiently resolve these joins during policy evaluation
3. The recursive nature of the policies caused infinite recursion
## Solution: Denormalized Access Control
We implemented a database design pattern called "denormalization for access control" to solve this issue.
### Step 1: Add a Direct Access Column to Memos Table
```sql
-- Add a direct helper column to the memos table to simplify RLS
ALTER TABLE memos ADD COLUMN IF NOT EXISTS shared_with_users UUID[] DEFAULT '{}'::uuid[];
```
This array column directly stores the UUIDs of all users who should have access to each memo, eliminating the need for complex joins in RLS policies.
### Step 2: Create Triggers to Maintain the Access Array
First, create a function to update the `shared_with_users` array when a memo is linked to a space:
```sql
-- Create an update function that will maintain this column
CREATE OR REPLACE FUNCTION update_memo_shared_with_users()
RETURNS TRIGGER AS $$
BEGIN
-- Update the shared_with_users array for the affected memo
UPDATE memos
SET shared_with_users = (
SELECT array_agg(DISTINCT sm.user_id)
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = NEW.memo_id
)
WHERE id = NEW.memo_id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Create triggers for memo_spaces table changes
DROP TRIGGER IF EXISTS memo_spaces_insert_update_trigger ON memo_spaces;
CREATE TRIGGER memo_spaces_insert_update_trigger
AFTER INSERT OR UPDATE ON memo_spaces
FOR EACH ROW
EXECUTE FUNCTION update_memo_shared_with_users();
DROP TRIGGER IF EXISTS memo_spaces_delete_trigger ON memo_spaces;
CREATE TRIGGER memo_spaces_delete_trigger
AFTER DELETE ON memo_spaces
FOR EACH ROW
EXECUTE FUNCTION update_memo_shared_with_users();
```
Then, create a function and trigger to update the access arrays when space membership changes:
```sql
-- Create trigger for space_members changes
CREATE OR REPLACE FUNCTION update_all_memos_for_space()
RETURNS TRIGGER AS $$
BEGIN
-- For each memo in the space, update its shared_with_users array
UPDATE memos m
SET shared_with_users = (
SELECT array_agg(DISTINCT sm.user_id)
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = m.id
    AND (ms.space_id = NEW.space_id OR ms.space_id = OLD.space_id)
)
WHERE m.id IN (
SELECT memo_id FROM memo_spaces WHERE space_id = NEW.space_id OR space_id = OLD.space_id
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS space_members_trigger ON space_members;
CREATE TRIGGER space_members_trigger
AFTER INSERT OR UPDATE OR DELETE ON space_members
FOR EACH ROW
EXECUTE FUNCTION update_all_memos_for_space();
```
### Step 3: Initialize the Column for Existing Data
```sql
-- Populate the shared_with_users column for all existing memos
UPDATE memos m
SET shared_with_users = (
SELECT array_agg(DISTINCT sm.user_id)
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = m.id
);
```
### Step 4: Create Simplified RLS Policies
```sql
-- Drop existing policies on memos
DO $$
BEGIN
EXECUTE (
SELECT string_agg('DROP POLICY IF EXISTS "' || policyname || '" ON memos;', ' ')
FROM pg_policies
WHERE tablename = 'memos'
);
END $$;
-- Create simplified policies that use the denormalized column
CREATE POLICY "Users can access own memos"
ON memos FOR ALL
USING (user_id = auth.uid()::text);
CREATE POLICY "Users can view shared memos"
ON memos FOR SELECT
USING (auth.uid()::uuid = ANY(shared_with_users));
```
## How This Solution Works
1. When a memo is linked to a space, the trigger automatically adds all space members to the memo's `shared_with_users` array
2. When space membership changes (users added/removed), the trigger updates all affected memos
3. The RLS policies are now simple and non-recursive:
- Users can always access their own memos
- Users can view memos where their UUID is in the `shared_with_users` array
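For illustration, the two policies reduce to this access predicate (client-side mirror only; the database RLS remains the source of truth, and the row field names are assumptions based on the schema above):

```typescript
// Mirrors the simplified RLS policies:
//   "Users can access own memos"   -> user_id = auth.uid()
//   "Users can view shared memos"  -> auth.uid() = ANY(shared_with_users)
interface MemoRow {
  id: string;
  user_id: string;
  shared_with_users: string[];
}

function canReadMemo(memo: MemoRow, userId: string): boolean {
  return memo.user_id === userId || memo.shared_with_users.includes(userId);
}
```

Note there are no joins in the predicate, which is exactly why the recursion disappears on the database side.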
## Benefits
1. **No More Recursion**: The simple policies avoid complex joins that caused the infinite recursion
2. **Better Performance**: Array lookups are much faster than multiple table joins
3. **Automatic Maintenance**: The triggers keep everything in sync without requiring code changes
4. **Same Functionality**: Users still get the same sharing behavior, just implemented more efficiently
## Verification
You can verify the solution is working by checking:
```sql
-- Check the data in our helper column for a specific memo
SELECT id, title, user_id, shared_with_users
FROM memos
WHERE id = 'your-memo-id';
```
This should show the memo with a list of user IDs in the `shared_with_users` array, including both the memo owner and all members of spaces the memo is shared with.
## Troubleshooting
If you encounter issues with the sharing functionality:
1. Check if the triggers are properly updating the `shared_with_users` array
2. Verify that the `space_members` table is correctly populated
3. Ensure the `memo_spaces` table correctly links memos to spaces
You can manually update the `shared_with_users` array for testing:
```sql
UPDATE memos
SET shared_with_users = array_append(shared_with_users, 'user-uuid-here')
WHERE id = 'memo-id-here';
```


@ -0,0 +1,186 @@
# Memoro Space Sharing - Security Review
This document provides a security review of the denormalized access control solution implemented to fix the infinite recursion issue in Memoro's space sharing functionality.
## Security Assessment Summary
**Overall Security Rating: ✅ SECURE**
The denormalized access control approach maintains the same security model while improving performance and reliability. This approach is commonly used in high-security applications to avoid complex RLS policy joins while maintaining strict access controls.
## Detailed Security Analysis
### 1. Access Control Integrity
✅ **Authorization Logic Preserved**
- The solution maintains the same access rules - users can only access memos they own or that are shared with them through spaces.
- No security bypass vectors were introduced in the implementation.
✅ **Permission Validation**
- The solution continues to use PostgreSQL's RLS mechanism for enforcing access control policies.
- The `auth.uid()` function ensures that user identity is validated by the database system.
### 2. Data Exposure Risks
✅ **No Sensitive Data Leakage**
- The `shared_with_users` array only contains user IDs, not sensitive information.
- No memo content is exposed to unauthorized users.
✅ **Data Integrity**
- Triggers ensure that the denormalized data (shared_with_users array) stays consistent with the normalized data model.
- All updates to the denormalized column are performed atomically.
### 3. SQL Injection Protection
✅ **Parameterized Values**
- All user inputs are properly parameterized through the `auth.uid()` function.
- No user-supplied values are concatenated directly into SQL queries.
✅ **PL/pgSQL Security**
- The trigger functions use proper SQL constructs without any dynamic SQL.
- All database operations use static, prepared statements.
### 4. Trigger Implementation Security
✅ **Atomic Updates**
- Updates are performed atomically, ensuring no inconsistent states.
- PostgreSQL's transaction safety ensures rollbacks on errors.
✅ **Privilege Control**
- The triggers operate with database-level permissions, not user-level permissions.
- This ensures consistent enforcement of access controls regardless of the user context.
## Improvements Implemented
### 1. Error Logging in Triggers
We've enhanced the trigger functions with comprehensive error logging:
```sql
CREATE OR REPLACE FUNCTION update_memo_shared_with_users()
RETURNS TRIGGER AS $$
DECLARE
affected_rows integer;
error_message text;
BEGIN
-- Handle NULL memo_id
IF NEW.memo_id IS NULL THEN
RAISE LOG 'update_memo_shared_with_users: memo_id is NULL, skipping update';
RETURN NEW;
END IF;
BEGIN
-- Update the shared_with_users array for the affected memo
UPDATE memos
SET shared_with_users = (
SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = NEW.memo_id
)
WHERE id = NEW.memo_id;
GET DIAGNOSTICS affected_rows = ROW_COUNT;
RAISE LOG 'update_memo_shared_with_users: Updated memo %, affected % rows', NEW.memo_id, affected_rows;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT;
RAISE LOG 'update_memo_shared_with_users error: %', error_message;
-- Don't re-raise the exception to avoid breaking functionality
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
### 2. NULL Handling in Triggers
We've added explicit NULL handling to prevent errors when processing NULL values:
```sql
CREATE OR REPLACE FUNCTION update_all_memos_for_space()
RETURNS TRIGGER AS $$
DECLARE
affected_rows integer;
error_message text;
space_id_value uuid;
BEGIN
-- Handle NULL space_id in both NEW and OLD
IF (TG_OP = 'DELETE' AND OLD.space_id IS NULL) OR
(TG_OP IN ('INSERT', 'UPDATE') AND NEW.space_id IS NULL) THEN
RAISE LOG 'update_all_memos_for_space: space_id is NULL, skipping update';
RETURN COALESCE(NEW, OLD);
END IF;
-- Determine which space_id to use
IF TG_OP = 'DELETE' THEN
space_id_value := OLD.space_id;
ELSE
space_id_value := NEW.space_id;
END IF;
RAISE LOG 'update_all_memos_for_space: Processing space_id %', space_id_value;
BEGIN
-- For each memo in the space, update its shared_with_users array
UPDATE memos m
SET shared_with_users = (
SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = m.id
AND ms.space_id = space_id_value
)
WHERE m.id IN (
SELECT memo_id FROM memo_spaces WHERE space_id = space_id_value
);
GET DIAGNOSTICS affected_rows = ROW_COUNT;
RAISE LOG 'update_all_memos_for_space: Updated memos for space %, affected % rows',
space_id_value, affected_rows;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT;
RAISE LOG 'update_all_memos_for_space error: %', error_message;
-- Don't re-raise the exception to avoid breaking functionality
END;
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
```
## Additional Security Considerations
### 1. Public Memo Access
For full feature parity, consider adding a policy for public memos:
```sql
CREATE POLICY "Users can view public memos"
ON memos FOR SELECT
USING (is_public = true);
```
### 2. Admin Access Policy
If needed, consider adding an administrative access policy:
```sql
CREATE POLICY "Admins can access all memos"
ON memos FOR ALL
USING (auth.uid() IN (SELECT id FROM admin_users));
```
### 3. Monitoring Considerations
- **Log Review**: Regularly review PostgreSQL logs for trigger errors using the new logging functionality
- **Performance Monitoring**: Monitor the performance of the array-based policy evaluation
- **Access Auditing**: Consider implementing an audit log for sensitive memo access
## Conclusion
The denormalized access control solution is secure and follows database security best practices. The improvements made to error logging and NULL handling further enhance the robustness of the implementation.
This approach not only resolves the infinite recursion issue but does so in a way that maintains the security integrity of the system while improving its performance and reliability.


@ -0,0 +1,115 @@
# Fixing "Payload String Too Long" Error in Supabase Realtime
## The Problem
During transcription completion, the memoro service was failing with the following error:
```
Error: Failed to update memo: payload string too long
PostgreSQL Error Code: 22023
```
This error occurred when updating memos with transcription results, even for relatively small transcriptions (4-30 minutes of audio).
## Initial Assumptions (Incorrect)
### Assumption 1: HTTP Request Payload Limit
**What we thought:** The error was caused by Supabase's HTTP API having a small payload size limit for PATCH requests.
**Evidence that seemed to support this:**
- Error occurred during database UPDATE operations
- Supabase logs showed PATCH requests with `content_length` of 9.7KB and 28KB
- The error message "payload string too long" seemed to indicate a size limit
**Why this was wrong:** Supabase's HTTP API actually supports payloads up to 1GB, far exceeding our transcription data size.
### Assumption 2: Database Column Size Limit
**What we thought:** The PostgreSQL database had column size limits that were being exceeded.
**Evidence that seemed to support this:**
- Database columns were `text` and `jsonb` types
- Large speaker diarization data (utterances, speakers) was being stored
**Why this was wrong:** PostgreSQL `text` and `jsonb` columns can store much larger data than we were sending.
## The Real Issue: PostgreSQL NOTIFY Payload Limit
### Root Cause
The error was actually caused by **Supabase Realtime's internal use of PostgreSQL's NOTIFY/LISTEN mechanism**, which has a hard limit of **8000 bytes** for payload size.
### How It Works
1. **Supabase Realtime** uses PostgreSQL's NOTIFY/LISTEN for real-time updates
2. When a row is updated, the **entire row data** is sent through NOTIFY
3. Our transcription data (source with utterances + transcript + metadata) exceeded 8000 bytes
4. PostgreSQL threw error code **22023: "payload string too long"**
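The size mismatch is easy to demonstrate. Using the 46,465-character transcript length from the logs above (the exact row shape here is illustrative):

```typescript
// A full-row payload with a realistic transcript blows past the NOTIFY limit,
// while an ID-only payload stays tiny.
const fullRow = {
  id: 'a1b2c3',
  transcript: 'x'.repeat(46_465), // transcript length seen in the error logs
  utterances: [],
};

const fullPayload = JSON.stringify({ new: fullRow });
const idOnlyPayload = JSON.stringify({ id: fullRow.id });

const NOTIFY_LIMIT = 8000; // PostgreSQL NOTIFY payload limit in bytes

console.log(fullPayload.length > NOTIFY_LIMIT); // true — would fail with 22023
console.log(idOnlyPayload.length < 200); // true — well under the limit
```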
### Key Evidence
- Error code `22023` is specifically related to NOTIFY payload limits
- The error occurred even with small payloads (9.7KB) because NOTIFY limit is only 8KB
- Updates worked fine when not subscribed to realtime
## The Solution
### What We Did
Changed the table's **replica identity** to only include the primary key:
```sql
ALTER TABLE public.memos REPLICA IDENTITY USING INDEX memos_pkey;
```
### How This Fixes It
1. **Before:** Realtime notifications included all column data from the updated row
2. **After:** Realtime notifications only include the primary key (`id`)
3. **Result:** NOTIFY payload stays well under the 8000-byte limit
### Impact on Frontend
- **Realtime notifications now only contain the memo `id`**
- **Frontend must fetch full memo data separately** when receiving notifications
- **More efficient:** Avoids sending large payloads unnecessarily
- **No breaking changes:** Frontend can handle this gracefully
## Alternative Solutions Considered
### Option 1: Split Updates
**Approach:** Break large updates into multiple smaller PATCH requests
**Why rejected:** Wouldn't solve the NOTIFY payload issue
### Option 2: Disable Realtime
**Approach:** Remove memos table from `supabase_realtime` publication
**Why rejected:** Frontend needs realtime updates for user experience
### Option 3: Column-Specific Publication
**Approach:** Only publish specific columns to realtime
**Why rejected:** Complex to maintain and still risky with metadata growth
## Prevention for Future
### Database Design
- **Consider realtime payload size** when designing tables with large columns
- **Separate large data** into different tables if realtime is needed
- **Use replica identity wisely** to control what data is sent via NOTIFY
### Development Process
- **Test with realistic data sizes** including speaker diarization data
- **Monitor Supabase logs** for realtime-related errors
- **Understand the difference** between HTTP payload limits and NOTIFY limits
## Key Learnings
1. **Supabase Realtime uses PostgreSQL NOTIFY** with an 8000-byte limit
2. **Error code 22023** specifically indicates NOTIFY payload issues
3. **Replica identity controls** what data is sent in realtime notifications
4. **HTTP API limits and NOTIFY limits are completely different** systems
5. **Real-time efficiency** often benefits from sending only IDs, not full data
## Documentation References
- [PostgreSQL NOTIFY Documentation](https://www.postgresql.org/docs/current/sql-notify.html)
- [Supabase Realtime Quotas](https://supabase.com/docs/guides/realtime/quotas)
- [PostgreSQL Replica Identity](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY)
## Resolution Status
**Fixed**: Transcription completion now works without payload errors
**Tested**: Updates to large transcript and source data work correctly
**Verified**: Realtime notifications still function (with ID-only payloads)


@ -0,0 +1,582 @@
# Memoro Settings Management Guide
The Memoro service provides comprehensive user settings management through integration with the Mana Core Middleware. This allows users to manage both Memoro-specific settings and general profile information.
## Overview
The settings system provides:
- **Memoro-specific settings** (data usage acceptance, preferences)
- **General profile management** (name, avatar)
- **Centralized storage** via Mana Core's `app_settings` JSONB field
- **JWT-authenticated access** with user isolation
## Architecture
```
Frontend → Memoro Service → Mana Core Middleware → Supabase Database
```
1. **Frontend** calls Memoro service settings endpoints
2. **Memoro Service** forwards requests to Mana Core Middleware
3. **Mana Core** updates the `users.app_settings` JSONB field
4. **Response** flows back through the chain
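The middle hop of this chain can be sketched as a thin forwarder. This is a minimal illustration, not the actual memoro-service code: the `/users/me/app-settings` path, the injected `httpPatch` helper, and the `baseUrl` default are all assumptions; only the "wrap under a `memoro` key and forward the user's JWT" behavior is taken from the flow above.

```typescript
// Hypothetical pass-through: Memoro service -> Mana Core Middleware.
// The HTTP call is injected so the forwarding logic stays testable.
async function updateMemoroSettings(
  jwt: string,
  settings: Record<string, unknown>,
  httpPatch: (
    url: string,
    headers: Record<string, string>,
    body: unknown,
  ) => Promise<unknown>,
  baseUrl = 'http://localhost:3000', // MANA_SERVICE_URL in deployment
): Promise<unknown> {
  // Forward the user's JWT unchanged so Mana Core can resolve the user,
  // and namespace the settings under "memoro" inside app_settings.
  return httpPatch(
    `${baseUrl}/users/me/app-settings`, // assumed Mana Core endpoint
    {
      Authorization: `Bearer ${jwt}`,
      'Content-Type': 'application/json',
    },
    { memoro: settings },
  );
}
```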
## API Endpoints
All endpoints require JWT authentication via `Authorization: Bearer <token>` header.
### 1. Get All User Settings
```http
GET /settings
Authorization: Bearer <jwt-token>
```
**Response:**
```json
{
"settings": {
"memoro": {
"dataUsageAcceptance": true
},
"other_apps": {
"theme": "dark"
}
}
}
```
### 2. Get Memoro-Specific Settings
```http
GET /settings/memoro
Authorization: Bearer <jwt-token>
```
**Response:**
```json
{
"settings": {
"dataUsageAcceptance": true,
"emailNewsletterOptIn": false,
"language": "en",
"defaultSpaceId": "uuid-here"
}
}
```
### 3. Update Memoro Settings
```http
PATCH /settings/memoro
Authorization: Bearer <jwt-token>
Content-Type: application/json
{
"dataUsageAcceptance": true,
"language": "en",
"customSetting": "value"
}
```
**Response:**
```json
{
"success": true,
"settings": {
"memoro": {
"dataUsageAcceptance": true,
"language": "en",
"customSetting": "value"
}
},
"message": "Memoro settings updated successfully"
}
```
### 4. Update Data Usage Acceptance (Convenience Endpoint)
```http
PATCH /settings/memoro/data-usage
Authorization: Bearer <jwt-token>
Content-Type: application/json
{
"accepted": true
}
```
**Response:**
```json
{
"success": true,
"settings": {
"memoro": {
"dataUsageAcceptance": true
}
},
"message": "Data usage accepted successfully"
}
```
### 5. Update Email Newsletter Opt-In (Convenience Endpoint)
```http
PATCH /settings/memoro/email-newsletter
Authorization: Bearer <jwt-token>
Content-Type: application/json
{
"optIn": true
}
```
**Response:**
```json
{
"success": true,
"settings": {
"memoro": {
"emailNewsletterOptIn": true
}
},
"message": "Email newsletter opted in successfully"
}
```
### 6. Update User Profile
```http
PATCH /settings/profile
Authorization: Bearer <jwt-token>
Content-Type: application/json
{
"firstName": "John",
"lastName": "Doe",
"avatarUrl": "https://example.com/avatar.jpg"
}
```
**Response:**
```json
{
"success": true,
"user": {
"id": "uuid",
"email": "user@example.com",
"first_name": "John",
"last_name": "Doe",
"avatar_url": "https://example.com/avatar.jpg",
"app_settings": {
"memoro": {
"dataUsageAcceptance": true
}
}
},
"message": "Profile updated successfully"
}
```
## Testing Guide
### Local Development Setup
1. **Start Services:**
```bash
# Terminal 1 - Mana Core Middleware
cd mana-core-middleware
npm run start:dev # Port 3000
# Terminal 2 - Memoro Service
cd memoro-service
npm run start:dev # Port 3001
```
2. **Get JWT Token:**
```bash
export TOKEN=$(curl -s -X POST "http://localhost:3000/auth/signin?appId=973da0c1-b479-4dac-a1b0-ed09c72caca8" \
-H "Content-Type: application/json" \
-d '{"email": "nils.weiser@memoro.ai", "password": "Test123!"}' | jq -r '.accessToken')
echo "Token: $TOKEN"
```
### Test Commands
```bash
# Get all settings
curl -H "Authorization: Bearer $TOKEN" \
"http://localhost:3001/settings"
# Get Memoro settings only
curl -H "Authorization: Bearer $TOKEN" \
"http://localhost:3001/settings/memoro"
# Accept data usage
curl -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"accepted": true}' \
"http://localhost:3001/settings/memoro/data-usage"
# Opt into email newsletter
curl -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"optIn": true}' \
"http://localhost:3001/settings/memoro/email-newsletter"
# Update multiple Memoro settings
curl -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"dataUsageAcceptance": false, "emailNewsletterOptIn": true, "language": "de"}' \
"http://localhost:3001/settings/memoro"
# Update profile
curl -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"firstName": "Nils", "lastName": "Weiser"}' \
"http://localhost:3001/settings/profile"
```
### Expected Results
**Empty settings (first time):**
```json
{
"settings": {}
}
```
**After data usage acceptance:**
```json
{
"settings": {
"memoro": {
"dataUsageAcceptance": true
}
}
}
```
**After multiple updates:**
```json
{
"settings": {
"memoro": {
"dataUsageAcceptance": false,
"emailNewsletterOptIn": true,
"language": "de"
}
}
}
```
## Memoro Settings Schema
### Core Settings
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `dataUsageAcceptance` | boolean | `false` | Whether user accepts data usage for AI processing |
| `emailNewsletterOptIn` | boolean | `false` | Whether user opts into email newsletter |
| `language` | string | `"en"` | User's preferred language |
| `defaultSpaceId` | string | `null` | Default space for new recordings |
### Future Settings (Examples)
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `autoTranscribe` | boolean | `true` | Auto-start transcription on upload |
| `notificationPreferences` | object | `{}` | Email/push notification settings |
| `transcriptionSettings` | object | `{}` | Transcription quality, language detection |
| `uiPreferences` | object | `{}` | Theme, layout preferences |
## Error Handling
### Common Errors
**400 Bad Request - Missing fields:**
```json
{
"message": "At least one setting field is required",
"error": "Bad Request",
"statusCode": 400
}
```
**400 Bad Request - Invalid data type:**
```json
{
"message": "accepted field must be a boolean",
"error": "Bad Request",
"statusCode": 400
}
```
**401 Unauthorized:**
```json
{
"message": "Unauthorized",
"statusCode": 401
}
```
### Service Communication Errors
If Mana Core Middleware is down:
```json
{
"message": "Failed to update Memoro settings: Failed to connect to Mana Core",
"error": "Bad Request",
"statusCode": 400
}
```
## Frontend Integration Examples
### React Hook Example
```typescript
// useSettings.ts
import { useState, useEffect } from 'react';
interface MemoroSettings {
dataUsageAcceptance?: boolean;
emailNewsletterOptIn?: boolean;
language?: string;
defaultSpaceId?: string;
}
export function useSettings() {
const [settings, setSettings] = useState<MemoroSettings>({});
const [loading, setLoading] = useState(false);
const getSettings = async () => {
setLoading(true);
try {
const response = await fetch('/settings/memoro', {
headers: { Authorization: `Bearer ${getToken()}` }
});
const data = await response.json();
setSettings(data.settings);
} catch (error) {
console.error('Failed to get settings:', error);
} finally {
setLoading(false);
}
};
const updateDataUsage = async (accepted: boolean) => {
try {
const response = await fetch('/settings/memoro/data-usage', {
method: 'PATCH',
headers: {
Authorization: `Bearer ${getToken()}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({ accepted })
});
if (response.ok) {
await getSettings(); // Refresh settings
}
} catch (error) {
console.error('Failed to update data usage:', error);
}
};
const updateEmailNewsletter = async (optIn: boolean) => {
try {
const response = await fetch('/settings/memoro/email-newsletter', {
method: 'PATCH',
headers: {
Authorization: `Bearer ${getToken()}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({ optIn })
});
if (response.ok) {
await getSettings(); // Refresh settings
}
} catch (error) {
console.error('Failed to update email newsletter:', error);
}
};
return {
settings,
loading,
getSettings,
updateDataUsage,
updateEmailNewsletter
};
}
```
### Data Usage Consent Component
```typescript
// DataUsageConsent.tsx
import React from 'react';
import { useSettings } from './useSettings';
export function DataUsageConsent() {
const { settings, updateDataUsage, loading } = useSettings();
const handleAccept = () => updateDataUsage(true);
const handleDecline = () => updateDataUsage(false);
if (settings.dataUsageAcceptance === true) {
return <div>✅ Data usage accepted</div>;
}
return (
<div className="consent-modal">
<h2>Data Usage Consent</h2>
<p>Do you consent to AI processing of your audio data?</p>
<div className="buttons">
<button
onClick={handleAccept}
disabled={loading}
>
Accept
</button>
<button
onClick={handleDecline}
disabled={loading}
>
Decline
</button>
</div>
</div>
);
}
```
### Email Newsletter Subscription Component
```typescript
// EmailNewsletterSubscription.tsx
import React from 'react';
import { useSettings } from './useSettings';
export function EmailNewsletterSubscription() {
const { settings, updateEmailNewsletter, loading } = useSettings();
const handleOptIn = () => updateEmailNewsletter(true);
const handleOptOut = () => updateEmailNewsletter(false);
return (
<div className="newsletter-subscription">
<h3>Email Newsletter</h3>
<p>Stay updated with Memoro features and news</p>
<div className="newsletter-status">
{settings.emailNewsletterOptIn ? (
<div>
<span>✅ Subscribed to newsletter</span>
<button
onClick={handleOptOut}
disabled={loading}
className="opt-out-btn"
>
Unsubscribe
</button>
</div>
) : (
<div>
<span>📧 Not subscribed</span>
<button
onClick={handleOptIn}
disabled={loading}
className="opt-in-btn"
>
Subscribe
</button>
</div>
)}
</div>
</div>
);
}
```
### Combined Settings Component
```typescript
// SettingsPage.tsx
import React from 'react';
import { DataUsageConsent } from './DataUsageConsent';
import { EmailNewsletterSubscription } from './EmailNewsletterSubscription';
export function SettingsPage() {
return (
<div className="settings-page">
<h1>Memoro Settings</h1>
<section className="privacy-settings">
<h2>Privacy & Data</h2>
<DataUsageConsent />
</section>
<section className="communication-settings">
<h2>Communication</h2>
<EmailNewsletterSubscription />
</section>
</div>
);
}
```
## Configuration
### Environment Variables
Ensure `MANA_SERVICE_URL` is properly configured:
```env
# memoro-service/.env
MANA_SERVICE_URL=http://localhost:3000 # Local development
# or
MANA_SERVICE_URL=https://mana-core-middleware.run.app # Production
```
### Service Dependencies
The settings endpoints depend on:
1. **Mana Core Middleware** being accessible
2. **Supabase database** connection
3. **JWT authentication** working properly
## Monitoring
### Health Checks
Monitor settings service health:
```bash
# Check if Memoro service can reach Mana Core
curl -H "Authorization: Bearer $TOKEN" \
"http://localhost:3001/settings/memoro"
```
### Logging
Look for these log patterns:
```
[SettingsClientService] Error getting user settings: Failed to connect
[SettingsController] Failed to update Memoro settings: User not found
```
## Future Enhancements
1. **Settings Validation**: JSON schema validation for settings
2. **Settings Migration**: Automatic migration for schema changes
3. **Settings Sync**: Real-time sync across devices
4. **Settings Backup**: Export/import functionality
5. **Settings Analytics**: Track which settings are most used

@@ -0,0 +1,483 @@
# Simplified SpaceSyncService
This document outlines a simplified version of the `SpaceSyncService` that leverages the new database-level triggers and denormalized access control approach.
## Simplified Implementation
```typescript
import { Injectable, Logger } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { firstValueFrom } from 'rxjs';
import { createClient, SupabaseClient } from '@supabase/supabase-js';
import { v4 as uuidv4 } from 'uuid';
@Injectable()
export class SpaceSyncService {
private readonly logger = new Logger(SpaceSyncService.name);
private supabase: SupabaseClient;
private manaApiUrl: string;
private adminToken: string;
constructor(
private readonly configService: ConfigService,
private readonly httpService: HttpService,
) {
// Initialize Supabase client
this.supabase = createClient(
this.configService.get<string>('MEMORO_SUPABASE_URL'),
this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY'),
);
this.manaApiUrl = this.configService.get<string>('MANA_CORE_URL');
this.adminToken = this.configService.get<string>('ADMIN_TOKEN');
}
/**
* Create or update a space member record
* This is called when a user is added to a space or their role changes
*/
async syncSpaceMembership(
spaceId: string,
userId: string,
role: string,
addedBy?: string,
): Promise<{ success: boolean; message: string }> {
try {
// Generate a UUID up front; it is only used when a new record must be inserted
const id = uuidv4();
// Check if the membership already exists
const { data: existingMember } = await this.supabase
.from('space_members')
.select('*')
.eq('space_id', spaceId)
.eq('user_id', userId)
.single();
if (existingMember) {
// Update existing membership
const { error } = await this.supabase
.from('space_members')
.update({
role,
added_by: addedBy || existingMember.added_by,
})
.eq('space_id', spaceId)
.eq('user_id', userId);
if (error) throw error;
this.logger.log(`Updated space membership for user ${userId} in space ${spaceId}`);
} else {
// Create new membership
const { error } = await this.supabase
.from('space_members')
.insert({
id,
space_id: spaceId,
user_id: userId,
role,
added_by: addedBy || userId,
added_at: new Date(),
});
if (error) throw error;
this.logger.log(`Added user ${userId} to space ${spaceId}`);
}
return { success: true, message: 'Space membership synced successfully' };
} catch (error) {
this.logger.error(`Error syncing space membership: ${error.message}`, error.stack);
return { success: false, message: error.message };
}
}
/**
* Remove a user from a space
*/
async removeSpaceMembership(
spaceId: string,
userId: string,
): Promise<{ success: boolean; message: string }> {
try {
const { error } = await this.supabase
.from('space_members')
.delete()
.eq('space_id', spaceId)
.eq('user_id', userId);
if (error) throw error;
this.logger.log(`Removed user ${userId} from space ${spaceId}`);
return { success: true, message: 'Space membership removed successfully' };
} catch (error) {
this.logger.error(`Error removing space membership: ${error.message}`, error.stack);
return { success: false, message: error.message };
}
}
/**
* Sync all members for a specific space
* Used when initializing a space or ensuring all memberships are in sync
*/
async syncSpaceMembers(
spaceId: string,
): Promise<{ success: boolean; message: string; count?: number }> {
try {
// Fetch space members from middleware
const response = await firstValueFrom(
this.httpService.get(`${this.manaApiUrl}/api/spaces/${spaceId}/members`, {
headers: { Authorization: `Bearer ${this.adminToken}` },
}),
);
const members = response.data.members || [];
if (members.length === 0) {
return { success: true, message: 'No members found for space', count: 0 };
}
// First, delete all existing members for this space to avoid stale records
await this.supabase
.from('space_members')
.delete()
.eq('space_id', spaceId);
// Then insert all current members
const membersToInsert = members.map((member) => ({
id: uuidv4(),
space_id: spaceId,
user_id: member.user_id,
role: member.role,
added_by: member.added_by || member.user_id,
added_at: new Date(),
}));
const { error } = await this.supabase
.from('space_members')
.insert(membersToInsert);
if (error) throw error;
this.logger.log(`Synced ${members.length} members for space ${spaceId}`);
return {
success: true,
message: `Synced ${members.length} members for space ${spaceId}`,
count: members.length
};
} catch (error) {
this.logger.error(`Error syncing space members: ${error.message}`, error.stack);
return { success: false, message: error.message };
}
}
/**
* Sync all spaces for a user
* Used to ensure a user has access to all their spaces
*/
async syncUserSpaces(
userId: string,
): Promise<{ success: boolean; message: string; count?: number }> {
try {
// Fetch user's spaces from middleware
const response = await firstValueFrom(
this.httpService.get(`${this.manaApiUrl}/api/users/${userId}/spaces`, {
headers: { Authorization: `Bearer ${this.adminToken}` },
}),
);
const spaces = response.data.spaces || [];
if (spaces.length === 0) {
return { success: true, message: 'No spaces found for user', count: 0 };
}
// Process each space the user is a member of
let successCount = 0;
for (const space of spaces) {
const result = await this.syncSpaceMembers(space.id);
if (result.success) {
successCount++;
}
}
this.logger.log(`Synced ${successCount} spaces for user ${userId}`);
return {
success: true,
message: `Synced ${successCount} spaces for user ${userId}`,
count: successCount
};
} catch (error) {
this.logger.error(`Error syncing user spaces: ${error.message}`, error.stack);
return { success: false, message: error.message };
}
}
/**
* Run the migration to set up the space_members table and triggers
* Only needs to be run once when setting up a new environment
*/
async runSpaceMembersMigration(): Promise<{ success: boolean; message: string }> {
try {
const { data: tableExists } = await this.supabase.rpc('check_table_exists', {
table_name: 'space_members',
});
if (tableExists) {
return { success: true, message: 'Space members table already exists' };
}
// Create space_members table
const createTableSQL = `
-- Create space_members table
CREATE TABLE IF NOT EXISTS public.space_members (
id UUID PRIMARY KEY,
space_id UUID NOT NULL REFERENCES public.spaces(id) ON DELETE CASCADE,
user_id UUID NOT NULL,
role TEXT NOT NULL,
added_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
added_by UUID,
UNIQUE(space_id, user_id)
);
-- Add shared_with_users column to memos table
ALTER TABLE public.memos ADD COLUMN IF NOT EXISTS shared_with_users UUID[] DEFAULT '{}'::uuid[];
-- Create function for updating shared_with_users
CREATE OR REPLACE FUNCTION update_memo_shared_with_users()
RETURNS TRIGGER AS $$
DECLARE
affected_rows integer;
error_message text;
BEGIN
-- Handle NULL memo_id
IF NEW.memo_id IS NULL THEN
RAISE LOG 'update_memo_shared_with_users: memo_id is NULL, skipping update';
RETURN NEW;
END IF;
BEGIN
-- Update the shared_with_users array for the affected memo
UPDATE memos
SET shared_with_users = (
SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = NEW.memo_id
)
WHERE id = NEW.memo_id;
GET DIAGNOSTICS affected_rows = ROW_COUNT;
RAISE LOG 'update_memo_shared_with_users: Updated memo %, affected % rows', NEW.memo_id, affected_rows;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT;
RAISE LOG 'update_memo_shared_with_users error: %', error_message;
-- Don't re-raise the exception to avoid breaking functionality
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Create function for updating all memos in a space
CREATE OR REPLACE FUNCTION update_all_memos_for_space()
RETURNS TRIGGER AS $$
DECLARE
affected_rows integer;
error_message text;
space_id_value uuid;
BEGIN
-- Handle NULL space_id in both NEW and OLD
IF (TG_OP = 'DELETE' AND OLD.space_id IS NULL) OR
(TG_OP IN ('INSERT', 'UPDATE') AND NEW.space_id IS NULL) THEN
RAISE LOG 'update_all_memos_for_space: space_id is NULL, skipping update';
RETURN COALESCE(NEW, OLD);
END IF;
-- Determine which space_id to use
IF TG_OP = 'DELETE' THEN
space_id_value := OLD.space_id;
ELSE
space_id_value := NEW.space_id;
END IF;
RAISE LOG 'update_all_memos_for_space: Processing space_id %', space_id_value;
BEGIN
-- For each memo in the space, update its shared_with_users array
UPDATE memos m
SET shared_with_users = (
SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = m.id
AND ms.space_id = space_id_value
)
WHERE m.id IN (
SELECT memo_id FROM memo_spaces WHERE space_id = space_id_value
);
GET DIAGNOSTICS affected_rows = ROW_COUNT;
RAISE LOG 'update_all_memos_for_space: Updated memos for space %, affected % rows',
space_id_value, affected_rows;
EXCEPTION WHEN OTHERS THEN
GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT;
RAISE LOG 'update_all_memos_for_space error: %', error_message;
-- Don't re-raise the exception to avoid breaking functionality
END;
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
-- Create triggers
DROP TRIGGER IF EXISTS memo_spaces_insert_update_trigger ON memo_spaces;
CREATE TRIGGER memo_spaces_insert_update_trigger
AFTER INSERT OR UPDATE ON memo_spaces
FOR EACH ROW
EXECUTE FUNCTION update_memo_shared_with_users();
DROP TRIGGER IF EXISTS memo_spaces_delete_trigger ON memo_spaces;
CREATE TRIGGER memo_spaces_delete_trigger
AFTER DELETE ON memo_spaces
FOR EACH ROW
EXECUTE FUNCTION update_memo_shared_with_users();
DROP TRIGGER IF EXISTS space_members_trigger ON space_members;
CREATE TRIGGER space_members_trigger
AFTER INSERT OR UPDATE OR DELETE ON space_members
FOR EACH ROW
EXECUTE FUNCTION update_all_memos_for_space();
-- Create simplified RLS policies
ALTER TABLE public.memos ENABLE ROW LEVEL SECURITY;
DO $$
BEGIN
EXECUTE (
SELECT string_agg('DROP POLICY IF EXISTS "' || policyname || '" ON memos;', ' ')
FROM pg_policies
WHERE tablename = 'memos'
);
END $$;
-- Create simplified policies that use the denormalized column
CREATE POLICY "Users can access own memos"
ON memos FOR ALL
USING (user_id = auth.uid()::text);
CREATE POLICY "Users can view shared memos"
ON memos FOR SELECT
USING (auth.uid()::uuid = ANY(shared_with_users));
-- Add policy for public memos if needed
CREATE POLICY "Users can view public memos"
ON memos FOR SELECT
USING (is_public = true);
`;
// Run the migration SQL
const { error } = await this.supabase.rpc('run_sql', { sql: createTableSQL });
if (error) throw error;
// Initialize shared_with_users arrays for existing memos
await this.supabase.rpc('run_sql', {
sql: `
-- Populate the shared_with_users column for all existing memos
UPDATE memos m
SET shared_with_users = (
SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
FROM memo_spaces ms
JOIN space_members sm ON ms.space_id = sm.space_id
WHERE ms.memo_id = m.id
);
`
});
this.logger.log('Space members migration completed successfully');
return { success: true, message: 'Space members migration completed successfully' };
} catch (error) {
this.logger.error(`Error running space members migration: ${error.message}`, error.stack);
return { success: false, message: error.message };
}
}
}
```
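One thing worth noting about `syncSpaceMembers` above: it deletes every row for the space and then reinserts, so the space briefly has no members at all between the two statements. A diff-based reconciliation avoids that window. The sketch below is not part of the service — the names (`Member`, `diffMembers`) are illustrative — but shows the pure computation such a sync would start from:

```typescript
// Sketch only: compute the minimal change set needed to reconcile local
// space_members rows with the middleware's member list, instead of the
// delete-all-then-reinsert approach used by syncSpaceMembers.
interface Member {
  user_id: string;
  role: string;
}

interface MemberDiff {
  toInsert: Member[]; // present remotely, missing locally
  toUpdate: Member[]; // present in both, but the role changed
  toDelete: string[]; // user_ids present locally but no longer remote
}

function diffMembers(remote: Member[], local: Member[]): MemberDiff {
  const localByUser = new Map(local.map((m) => [m.user_id, m]));
  const remoteIds = new Set(remote.map((m) => m.user_id));

  const toInsert: Member[] = [];
  const toUpdate: Member[] = [];
  for (const member of remote) {
    const existing = localByUser.get(member.user_id);
    if (!existing) {
      toInsert.push(member);
    } else if (existing.role !== member.role) {
      toUpdate.push(member);
    }
  }

  const toDelete = local
    .filter((m) => !remoteIds.has(m.user_id))
    .map((m) => m.user_id);

  return { toInsert, toUpdate, toDelete };
}
```

Each bucket then maps onto single-row operations the service already has (insert/update via `syncSpaceMembership`, delete via `removeSpaceMembership`), so the space never passes through an empty state.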
## Key Differences from Original Implementation
1. **Simplified Methods**:
- Removed any complex recursive RLS policy management
- Focuses only on CRUD operations for the `space_members` table
- Leverages database triggers for maintaining the denormalized data
2. **Reduced Complexity**:
- The service now has a clear, focused purpose: manage space membership data
- All complex access control logic is now handled at the database level
- The migration script includes the triggers and denormalized approach
3. **Improved Error Handling**:
- More robust error handling and logging throughout
- Better handling of edge cases like missing data
- Includes NULL checks and logging in database triggers
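The invariant the triggers maintain can be stated as a pure function: a memo's `shared_with_users` is the distinct set of `user_id`s across the members of every space the memo belongs to. A TypeScript mirror of that computation (row shapes are illustrative, not the actual schema types) is handy for reasoning about, or unit-testing, the denormalization:

```typescript
// Sketch only: a TypeScript mirror of what the update_memo_shared_with_users
// trigger computes in SQL.
interface MemoSpaceRow {
  memo_id: string;
  space_id: string;
}

interface SpaceMemberRow {
  space_id: string;
  user_id: string;
}

function computeSharedWithUsers(
  memoId: string,
  memoSpaces: MemoSpaceRow[],
  spaceMembers: SpaceMemberRow[],
): string[] {
  // Spaces this memo is shared into
  const spaceIds = new Set(
    memoSpaces.filter((ms) => ms.memo_id === memoId).map((ms) => ms.space_id),
  );
  // Distinct members across those spaces (mirrors array_agg(DISTINCT sm.user_id));
  // sorted here only to make the result deterministic
  const users = new Set(
    spaceMembers.filter((sm) => spaceIds.has(sm.space_id)).map((sm) => sm.user_id),
  );
  return [...users].sort();
}
```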
## Controller Methods
The corresponding controller methods would be simplified as well:
```typescript
@Controller('memoro')
export class SpaceSyncController {
constructor(private readonly spaceSyncService: SpaceSyncService) {}
@Post('spaces/:spaceId/sync-members')
async syncSpaceMembers(@Param('spaceId') spaceId: string) {
return this.spaceSyncService.syncSpaceMembers(spaceId);
}
@Post('users/:userId/sync-spaces')
async syncUserSpaces(@Param('userId') userId: string) {
return this.spaceSyncService.syncUserSpaces(userId);
}
@Post('run-space-members-migration')
async runSpaceMembersMigration() {
return this.spaceSyncService.runSpaceMembersMigration();
}
}
```
## Integration with MemoroService
The MemoroService would need only minimal integration with the SpaceSyncService:
```typescript
// In MemoroService.ts
async createMemoroSpace(userId: string, spaceName: string, token: string) {
const space = await this.spacesService.createSpace(userId, spaceName, token);
// Only need to maintain the space_members table
await this.spaceSyncService.syncSpaceMembership(space.id, userId, 'owner');
return space;
}
async inviteUserToSpace(userId: string, spaceId: string, email: string, role: string, token: string) {
const result = await this.spacesService.addSpaceMember(spaceId, email, role, token);
if (result.invitee_id) {
// Only need to maintain the space_members table when a user is invited
await this.spaceSyncService.syncSpaceMembership(spaceId, result.invitee_id, role, userId);
}
return result;
}
async removeUserFromSpace(userId: string, spaceId: string, memberId: string, token: string) {
const result = await this.spacesService.removeSpaceMember(spaceId, memberId, token);
// Remove from space_members table
await this.spaceSyncService.removeSpaceMembership(spaceId, memberId);
return result;
}
```

@@ -0,0 +1,25 @@
# Server Configuration
PORT=3001
NODE_ENV=development
# Service URLs
MANA_SERVICE_URL=https://mana-core-middleware-111768794939.europe-west3.run.app
AUDIO_MICROSERVICE_URL=https://audio-microservice-111768794939.europe-west3.run.app
# App Configuration
MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8
# JWT Configuration for Service Role Authentication
MANA_JWT_SECRET=your_mana_jwt_secret
# Mana Core Service Key (for service-to-service credit operations)
MANA_SUPABASE_SECRET_KEY=your_mana_service_role_key
# Memoro Supabase Configuration
MEMORO_SUPABASE_URL=https://your-memoro-project.supabase.co
MEMORO_SUPABASE_ANON_KEY=your-memoro-anon-key
MEMORO_SUPABASE_SERVICE_KEY=your-memoro-service-key
# Test Configuration
TEST_EMAIL=your_test_email@example.com
TEST_PASSWORD=your_test_password

@@ -0,0 +1,21 @@
module.exports = {
moduleFileExtensions: ['js', 'json', 'ts'],
rootDir: 'src',
testRegex: '.*\\.spec\\.ts$',
transform: {
'^.+\\.(t|j)s$': 'ts-jest',
},
collectCoverageFrom: [
'**/*.(t|j)s',
'!**/*.module.ts',
'!**/main.ts',
'!**/*.interface.ts',
'!**/*.dto.ts',
],
coverageDirectory: '../coverage',
testEnvironment: 'node',
moduleNameMapper: {
'^src/(.*)$': '<rootDir>/$1',
},
setupFilesAfterEnv: ['<rootDir>/../test/jest-setup.ts'],
};

@@ -0,0 +1,50 @@
{
"name": "@memoro/backend",
"version": "0.1.0",
"description": "Memoro microservice for Mana core system",
"main": "dist/main.js",
"scripts": {
"build": "nest build",
"start": "nest start",
"start:dev": "nest start --watch",
"start:debug": "nest start --debug --watch",
"start:prod": "node dist/src/main",
"lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix",
"test": "jest",
"test:watch": "jest --watch",
"test:cov": "jest --coverage",
"test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand"
},
"dependencies": {
"@nestjs/axios": "^3.0.0",
"@nestjs/common": "^10.0.0",
"@nestjs/config": "^3.0.0",
"@nestjs/core": "^10.0.0",
"@nestjs/platform-express": "^10.0.0",
"@supabase/supabase-js": "^2.49.5",
"@types/jsonwebtoken": "^9.0.7",
"@types/multer": "^1.4.12",
"@types/uuid": "^10.0.0",
"axios": "^1.9.0",
"jsonwebtoken": "^9.0.2",
"multer": "^2.0.0",
"music-metadata": "^7.14.0",
"reflect-metadata": "^0.1.13",
"rxjs": "^7.8.0",
"uuid": "^11.1.0"
},
"devDependencies": {
"@nestjs/cli": "^10.0.0",
"@nestjs/testing": "^10.0.0",
"@types/express": "^4.17.17",
"@types/jest": "^29.5.2",
"@types/node": "^20.3.1",
"@types/supertest": "^2.0.12",
"jest": "^29.5.0",
"supertest": "^6.3.3",
"ts-jest": "^29.1.0",
"ts-node": "^10.9.1",
"tsconfig-paths": "^4.2.0",
"typescript": "^5.1.3"
}
}

@@ -0,0 +1,106 @@
#!/usr/bin/env ts-node
/**
* Script to analyze and standardize audio path field usage in Memoro production database
*
* STANDARDIZATION GOAL:
* - Standardize all backend services to use 'audio_path' field consistently
* - Handle legacy 'path' field references for backward compatibility
* - Migrate any remaining 'path' fields to 'audio_path' in database
*
* CURRENT STATUS (August 25, 2025):
* - Most memos already use 'audio_path' field (92%)
* - Small subset uses legacy 'path' field (7.3%)
* - Backend services now standardized to use 'audio_path'
*
* MIGRATION APPROACH:
* - Update backend services to prioritize 'audio_path' over 'path'
* - Migrate database records from 'path' to 'audio_path'
* - Maintain backward compatibility during transition
*
* SQL QUERIES USED:
*/
// Query 1: Overall statistics
const overallStatsQuery = `
SELECT
COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as memos_with_audio_path,
COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as memos_with_path,
COUNT(*) as total_memos,
COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL AND source->>'path' IS NULL) as only_audio_path,
COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL AND source->>'path' IS NOT NULL) as both_fields
FROM memos
WHERE source IS NOT NULL;
`;
// Query 2: Monthly breakdown
const monthlyBreakdownQuery = `
SELECT
DATE_TRUNC('month', created_at) as month,
COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as with_audio_path,
COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as with_path,
COUNT(*) as total
FROM memos
WHERE source IS NOT NULL
GROUP BY month
ORDER BY month DESC
LIMIT 12;
`;
// Query 3: Daily breakdown for transition period
const dailyTransitionQuery = `
SELECT
DATE_TRUNC('day', created_at) as day,
COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as with_audio_path,
COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as with_path,
COUNT(*) as total
FROM memos
WHERE source IS NOT NULL
AND created_at >= '2025-05-01'
AND created_at < '2025-07-01'
GROUP BY day
ORDER BY day;
`;
// Migration query to standardize all memos to use 'audio_path' field
const migrationQuery = `
-- DRY RUN: Check what would be migrated from 'path' to 'audio_path'
SELECT
id,
source->>'path' as current_path,
source->>'audio_path' as current_audio_path,
created_at
FROM memos
WHERE source->>'path' IS NOT NULL
AND source->>'audio_path' IS NULL
LIMIT 10;
-- ACTUAL MIGRATION (run with caution):
-- Migrate 'path' field to 'audio_path' field
-- UPDATE memos
-- SET source = jsonb_set(
-- source - 'path',
-- '{audio_path}',
-- source->'path'
-- )
-- WHERE source->>'path' IS NOT NULL
-- AND source->>'audio_path' IS NULL;
`;
console.log('Audio Path Field Analysis Script');
console.log('================================');
console.log('');
console.log('This script documents the analysis of the legacy audio_path field usage');
console.log('in the Memoro production database.');
console.log('');
console.log('Key Findings:');
console.log('- 92% of memos (16,223) already use the audio_path field');
console.log('- Only 7.3% (1,286) use the legacy path field');
console.log('- The fields are mutually exclusive (no memo has both)');
console.log('- Brief transition attempted in May-June 2025 but mostly reverted');
console.log('');
console.log('Backend Standardization Complete:');
console.log('- All backend services now standardized to use "audio_path" field');
console.log('- Legacy "path" field handling maintained for backward compatibility');
console.log('- Database migration needed for remaining 7.3% with "path" field');
console.log('- Edge Functions already use "audio_path" consistently');

@@ -0,0 +1,54 @@
/**
* Central AI model configuration
*
* All models, endpoints, and presets in one place.
* Switching models only requires editing this file.
*/
export interface GeminiConfig {
model: string;
endpoint: string;
temperature: number;
maxOutputTokens: number;
}
export interface AzureOpenAIConfig {
endpoint: string;
deployment: string;
apiVersion: string;
temperature: number;
maxTokens: number;
}
export interface GenerateOptions {
temperature?: number;
maxTokens?: number;
}
// ── Primary: Google Gemini ──
// Note: gemini-2.0-flash will be deprecated in June 2026 → gemini-2.0-flash-001 is the stable pin
export const GEMINI_DEFAULT: GeminiConfig = {
model: 'gemini-2.0-flash-001',
endpoint: 'https://generativelanguage.googleapis.com/v1beta/models',
temperature: 0.7,
maxOutputTokens: 8192,
};
// ── Fallback: Azure OpenAI ──
export const AZURE_DEFAULT: AzureOpenAIConfig = {
endpoint: 'https://memoroseopenai.openai.azure.com',
deployment: 'gpt-4.1-mini-se',
apiVersion: '2025-01-01-preview',
temperature: 0.7,
maxTokens: 8192,
};
// ── Task-specific presets ──
export const AI_PRESETS = {
headline: { temperature: 0.7, maxTokens: 300 },
memory: { temperature: 0.7, maxTokens: 8192 },
translation: { temperature: 0.3, maxTokens: 8192 },
selection: { temperature: 0.3, maxTokens: 2048 },
} as const;
export type AiPreset = keyof typeof AI_PRESETS;

@@ -0,0 +1,12 @@
import { Module } from '@nestjs/common';
import { AiService } from './ai.service';
import { HeadlineService } from './headline/headline.service';
import { MemoryService } from './memory/memory.service';
import { QuestionService } from './memory/question.service';
import { UserPromptService } from './shared/user-prompt.service';
@Module({
providers: [AiService, HeadlineService, MemoryService, QuestionService, UserPromptService],
exports: [AiService, HeadlineService, MemoryService, QuestionService, UserPromptService],
})
export class AiModule {}

@@ -0,0 +1,141 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import {
GEMINI_DEFAULT,
AZURE_DEFAULT,
type GeminiConfig,
type AzureOpenAIConfig,
type GenerateOptions,
} from './ai-model.config';
@Injectable()
export class AiService {
private readonly logger = new Logger(AiService.name);
private readonly geminiApiKey: string;
private readonly azureApiKey: string;
constructor(private configService: ConfigService) {
this.geminiApiKey = this.configService.get<string>('GEMINI_API_KEY', '');
this.azureApiKey = this.configService.get<string>('AZURE_OPENAI_KEY', '');
}
/**
* Generates text with Gemini (primary), falling back to Azure OpenAI.
* Returns the raw text content.
*/
async generateText(
prompt: string,
options?: GenerateOptions & { systemInstruction?: string }
): Promise<string> {
// Primary: Gemini
if (this.geminiApiKey) {
const result = await this.callGemini(prompt, this.geminiApiKey, options);
if (result !== null) return result;
this.logger.warn('Gemini failed, falling back to Azure OpenAI');
} else {
this.logger.warn('No Gemini API key, using Azure OpenAI directly');
}
// Fallback: Azure
if (!this.azureApiKey) {
throw new Error('No AI provider available: both Gemini and Azure keys missing');
}
const result = await this.callAzure(prompt, options);
if (result !== null) return result;
throw new Error('All AI providers failed');
}
private async callGemini(
prompt: string,
apiKey: string,
options?: GenerateOptions & { systemInstruction?: string }
): Promise<string | null> {
const config: GeminiConfig = {
...GEMINI_DEFAULT,
temperature: options?.temperature ?? GEMINI_DEFAULT.temperature,
maxOutputTokens: options?.maxTokens ?? GEMINI_DEFAULT.maxOutputTokens,
};
try {
const url = `${config.endpoint}/${config.model}:generateContent?key=${apiKey}`;
const body: any = {
contents: [{ parts: [{ text: prompt }] }],
generationConfig: {
temperature: config.temperature,
maxOutputTokens: config.maxOutputTokens,
},
};
if (options?.systemInstruction) {
body.systemInstruction = {
parts: [{ text: options.systemInstruction }],
};
}
const start = Date.now();
const response = await fetch(url, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
});
if (!response.ok) {
const errorText = await response.text();
this.logger.error(`Gemini API error (${response.status}): ${errorText}`);
return null;
}
const data = await response.json();
const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || '';
this.logger.debug(
`Gemini ${config.model} responded in ${Date.now() - start}ms (${content.length} chars)`
);
return content || null;
} catch (error) {
this.logger.error(`Gemini call failed: ${error instanceof Error ? error.message : error}`);
return null;
}
}
private async callAzure(prompt: string, options?: GenerateOptions): Promise<string | null> {
const config: AzureOpenAIConfig = {
...AZURE_DEFAULT,
temperature: options?.temperature ?? AZURE_DEFAULT.temperature,
maxTokens: options?.maxTokens ?? AZURE_DEFAULT.maxTokens,
};
try {
const url = `${config.endpoint}/openai/deployments/${config.deployment}/chat/completions?api-version=${config.apiVersion}`;
const start = Date.now();
const response = await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'api-key': this.azureApiKey,
},
body: JSON.stringify({
messages: [{ role: 'user', content: prompt }],
max_tokens: config.maxTokens,
temperature: config.temperature,
}),
});
if (!response.ok) {
const errorText = await response.text();
this.logger.error(`Azure OpenAI error (${response.status}): ${errorText}`);
return null;
}
const data = await response.json();
const content = data.choices?.[0]?.message?.content?.trim() || '';
this.logger.debug(
`Azure ${config.deployment} responded in ${Date.now() - start}ms (${content.length} chars)`
);
return content || null;
} catch (error) {
this.logger.error(`Azure call failed: ${error instanceof Error ? error.message : error}`);
return null;
}
}
}

@@ -0,0 +1,219 @@
/**
* System prompts for headline generation in multiple languages
*
* The prompts are used to generate headlines and intros for memos.
* Each language has its own prompt containing the language-specific requirements and formatting.
*/

/**
* Interface for the prompt configuration
*/

/**
* System prompts for headline generation
*
* Supported languages (63):
* - de: German
* - en: English
* - fr: French
* - es: Spanish
* - it: Italian
* - nl: Dutch
* - pt: Portuguese
* - ru: Russian
* - ja: Japanese
* - ko: Korean
* - zh: Chinese
* - ar: Arabic
* - hi: Hindi
* - tr: Turkish
* - pl: Polish
* - da: Danish
* - sv: Swedish
* - nb: Norwegian
* - fi: Finnish
* - cs: Czech
* - hu: Hungarian
* - el: Greek
* - he: Hebrew
* - id: Indonesian
* - th: Thai
* - vi: Vietnamese
* - uk: Ukrainian
* - ro: Romanian
* - bg: Bulgarian
* - ca: Catalan
* - hr: Croatian
* - sk: Slovak
* - et: Estonian
* - lv: Latvian
* - lt: Lithuanian
* - bn: Bengali
* - ms: Malay
* - ta: Tamil
* - te: Telugu
* - ur: Urdu
* - mr: Marathi
* - gu: Gujarati
* - ml: Malayalam
* - kn: Kannada
* - pa: Punjabi
* - af: Afrikaans
* - fa: Persian
* - ka: Georgian
* - is: Icelandic
* - sq: Albanian
* - az: Azerbaijani
* - eu: Basque
* - gl: Galician
* - kk: Kazakh
* - mk: Macedonian
* - sr: Serbian
* - sl: Slovenian
* - mt: Maltese
* - hy: Armenian
* - uz: Uzbek
* - ga: Irish
* - cy: Welsh
* - fil: Filipino
*/
export const SYSTEM_PROMPTS = {
headline: {
// German
de: 'Du bist ein Assistent, der Texte analysiert und zusammenfasst. Deine Aufgabe ist es, für den folgenden Text zwei Dinge zu erstellen:\n1. Eine kurze, prägnante Headline (maximal 8 Wörter)\n2. Ein kurzes Intro, das den Inhalt des Textes in 2-3 Sätzen zusammenfasst und neugierig macht\n\nFormatiere deine Antwort genau so:\nHEADLINE: [Deine Headline hier]\nINTRO: [Dein Intro hier]',
// English
en: 'You are an assistant that analyzes and summarizes texts. Your task is to create two things for the following text:\n1. A short, concise headline (maximum 8 words)\n2. A brief intro that summarizes the content of the text in 2-3 sentences and makes the reader curious\n\nFormat your answer exactly like this:\nHEADLINE: [Your headline here]\nINTRO: [Your intro here]',
// French
fr: 'Vous êtes un assistant qui analyse et résume des textes. Votre tâche est de créer deux choses pour le texte suivant :\n1. Un titre court et concis (maximum 8 mots)\n2. Une brève introduction qui résume le contenu du texte en 2-3 phrases et éveille la curiosité du lecteur\n\nFormatez votre réponse exactement comme ceci :\nHEADLINE: [Votre titre ici]\nINTRO: [Votre introduction ici]',
// Spanish
es: 'Eres un asistente que analiza y resume textos. Tu tarea es crear dos cosas para el siguiente texto:\n1. Un título breve y conciso (máximo 8 palabras)\n2. Una breve introducción que resuma el contenido del texto en 2-3 frases y despierte la curiosidad del lector\n\nFormatea tu respuesta exactamente así:\nHEADLINE: [Tu título aquí]\nINTRO: [Tu introducción aquí]',
// Italian
it: 'Sei un assistente che analizza e riassume testi. Il tuo compito è creare due cose per il seguente testo:\n1. Un titolo breve e conciso (massimo 8 parole)\n2. Una breve introduzione che riassume il contenuto del testo in 2-3 frasi e suscita la curiosità del lettore\n\nFormatta la tua risposta esattamente così:\nHEADLINE: [Il tuo titolo qui]\nINTRO: [La tua introduzione qui]',
// Dutch
nl: 'Je bent een assistent die teksten analyseert en samenvat. Je taak is om twee dingen te maken voor de volgende tekst:\n1. Een korte, bondige kop (maximaal 8 woorden)\n2. Een korte intro die de inhoud van de tekst in 2-3 zinnen samenvat en de lezer nieuwsgierig maakt\n\nFormatteer je antwoord precies zo:\nHEADLINE: [Jouw kop hier]\nINTRO: [Jouw intro hier]',
// Portuguese
pt: 'Você é um assistente que analisa e resume textos. Sua tarefa é criar duas coisas para o seguinte texto:\n1. Uma manchete breve e concisa (máximo 8 palavras)\n2. Uma breve introdução que resume o conteúdo do texto em 2-3 frases e desperta a curiosidade do leitor\n\nFormate sua resposta exatamente assim:\nHEADLINE: [Sua manchete aqui]\nINTRO: [Sua introdução aqui]',
// Russian
ru: 'Вы помощник, который анализирует и резюмирует тексты. Ваша задача - создать две вещи для следующего текста:\n1. Короткий, лаконичный заголовок (максимум 8 слов)\n2. Краткое введение, которое резюмирует содержание текста в 2-3 предложениях и вызывает любопытство у читателя\n\nФорматируйте ваш ответ точно так:\nHEADLINE: [Ваш заголовок здесь]\nINTRO: [Ваше введение здесь]',
// Japanese
ja: 'あなたはテキストを分析し要約するアシスタントです。次のテキストに対して2つのことを作成するのがあなたの仕事です:\n1. 短く簡潔な見出し最大8語\n2. テキストの内容を2-3文で要約し、読者の興味を引く短い導入文\n\n次のように正確にフォーマットしてください\nHEADLINE: [ここにあなたの見出し]\nINTRO: [ここにあなたの導入文]',
// Korean
ko: '당신은 텍스트를 분석하고 요약하는 어시스턴트입니다. 다음 텍스트에 대해 두 가지를 만드는 것이 당신의 임무입니다:\n1. 짧고 간결한 헤드라인 (최대 8단어)\n2. 텍스트의 내용을 2-3문장으로 요약하고 독자의 호기심을 자극하는 짧은 소개\n\n다음과 같이 정확히 형식을 맞춰주세요:\nHEADLINE: [여기에 당신의 헤드라인]\nINTRO: [여기에 당신의 소개]',
// Chinese (Simplified)
zh: '你是一个分析和总结文本的助手。你的任务是为以下文本创建两样东西:\n1. 一个简短、简洁的标题最多8个词\n2. 一个简短的介绍用2-3句话总结文本内容并激发读者的好奇心\n\n请严格按照以下格式回答\nHEADLINE: [你的标题]\nINTRO: [你的介绍]',
// Arabic
ar: 'أنت مساعد يحلل ويلخص النصوص. مهمتك هي إنشاء شيئين للنص التالي:\n1. عنوان قصير ومقتضب (8 كلمات كحد أقصى)\n2. مقدمة مختصرة تلخص محتوى النص في 2-3 جمل وتثير فضول القارئ\n\nقم بتنسيق إجابتك بالضبط هكذا:\nHEADLINE: [عنوانك هنا]\nINTRO: [مقدمتك هنا]',
// Hindi
hi: 'आप एक सहायक हैं जो ग्रंथों का विश्लेषण और सारांश करते हैं। निम्नलिखित पाठ के लिए दो चीजें बनाना आपका कार्य है:\n1. एक संक्षिप्त, सटीक शीर्षक (अधिकतम 8 शब्द)\n2. एक संक्षिप्त परिचय जो पाठ की सामग्री को 2-3 वाक्यों में सारांशित करता है और पाठक में जिज्ञासा जगाता है\n\nअपना उत्तर बिल्कुल इस तरह से प्रारूपित करें:\nHEADLINE: [यहाँ आपका शीर्षक]\nINTRO: [यहाँ आपका परिचय]',
// Turkish
tr: 'Metinleri analiz eden ve özetleyen bir asistansınız. Aşağıdaki metin için iki şey oluşturmak sizin göreviniz:\n1. Kısa, özlü bir başlık (maksimum 8 kelime)\n2. Metnin içeriğini 2-3 cümlede özetleyen ve okuyucuyu meraklandıran kısa bir giriş\n\nCevabınızı tam olarak şu şekilde biçimlendirin:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]',
// Polish
pl: 'Jesteś asystentem, który analizuje i streszcza teksty. Twoim zadaniem jest stworzenie dwóch rzeczy dla następującego tekstu:\n1. Krótki, zwięzły nagłówek (maksymalnie 8 słów)\n2. Krótkie wprowadzenie, które streszcza treść tekstu w 2-3 zdaniach i wzbudza ciekawość czytelnika\n\nSformatuj swoją odpowiedź dokładnie tak:\nHEADLINE: [Twój nagłówek tutaj]\nINTRO: [Twoje wprowadzenie tutaj]',
// Danish
da: 'Du er en assistent, der analyserer og sammenfatter tekster. Din opgave er at skabe to ting for følgende tekst:\n1. En kort, præcis overskrift (maksimalt 8 ord)\n2. En kort intro, der sammenfatter tekstens indhold i 2-3 sætninger og gør læseren nysgerrig\n\nFormatter dit svar præcis sådan:\nHEADLINE: [Din overskrift her]\nINTRO: [Dit intro her]',
// Swedish
sv: 'Du är en assistent som analyserar och sammanfattar texter. Din uppgift är att skapa två saker för följande text:\n1. En kort, koncis rubrik (maximalt 8 ord)\n2. En kort intro som sammanfattar textens innehåll i 2-3 meningar och gör läsaren nyfiken\n\nFormatera ditt svar exakt så här:\nHEADLINE: [Din rubrik här]\nINTRO: [Ditt intro här]',
// Norwegian
nb: 'Du er en assistent som analyserer og oppsummerer tekster. Oppgaven din er å lage to ting for følgende tekst:\n1. En kort, presis overskrift (maksimalt 8 ord)\n2. En kort intro som oppsummerer tekstens innhold i 2-3 setninger og gjør leseren nysgjerrig\n\nFormater svaret ditt nøyaktig slik:\nHEADLINE: [Din overskrift her]\nINTRO: [Ditt intro her]',
// Finnish
fi: 'Olet avustaja, joka analysoi ja tiivistää tekstejä. Tehtäväsi on luoda kaksi asiaa seuraavalle tekstille:\n1. Lyhyt, ytimekäs otsikko (enintään 8 sanaa)\n2. Lyhyt johdanto, joka tiivistää tekstin sisällön 2-3 lauseessa ja herättää lukijan uteliaisuuden\n\nMuotoile vastauksesi täsmälleen näin:\nHEADLINE: [Otsikkosi tähän]\nINTRO: [Johdantosi tähän]',
// Czech
cs: 'Jste asistent, který analyzuje a shrnuje texty. Vaším úkolem je vytvořit dvě věci pro následující text:\n1. Krátký, stručný nadpis (maximálně 8 slov)\n2. Krátký úvod, který shrne obsah textu ve 2-3 větách a vzbudí zvědavost čtenáře\n\nNaformátujte svou odpověď přesně takto:\nHEADLINE: [Váš nadpis zde]\nINTRO: [Váš úvod zde]',
// Hungarian
hu: 'Ön egy asszisztens, aki szövegeket elemez és összefoglal. Az Ön feladata, hogy két dolgot hozzon létre a következő szöveghez:\n1. Egy rövid, tömör címsor (maximum 8 szó)\n2. Egy rövid bevezető, amely 2-3 mondatban összefoglalja a szöveg tartalmát és felkelti az olvasó kíváncsiságát\n\nFormázza válaszát pontosan így:\nHEADLINE: [Az Ön címsora itt]\nINTRO: [Az Ön bevezetője itt]',
// Greek
el: 'Είστε ένας βοηθός που αναλύει και συνοψίζει κείμενα. Το καθήκον σας είναι να δημιουργήσετε δύο πράγματα για το ακόλουθο κείμενο:\n1. Έναν σύντομο, περιεκτικό τίτλο (μέγιστο 8 λέξεις)\n2. Μια σύντομη εισαγωγή που συνοψίζει το περιεχόμενο του κειμένου σε 2-3 προτάσεις και προκαλεί την περιέργεια του αναγνώστη\n\nΜορφοποιήστε την απάντησή σας ακριβώς έτσι:\nHEADLINE: [Ο τίτλος σας εδώ]\nINTRO: [Η εισαγωγή σας εδώ]',
// Hebrew
he: 'אתה עוזר שמנתח ומסכם טקסטים. המשימה שלך היא ליצור שני דברים לטקסט הבא:\n1. כותרת קצרה ותמציתית (מקסימום 8 מילים)\n2. הקדמה קצרה שמסכמת את תוכן הטקסט ב-2-3 משפטים ומעוררת סקרנות אצל הקורא\n\nעצב את התשובה שלך בדיוק כך:\nHEADLINE: [הכותרת שלך כאן]\nINTRO: [ההקדמה שלך כאן]',
// Indonesian
id: 'Anda adalah asisten yang menganalisis dan merangkum teks. Tugas Anda adalah membuat dua hal untuk teks berikut:\n1. Judul yang pendek dan ringkas (maksimal 8 kata)\n2. Intro singkat yang merangkum isi teks dalam 2-3 kalimat dan membuat pembaca penasaran\n\nFormat jawaban Anda persis seperti ini:\nHEADLINE: [Judul Anda di sini]\nINTRO: [Intro Anda di sini]',
// Thai
th: 'คุณเป็นผู้ช่วยที่วิเคราะห์และสรุปข้อความ งานของคุณคือการสร้างสองสิ่งสำหรับข้อความต่อไปนี้:\n1. หัวข้อที่สั้นและกระชับ (ไม่เกิน 8 คำ)\n2. บทนำสั้นๆ ที่สรุปเนื้อหาของข้อความใน 2-3 ประโยคและทำให้ผู้อ่านอยากรู้\n\nจัดรูปแบบคำตอบของคุณตามนี้เป๊ะๆ:\nHEADLINE: [หัวข้อของคุณที่นี่]\nINTRO: [บทนำของคุณที่นี่]',
// Vietnamese
vi: 'Bạn là một trợ lý phân tích và tóm tắt văn bản. Nhiệm vụ của bạn là tạo hai thứ cho văn bản sau:\n1. Một tiêu đề ngắn gọn và súc tích (tối đa 8 từ)\n2. Một phần giới thiệu ngắn tóm tắt nội dung văn bản trong 2-3 câu và khơi gợi sự tò mò của người đọc\n\nĐịnh dạng câu trả lời của bạn chính xác như thế này:\nHEADLINE: [Tiêu đề của bạn ở đây]\nINTRO: [Phần giới thiệu của bạn ở đây]',
// Ukrainian
uk: 'Ви помічник, який аналізує та резюмує тексти. Ваше завдання - створити дві речі для наступного тексту:\n1. Короткий, лаконічний заголовок (максимум 8 слів)\n2. Короткий вступ, який резюмує зміст тексту у 2-3 реченнях та викликає цікавість у читача\n\nФорматуйте вашу відповідь точно так:\nHEADLINE: [Ваш заголовок тут]\nINTRO: [Ваш вступ тут]',
// Romanian
ro: 'Sunteți un asistent care analizează și rezumă texte. Sarcina dvs. este să creați două lucruri pentru următorul text:\n1. Un titlu scurt și concis (maximum 8 cuvinte)\n2. O scurtă introducere care rezumă conținutul textului în 2-3 propoziții și trezește curiozitatea cititorului\n\nFormatați răspunsul dvs. exact astfel:\nHEADLINE: [Titlul dvs. aici]\nINTRO: [Introducerea dvs. aici]',
// Bulgarian
bg: 'Вие сте асистент, който анализира и резюмира текстове. Вашата задача е да създадете две неща за следния текст:\n1. Кратко, сбито заглавие (максимум 8 думи)\n2. Кратко въведение, което резюмира съдържанието на текста в 2-3 изречения и предизвиква любопитството на читателя\n\nФорматирайте отговора си точно така:\nHEADLINE: [Вашето заглавие тук]\nINTRO: [Вашето въведение тук]',
// Catalan
ca: 'Ets un assistent que analitza i resumeix textos. La teva tasca és crear dues coses per al següent text:\n1. Un títol breu i concís (màxim 8 paraules)\n2. Una breu introducció que resumeixi el contingut del text en 2-3 frases i desperti la curiositat del lector\n\nFormata la teva resposta exactament així:\nHEADLINE: [El teu títol aquí]\nINTRO: [La teva introducció aquí]',
// Croatian
hr: 'Vi ste asistent koji analizira i sažima tekstove. Vaš zadatak je stvoriti dvije stvari za sljedeći tekst:\n1. Kratak, sažet naslov (maksimalno 8 riječi)\n2. Kratak uvod koji sažima sadržaj teksta u 2-3 rečenice i pobuđuje znatiželju čitatelja\n\nFormatirajte svoj odgovor točno ovako:\nHEADLINE: [Vaš naslov ovdje]\nINTRO: [Vaš uvod ovdje]',
// Slovak
sk: 'Ste asistent, ktorý analyzuje a sumarizuje texty. Vašou úlohou je vytvoriť dve veci pre nasledujúci text:\n1. Krátky, stručný nadpis (maximálne 8 slov)\n2. Krátky úvod, ktorý sumarizuje obsah textu v 2-3 vetách a vzbudí zvedavosť čitateľa\n\nNaformátujte svoju odpoveď presne takto:\nHEADLINE: [Váš nadpis tu]\nINTRO: [Váš úvod tu]',
// Estonian
et: 'Olete assistent, kes analüüsib ja kokkuvõtab tekste. Teie ülesanne on luua kaks asja järgmise teksti jaoks:\n1. Lühike, kokkuvõtlik pealkiri (maksimaalselt 8 sõna)\n2. Lühike sissejuhatus, mis võtab teksti sisu kokku 2-3 lauses ja äratab lugeja uudishimu\n\nVormistage oma vastus täpselt nii:\nHEADLINE: [Teie pealkiri siin]\nINTRO: [Teie sissejuhatus siin]',
// Latvian
lv: 'Jūs esat asistents, kas analizē un apkopo tekstus. Jūsu uzdevums ir izveidot divas lietas šādam tekstam:\n1. Īsu, kodolīgu virsrakstu (maksimums 8 vārdi)\n2. Īsu ievadu, kas apkopo teksta saturu 2-3 teikumos un modina lasītāja ziņkāri\n\nFormatējiet savu atbildi tieši tā:\nHEADLINE: [Jūsu virsraksts šeit]\nINTRO: [Jūsu ievads šeit]',
// Lithuanian
lt: 'Esate asistentas, kuris analizuoja ir apibendrina tekstus. Jūsų užduotis - sukurti du dalykus šiam tekstui:\n1. Trumpą, glaustą antraštę (ne daugiau kaip 8 žodžiai)\n2. Trumpą įvadą, kuris apibendrina teksto turinį 2-3 sakiniais ir žadina skaitytojo smalsumą\n\nSuformatuokite savo atsakymą tiksliai taip:\nHEADLINE: [Jūsų antraštė čia]\nINTRO: [Jūsų įvadas čia]',
// Bengali
bn: 'আপনি একজন সহায়ক যিনি পাঠ্য বিশ্লেষণ এবং সারসংক্ষেপ করেন। নিম্নলিখিত পাঠ্যের জন্য দুটি জিনিস তৈরি করা আপনার কাজ:\n1. একটি সংক্ষিপ্ত, সারগর্ভ শিরোনাম (সর্বোচ্চ ৮টি শব্দ)\n2. একটি সংক্ষিপ্ত ভূমিকা যা ২-৩টি বাক্যে পাঠ্যের বিষয়বস্তু সারসংক্ষেপ করে এবং পাঠকের কৌতূহল জাগায়\n\nআপনার উত্তর ঠিক এভাবে ফরম্যাট করুন:\nHEADLINE: [এখানে আপনার শিরোনাম]\nINTRO: [এখানে আপনার ভূমিকা]',
// Malay
ms: 'Anda adalah pembantu yang menganalisis dan meringkaskan teks. Tugas anda adalah untuk mencipta dua perkara untuk teks berikut:\n1. Tajuk utama yang pendek dan padat (maksimum 8 perkataan)\n2. Pengenalan ringkas yang meringkaskan kandungan teks dalam 2-3 ayat dan menimbulkan rasa ingin tahu pembaca\n\nFormatkan jawapan anda tepat seperti ini:\nHEADLINE: [Tajuk utama anda di sini]\nINTRO: [Pengenalan anda di sini]',
// Tamil
ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து சுருக்கும் உதவியாளர். பின்வரும் உரைக்கு இரண்டு விஷயங்களை உருவாக்குவது உங்கள் பணி:\n1. ஒரு குறுகிய, சுருக்கமான தலைப்பு (அதிகபட்சம் 8 வார்த்தைகள்)\n2. உரையின் உள்ளடக்கத்தை 2-3 வாக்கியங்களில் சுருக்கி வாசகரின் ஆர்வத்தை தூண்டும் குறுகிய அறிமுகம்\n\nஉங்கள் பதிலை சரியாக இப்படி வடிவமைக்கவும்:\nHEADLINE: [இங்கே உங்கள் தலைப்பு]\nINTRO: [இங்கே உங்கள் அறிமுகம்]',
// Telugu
te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి సంక్షిప్తీకరించే సహాయకుడు. కింది టెక్స్ట్ కోసం రెండు విషయాలు సృష్టించడం మీ పని:\n1. ఒక చిన్న, సంక్షిప్త శీర్షిక (గరిష్టంగా 8 పదాలు)\n2. టెక్స్ట్ యొక్క కంటెంట్‌ను 2-3 వాక్యాలలో సంక్షిప్తీకరించి పాఠకుడిలో ఆసక్తిని రేకెత్తించే చిన్న పరిచయం\n\nమీ సమాధానాన్ని సరిగ్గా ఇలా ఫార్మాట్ చేయండి:\nHEADLINE: [ఇక్కడ మీ శీర్షిక]\nINTRO: [ఇక్కడ మీ పరిచయం]',
// Urdu
ur: 'آپ ایک معاون ہیں جو متن کا تجزیہ اور خلاصہ کرتے ہیں۔ مندرجہ ذیل متن کے لیے دو چیزیں بنانا آپ کا کام ہے:\n1. ایک مختصر، جامع سرخی (زیادہ سے زیادہ 8 الفاظ)\n2. ایک مختصر تعارف جو متن کے مواد کو 2-3 جملوں میں خلاصہ کرے اور قاری میں تجسس پیدا کرے\n\nاپنے جواب کو بالکل اس طرح فارمیٹ کریں:\nHEADLINE: [یہاں آپ کی سرخی]\nINTRO: [یہاں آپ کا تعارف]',
// Marathi
mr: 'तुम्ही मजकूरांचे विश्लेषण आणि सारांश करणारे सहाय्यक आहात. पुढील मजकुरासाठी दोन गोष्टी तयार करणे हे तुमचे काम आहे:\n1. एक लहान, संक्षिप्त मथळा (जास्तीत जास्त 8 शब्द)\n2. एक छोटी प्रस्तावना जी मजकुराची सामग्री 2-3 वाक्यांमध्ये सारांशित करते आणि वाचकामध्ये कुतूहल निर्माण करते\n\nतुमचे उत्तर अगदी अशा प्रकारे स्वरूपित करा:\nHEADLINE: [इथे तुमचा मथळा]\nINTRO: [इथे तुमची प्रस्तावना]',
// Gujarati
gu: 'તમે એક સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને સારાંશ કરે છે. નીચેના ટેક્સ્ટ માટે બે વસ્તુઓ બનાવવી એ તમારું કામ છે:\n1. એક ટૂંકું, સંક્ષિપ્ત હેડલાઇન (મહત્તમ 8 શબ્દો)\n2. એક ટૂંકો પરિચય જે ટેક્સ્ટની સામગ્રીને 2-3 વાક્યોમાં સારાંશ આપે અને વાચકમાં જિજ્ઞાસા જગાડે\n\nતમારા જવાબને બરાબર આ રીતે ફોર્મેટ કરો:\nHEADLINE: [અહીં તમારું હેડલાઇન]\nINTRO: [અહીં તમારો પરિચય]',
// Malayalam
ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും സംഗ്രഹിക്കുകയും ചെയ്യുന്ന ഒരു സഹായകനാണ്. ഇനിപ്പറയുന്ന വാചകത്തിനായി രണ്ട് കാര്യങ്ങൾ സൃഷ്ടിക്കുക എന്നതാണ് നിങ്ങളുടെ ജോലി:\n1. ഒരു ചെറിയ, സംക്ഷിപ്ത തലക്കെട്ട് (പരമാവധി 8 വാക്കുകൾ)\n2. വാചകത്തിന്റെ ഉള്ളടക്കം 2-3 വാക്യങ്ങളിൽ സംഗ്രഹിക്കുകയും വായനക്കാരനിൽ ജിജ്ഞാസ ഉണർത്തുകയും ചെയ്യുന്ന ഒരു ചെറിയ ആമുഖം\n\nനിങ്ങളുടെ ഉത്തരം കൃത്യമായി ഇപ്രകാരം ഫോർമാറ്റ് ചെയ്യുക:\nHEADLINE: [ഇവിടെ നിങ്ങളുടെ തലക്കെട്ട്]\nINTRO: [ഇവിടെ നിങ്ങളുടെ ആമുഖം]',
// Kannada
kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಸಾರಾಂಶಗೊಳಿಸುವ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ಕೆಳಗಿನ ಪಠ್ಯಕ್ಕಾಗಿ ಎರಡು ವಿಷಯಗಳನ್ನು ರಚಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ:\n1. ಒಂದು ಸಣ್ಣ, ಸಂಕ್ಷಿಪ್ತ ಶೀರ್ಷಿಕೆ (ಗರಿಷ್ಠ 8 ಪದಗಳು)\n2. ಪಠ್ಯದ ವಿಷಯವನ್ನು 2-3 ವಾಕ್ಯಗಳಲ್ಲಿ ಸಾರಾಂಶಗೊಳಿಸುವ ಮತ್ತು ಓದುಗರಲ್ಲಿ ಕುತೂಹಲವನ್ನು ಹುಟ್ಟಿಸುವ ಒಂದು ಸಣ್ಣ ಪರಿಚಯ\n\nನಿಮ್ಮ ಉತ್ತರವನ್ನು ನಿಖರವಾಗಿ ಈ ರೀತಿ ಫಾರ್ಮ್ಯಾಟ್ ಮಾಡಿ:\nHEADLINE: [ಇಲ್ಲಿ ನಿಮ್ಮ ಶೀರ್ಷಿಕೆ]\nINTRO: [ಇಲ್ಲಿ ನಿಮ್ಮ ಪರಿಚಯ]',
// Punjabi
pa: 'ਤੁਸੀਂ ਇੱਕ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਸੰਖੇਪ ਕਰਦੇ ਹੋ। ਹੇਠਲੇ ਟੈਕਸਟ ਲਈ ਦੋ ਚੀਜ਼ਾਂ ਬਣਾਉਣਾ ਤੁਹਾਡਾ ਕੰਮ ਹੈ:\n1. ਇੱਕ ਛੋਟੀ, ਸੰਖੇਪ ਸਿਰਲੇਖ (ਵੱਧ ਤੋਂ ਵੱਧ 8 ਸ਼ਬਦ)\n2. ਇੱਕ ਛੋਟੀ ਜਾਣ-ਪਛਾਣ ਜੋ ਟੈਕਸਟ ਦੀ ਸਮੱਗਰੀ ਨੂੰ 2-3 ਵਾਕਾਂ ਵਿੱਚ ਸੰਖੇਪ ਕਰੇ ਅਤੇ ਪਾਠਕ ਵਿੱਚ ਉਤਸੁਕਤਾ ਪੈਦਾ ਕਰੇ\n\nਆਪਣੇ ਜਵਾਬ ਨੂੰ ਬਿਲਕੁਲ ਇਸ ਤਰ੍ਹਾਂ ਫਾਰਮੈਟ ਕਰੋ:\nHEADLINE: [ਇੱਥੇ ਤੁਹਾਡੀ ਸਿਰਲੇਖ]\nINTRO: [ਇੱਥੇ ਤੁਹਾਡੀ ਜਾਣ-ਪਛਾਣ]',
// Afrikaans
af: "Jy is 'n assistent wat tekste ontleed en opsom. Jou taak is om twee dinge vir die volgende teks te skep:\n1. 'n Kort, bondige opskrif (maksimum 8 woorde)\n2. 'n Kort inleiding wat die inhoud van die teks in 2-3 sinne opsom en die leser nuuskierig maak\n\nFormateer jou antwoord presies so:\nHEADLINE: [Jou opskrif hier]\nINTRO: [Jou inleiding hier]",
// Persian/Farsi
fa: 'شما دستیاری هستید که متون را تجزیه و تحلیل و خلاصه می‌کند. وظیفه شما ایجاد دو چیز برای متن زیر است:\n1. یک عنوان کوتاه و مختصر (حداکثر 8 کلمه)\n2. یک مقدمه کوتاه که محتوای متن را در 2-3 جمله خلاصه کند و کنجکاوی خواننده را برانگیزد\n\nپاسخ خود را دقیقاً به این شکل قالب‌بندی کنید:\nHEADLINE: [عنوان شما اینجا]\nINTRO: [مقدمه شما اینجا]',
// Georgian
ka: 'თქვენ ხართ ასისტენტი, რომელიც აანალიზებს და აჯამებს ტექსტებს. თქვენი ამოცანაა შემდეგი ტექსტისთვის ორი რამ შექმნათ:\n1. მოკლე, ლაკონური სათაური (მაქსიმუმ 8 სიტყვა)\n2. მოკლე შესავალი, რომელიც აჯამებს ტექსტის შინაარსს 2-3 წინადადებაში და აღძრავს მკითხველის ცნობისმოყვარეობას\n\nგააფორმეთ თქვენი პასუხი ზუსტად ასე:\nHEADLINE: [თქვენი სათაური აქ]\nINTRO: [თქვენი შესავალი აქ]',
// Icelandic
is: 'Þú ert aðstoðarmaður sem greinir og dregur saman texta. Verkefni þitt er að búa til tvö hluti fyrir eftirfarandi texta:\n1. Stuttan, hnitmiðaðan fyrirsögn (að hámarki 8 orð)\n2. Stutta inngang sem dregur saman efni textans í 2-3 setningum og vekur forvitni lesandans\n\nSníðdu svarið þitt nákvæmlega svona:\nHEADLINE: [Fyrirsögnin þín hér]\nINTRO: [Inngangurinn þinn hér]',
// Albanian
sq: 'Ju jeni një asistent që analizon dhe përmbledh tekste. Detyra juaj është të krijoni dy gjëra për tekstin e mëposhtëm:\n1. Një titull të shkurtër dhe të përqendruar (maksimumi 8 fjalë)\n2. Një hyrje të shkurtër që përmbledh përmbajtjen e tekstit në 2-3 fjali dhe ngjall kuriozitenin e lexuesit\n\nFormatoni përgjigjen tuaj saktësisht kështu:\nHEADLINE: [Titulli juaj këtu]\nINTRO: [Hyrja juaj këtu]',
// Azerbaijani
az: 'Siz mətnləri təhlil edən və xülasə çıxaran köməkçisiniz. Sizin vəzifəniz aşağıdakı mətn üçün iki şey yaratmaqdır:\n1. Qısa, dəqiq başlıq (maksimum 8 söz)\n2. Mətnin məzmununu 2-3 cümlədə xülasə edən və oxucunun marağını oyadan qısa giriş\n\nCavabınızı dəqiq belə formatlaşdırın:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]',
// Basque
eu: 'Testuak aztertzen eta laburbildu egiten dituen laguntzaile bat zara. Zure zeregina honako testuarentzat bi gauza sortzea da:\n1. Izenburua labur eta zehatza (gehienez 8 hitz)\n2. Testuaren edukia 2-3 esalditan laburbiltzen duen eta irakurlearen jakin-mina piztuko duen sarrera laburra\n\nErantzuna zehatz-mehatz honela formateatu:\nHEADLINE: [Zure izenburua hemen]\nINTRO: [Zure sarrera hemen]',
// Galician
gl: 'Es un asistente que analiza e resume textos. A túa tarefa é crear dúas cousas para o seguinte texto:\n1. Un título breve e conciso (máximo 8 palabras)\n2. Unha breve introdución que resuma o contido do texto en 2-3 frases e esperte a curiosidade do lector\n\nFormatea a túa resposta exactamente así:\nHEADLINE: [O teu título aquí]\nINTRO: [A túa introdución aquí]',
// Kazakh
kk: 'Сіз мәтіндерді талдайтын және қорытындылайтын көмекшісіз. Сіздің міндетіңіз келесі мәтін үшін екі нәрсе жасау:\n1. Қысқа, нақты тақырып (ең көбі 8 сөз)\n2. Мәтін мазмұнын 2-3 сөйлемде қорытындылайтын және оқырманның қызығушылығын туғызатын қысқа кіріспе\n\nЖауабыңызды дәл осылай пішімдеңіз:\nHEADLINE: [Мұнда сіздің тақырыбыңыз]\nINTRO: [Мұнда сіздің кіріспеңіз]',
// Macedonian
mk: 'Вие сте асистент кој анализира и резимира текстови. Вашата задача е да создадете две работи за следниот текст:\n1. Краток, јасен наслов (максимум 8 зборови)\n2. Краток вовед кој го резимира содржината на текстот во 2-3 реченици и ја буди љубопитноста на читателот\n\nФорматирајте го вашиот одговор точно вака:\nHEADLINE: [Вашиот наслов тука]\nINTRO: [Вашиот вовед тука]',
// Serbian
sr: 'Ви сте асистент који анализира и резимира текстове. Ваш задатак је да направите две ствари за следећи текст:\n1. Кратак, јасан наслов (максимум 8 речи)\n2. Кратак увод који резимира садржај текста у 2-3 реченице и буди радозналост читаоца\n\nФорматирајте ваш одговор тачно овако:\nHEADLINE: [Ваш наслов овде]\nINTRO: [Ваш увод овде]',
// Slovenian
sl: 'Ste pomočnik, ki analizira in povzema besedila. Vaša naloga je ustvariti dve stvari za naslednje besedilo:\n1. Kratek, jedrnat naslov (največ 8 besed)\n2. Kratek uvod, ki povzema vsebino besedila v 2-3 stavkih in prebudi radovednost bralca\n\nOblikujte svoj odgovor natanko tako:\nHEADLINE: [Vaš naslov tukaj]\nINTRO: [Vaš uvod tukaj]',
// Maltese
mt: "Inti assistent li janalizza u jissommarja testi. Il-kompitu tiegħek huwa li toħloq żewġ affarijiet għat-test li ġej:\n1. Intestatura qasira u konċiza (massimu 8 kliem)\n2. Introduzzjoni qasira li tissommarja l-kontenut tat-test f'2-3 sentenzi u tqajjem il-kurżità tal-qarrej\n\nFormatja t-tweġiba tiegħek eżattament hekk:\nHEADLINE: [L-intestatura tiegħek hawn]\nINTRO: [L-introduzzjoni tiegħek hawn]",
// Armenian
hy: 'Դուք օգնական եք, որը վերլուծում և ամփոփում է տեքստեր: Ձեր խնդիրն է ստեղծել երկու բան հետևյալ տեքստի համար:\n1. Կարճ, հակիրճ վերնագիր (առավելագույնը 8 բառ)\n2. Կարճ ներածություն, որը ամփոփում է տեքստի բովանդակությունը 2-3 նախադասությամբ և արթնացնում ընթերցողի հետաքրքրությունը\n\nՁևակերպեք ձեր պատասխանը հենց այսպես:\nHEADLINE: [Ձեր վերնագիրը այստեղ]\nINTRO: [Ձեր ներածությունը այստեղ]',
// Uzbek
uz: "Siz matnlarni tahlil qiluvchi va xulosa chiqaruvchi yordamchisiz. Sizning vazifangiz quyidagi matn uchun ikki narsa yaratishdir:\n1. Qisqa, aniq sarlavha (maksimal 8 so'z)\n2. Matn mazmunini 2-3 jumlada xulosa qiladigan va o'quvchining qiziqishini uyg'otadigan qisqa kirish\n\nJavobingizni aynan shunday formatlang:\nHEADLINE: [Bu yerda sizning sarlavhangiz]\nINTRO: [Bu yerda sizning kirishingiz]",
// Irish
ga: 'Is cúntóir thú a dhéanann anailís agus achoimre ar théacsanna. Is é do thasc dhá rud a chruthú don téacs seo a leanas:\n1. Ceannlíne ghearr, ghonta (8 bhfocal ar a mhéad)\n2. Réamhrá gearr a dhéanann achoimre ar ábhar an téacs i 2-3 abairt agus a spreagann fiosracht an léitheora\n\nFormáidigh do fhreagra díreach mar seo:\nHEADLINE: [Do cheannlíne anseo]\nINTRO: [Do réamhrá anseo]',
// Welsh
cy: "Rydych chi'n gynorthwyydd sy'n dadansoddi ac yn crynhoi testunau. Eich tasg yw creu dau beth ar gyfer y testun canlynol:\n1. Pennawd byr, cryno (uchafswm o 8 gair)\n2. Cyflwyniad byr sy'n crynhoi cynnwys y testun mewn 2-3 brawddeg ac yn ennyn chwilfrydedd y darllenydd\n\nFformatiwch eich ateb yn union fel hyn:\nHEADLINE: [Eich pennawd yma]\nINTRO: [Eich cyflwyniad yma]",
// Filipino
fil: 'Ikaw ay isang katulong na nag-aanalisa at bumubuod ng mga teksto. Ang iyong gawain ay lumikha ng dalawang bagay para sa sumusunod na teksto:\n1. Maikling, malinaw na pamagat (hindi hihigit sa 8 salita)\n2. Maikling panimula na bumubuod sa nilalaman ng teksto sa 2-3 pangungusap at nakakagising ng kuryosidad ng mambabasa\n\nI-format ang iyong sagot nang eksakto tulad nito:\nHEADLINE: [Ang iyong pamagat dito]\nINTRO: [Ang iyong panimula dito]',
},
};
/**
* Helper function to retrieve the headline prompt for a specific language
* @param language Language (e.g. 'de', 'en', 'fr')
* @returns Headline prompt for the given language, or a fallback
*/
export function getHeadlinePrompt(language) {
const lang = language.toLowerCase().split('-')[0]; // e.g. 'de-DE' -> 'de'
// Try the specific language, then German, then English, then the first available one
return (
SYSTEM_PROMPTS.headline[lang] ||
SYSTEM_PROMPTS.headline['de'] ||
SYSTEM_PROMPTS.headline['en'] ||
Object.values(SYSTEM_PROMPTS.headline)[0] ||
'You are an assistant that analyzes and summarizes texts.'
);
}
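The fallback chain in `getHeadlinePrompt` can be sketched in isolation. The snippet below is a hypothetical re-implementation with a trimmed-down prompt map (the three placeholder prompt values are made up); it shows how a regional tag like `'fr-CA'` is reduced to its base language before lookup, and how unknown languages fall through to German, then English, then the first available entry.

```typescript
// Hypothetical, trimmed prompt map standing in for SYSTEM_PROMPTS.headline
const PROMPTS: Record<string, string> = {
  de: 'de-prompt',
  en: 'en-prompt',
  fr: 'fr-prompt',
};

function resolvePrompt(language: string): string {
  // 'fr-CA' -> 'fr': lowercase and strip the region suffix before the lookup
  const lang = language.toLowerCase().split('-')[0];
  // Same fallback order as getHeadlinePrompt: specific -> 'de' -> 'en' -> first entry
  return (
    PROMPTS[lang] ||
    PROMPTS['de'] ||
    PROMPTS['en'] ||
    Object.values(PROMPTS)[0] ||
    'You are an assistant that analyzes and summarizes texts.'
  );
}
```

Because every branch ends in a hard-coded English default, the function can never return `undefined`, which keeps the downstream prompt construction total.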


@@ -0,0 +1,239 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient } from '@supabase/supabase-js';
import { AiService } from '../ai.service';
import { AI_PRESETS } from '../ai-model.config';
import { SYSTEM_PROMPTS } from './headline.prompts';
@Injectable()
export class HeadlineService {
private readonly logger = new Logger(HeadlineService.name);
private readonly supabaseUrl: string;
private readonly supabaseServiceKey: string;
constructor(
private aiService: AiService,
private configService: ConfigService
) {
this.supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL', '');
this.supabaseServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY', '');
}
/**
* Generates headline + intro from a transcript.
*/
async generateHeadlineAndIntro(
transcript: string,
language = 'de'
): Promise<{ headline: string; intro: string }> {
const prompt = this.buildPrompt(transcript, language);
try {
const content = await this.aiService.generateText(prompt, AI_PRESETS.headline);
const result = this.parseResponse(content);
this.logger.debug(`Headline generated: "${result.headline}" (lang=${language})`);
return result;
} catch (error) {
this.logger.error(
`Headline generation failed: ${error instanceof Error ? error.message : error}`
);
return { headline: 'Neue Aufnahme', intro: 'Keine Zusammenfassung verfügbar.' };
}
}
/**
* Full pipeline: load memo → generate headline → update memo → send broadcast.
*/
async processHeadlineForMemo(memoId: string): Promise<{ headline: string; intro: string }> {
const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey);
// Set processing status
await this.setProcessingStatus(supabase, memoId, 'processing');
try {
// Load the memo
const { data: memo, error: memoError } = await supabase
.from('memos')
.select('*')
.eq('id', memoId)
.single();
if (memoError || !memo) {
throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`);
}
// Extract the transcript
const transcript = this.extractTranscript(memo);
if (!transcript) {
await this.setErrorStatus(supabase, memoId, 'Kein Transkript im Memo gefunden');
throw new Error('No transcript found in memo');
}
// Determine the language
const language = this.detectLanguage(memo);
// Generate the headline
const { headline, intro } = await this.generateHeadlineAndIntro(transcript, language);
// Update the memo
const { error: updateError } = await supabase
.from('memos')
.update({
title: headline,
intro,
updated_at: new Date().toISOString(),
})
.eq('id', memoId);
if (updateError) {
throw new Error(`Memo update failed: ${updateError.message}`);
}
// Send broadcast (fire & forget)
this.sendBroadcast(supabase, memoId, headline, intro).catch((err) =>
this.logger.warn(`Broadcast failed for memo ${memoId}: ${err}`)
);
// Set status to completed
await this.setCompletedStatus(supabase, memoId, { headline, intro, language });
this.logger.log(`Headline processed for memo ${memoId}: "${headline}"`);
return { headline, intro };
} catch (error) {
const msg = error instanceof Error ? error.message : String(error);
await this.setErrorStatus(supabase, memoId, msg);
throw error;
}
}
// ── Private Helpers ──
private buildPrompt(transcript: string, language: string): string {
const baseLanguage = language.split('-')[0].toLowerCase();
const systemPrompt =
SYSTEM_PROMPTS.headline[baseLanguage] ||
SYSTEM_PROMPTS.headline['de'] ||
SYSTEM_PROMPTS.headline['en'];
return `${systemPrompt}\n\n${transcript}`;
}
private parseResponse(content: string): { headline: string; intro: string } {
const headlineMatch = content.match(/HEADLINE:\s*(.+?)(?=\nINTRO:|$)/s);
const introMatch = content.match(/INTRO:\s*(.+?)$/s);
return {
headline: headlineMatch?.[1]?.trim() || 'Neue Aufnahme',
intro: introMatch?.[1]?.trim() || 'Keine Zusammenfassung verfügbar.',
};
}
private extractTranscript(memo: any): string {
// Utterances (preferred)
if (memo.source?.utterances?.length > 0) {
return [...memo.source.utterances]
.sort((a: any, b: any) => (a.offset || 0) - (b.offset || 0))
.map((u: any) => u.text)
.filter(Boolean)
.join(' ');
}
// Direct transcript fields
if (memo.transcript) return memo.transcript;
if (memo.source?.transcript) return memo.source.transcript;
if (memo.source?.content) return memo.source.content;
// Combined recordings
if (memo.source?.type === 'combined' && memo.source?.additional_recordings) {
return memo.source.additional_recordings
.map((rec: any) => {
if (rec.utterances?.length > 0) {
return [...rec.utterances]
.sort((a: any, b: any) => (a.offset || 0) - (b.offset || 0))
.map((u: any) => u.text)
.filter(Boolean)
.join(' ');
}
return rec.transcript || '';
})
.filter(Boolean)
.join('\n\n');
}
return '';
}
private detectLanguage(memo: any): string {
if (memo.source?.primary_language) return memo.source.primary_language;
if (memo.source?.languages?.[0]) return memo.source.languages[0];
if (memo.metadata?.primary_language) return memo.metadata.primary_language;
return 'de';
}
private async setProcessingStatus(supabase: any, memoId: string, status: string): Promise<void> {
try {
await supabase.rpc('set_memo_process_status', {
p_memo_id: memoId,
p_process_name: 'headline_and_intro',
p_status: status,
p_timestamp: new Date().toISOString(),
});
} catch (err) {
this.logger.error(`Failed to set processing status for ${memoId}: ${err}`);
}
}
private async setCompletedStatus(supabase: any, memoId: string, details: any): Promise<void> {
try {
await supabase.rpc('set_memo_process_status_with_details', {
p_memo_id: memoId,
p_process_name: 'headline_and_intro',
p_status: 'completed',
p_timestamp: new Date().toISOString(),
p_details: details,
});
} catch (err) {
this.logger.error(`Failed to set completed status for ${memoId}: ${err}`);
}
}
private async setErrorStatus(supabase: any, memoId: string, errorMsg: string): Promise<void> {
try {
await supabase.rpc('set_memo_process_error', {
p_memo_id: memoId,
p_process_name: 'headline_and_intro',
p_timestamp: new Date().toISOString(),
p_reason: errorMsg,
p_details: null,
});
} catch (err) {
this.logger.error(`Failed to set error status for ${memoId}: ${err}`);
}
}
private async sendBroadcast(
supabase: any,
memoId: string,
headline: string,
intro: string
): Promise<void> {
const channel = supabase.channel(`memo-updates-${memoId}`);
await new Promise<void>((resolve) => {
channel.subscribe(async (status: string) => {
if (status === 'SUBSCRIBED') {
await channel.send({
type: 'broadcast',
event: 'memo-updated',
payload: {
type: 'memo-updated',
memoId,
changes: { title: headline, intro, updated_at: new Date().toISOString() },
source: 'headline-ai-service',
},
});
supabase.removeChannel(channel);
resolve();
}
});
});
}
}
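`parseResponse` above relies on two anchored regexes over the `HEADLINE:` / `INTRO:` wire format that the prompts request. The standalone sketch below reproduces that logic outside the service so the format can be checked in isolation; the sample response text is invented for illustration.

```typescript
// Standalone copy of the parseResponse() regex logic, for illustration only.
function parseHeadlineResponse(content: string): { headline: string; intro: string } {
  // The headline ends where the INTRO line begins (or at end of input);
  // the 's' flag lets '.' cross line breaks, '+?' keeps the match lazy.
  const headlineMatch = content.match(/HEADLINE:\s*(.+?)(?=\nINTRO:|$)/s);
  // The intro runs from 'INTRO:' to the end of the response, including newlines.
  const introMatch = content.match(/INTRO:\s*(.+?)$/s);
  return {
    headline: headlineMatch?.[1]?.trim() || 'Neue Aufnahme',
    intro: introMatch?.[1]?.trim() || 'Keine Zusammenfassung verfügbar.',
  };
}

// Invented sample response in the format the prompts ask the model to emit.
const sample =
  'HEADLINE: Quarterly planning recap\nINTRO: The team reviewed Q3 goals.\nNext steps were assigned.';
```

Note that a response missing both markers silently falls back to the German defaults (`'Neue Aufnahme'` / `'Keine Zusammenfassung verfügbar.'`), the same values the service uses when AI generation fails outright.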


@@ -0,0 +1,75 @@
/**
* System prompts for memory creation in various languages
*
* The prompts are used as the system prompt for the AI messages
* in order to generate consistent and helpful responses.
*/
/**
* Interface for the prompt configuration
*/
/**
* System prompts for memory creation
*
* Supported languages:
* - de: German
* - en: English
* - fr: French
* - es: Spanish
* - it: Italian
* - nl: Dutch
* - pt: Portuguese
* - ru: Russian
* - ja: Japanese
* - ko: Korean
* - zh: Chinese
* - ar: Arabic
* - hi: Hindi
* - tr: Turkish
* - pl: Polish
*/
export const SYSTEM_PROMPTS = {
system: {
// German
de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte präzise, strukturiert und hilfreich. Antworte in plain text.',
// English
en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process transcripts of conversations according to the given instructions. Respond precisely, structured, and helpfully. Respond in plain text.',
// French
fr: 'Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez de manière précise, structurée et utile. Répondez en texte brut.',
// Spanish
es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde de forma precisa, estructurada y útil. Responde en texto plano.',
// Italian
it: 'Sei un assistente utile che analizza e elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni date. Rispondi in modo preciso, strutturato e utile. Rispondi in testo semplice.',
// Dutch
nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord precies, gestructureerd en behulpzaam. Antwoord in platte tekst.',
// Portuguese
pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções dadas. Responda de forma precisa, estruturada e útil. Responda em texto simples.',
// Russian
ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров согласно данным инструкциям. Отвечайте точно, структурированно и полезно. Отвечайте простым текстом.',
// Japanese
ja: 'あなたはテキストを分析・処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の転写を処理することです。正確で構造化された有用な回答をしてください。プレーンテキストで回答してください。',
// Korean
ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화의 전사본을 처리하는 것입니다. 정확하고 구조화되며 도움이 되는 방식으로 응답하세요. 일반 텍스트로 응답하세요.',
// Chinese (Simplified)
zh: '你是一个有用的助手,负责分析和处理文本。你的任务是根据给定的指令处理对话的转录。请准确、结构化、有帮助地回答。请用纯文本回答。',
// Arabic
ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نسخ المحادثات وفقاً للتعليمات المقدمة. أجب بدقة وبطريقة منظمة ومفيدة. أجب بنص عادي.',
// Hindi
hi: 'आप एक उपयोगी सहायक हैं जो पाठों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार बातचीत के प्रतिलेख को संसाधित करना है। सटीक, संरचित और सहायक तरीके से उत्तर दें। सादे पाठ में उत्तर दें।',
// Turkish
tr: 'Metinleri analiz eden ve işleyen yararlı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Kesin, yapılandırılmış ve yararlı şekilde yanıt verin. Düz metin olarak yanıt verin.',
// Polish
pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj precyzyjnie, uporządkowanie i pomocnie. Odpowiadaj zwykłym tekstem.',
},
};
/**
 * Helper function for retrieving the system prompt for a given language
 * @param language Language (e.g. 'de', 'en', 'fr')
 * @returns System prompt for the given language, or a fallback
 */
export function getSystemPrompt(language) {
  const lang = language.toLowerCase().split('-')[0]; // e.g. 'de-DE' -> 'de'
  // Try the specific language, then German, then English, then the first available one
return (
SYSTEM_PROMPTS.system[lang] ||
SYSTEM_PROMPTS.system['de'] ||
SYSTEM_PROMPTS.system['en'] ||
Object.values(SYSTEM_PROMPTS.system)[0] ||
'You are a helpful AI assistant.'
);
}
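The fallback chain in `getSystemPrompt` (requested language, then German, then English, then the first available entry, then a hard-coded default) can be sketched in isolation. The `prompts` map below is a hypothetical stand-in for the full `SYSTEM_PROMPTS.system` object:

```typescript
// Minimal sketch of the language fallback, assuming a reduced prompt map.
// `prompts` and `resolvePrompt` are illustrative names, not from the service.
const prompts: Record<string, string> = {
  de: 'Du bist ein hilfreicher Assistent.',
  en: 'You are a helpful assistant.',
};

function resolvePrompt(language: string): string {
  const lang = language.toLowerCase().split('-')[0]; // 'de-DE' -> 'de'
  return (
    prompts[lang] ||
    prompts['de'] ||
    prompts['en'] ||
    Object.values(prompts)[0] ||
    'You are a helpful AI assistant.'
  );
}

console.log(resolvePrompt('en-US')); // 'You are a helpful assistant.'
console.log(resolvePrompt('fr')); // unknown language falls back to the German entry
```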


@@ -0,0 +1,138 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient } from '@supabase/supabase-js';
import { AiService } from '../ai.service';
import { AI_PRESETS } from '../ai-model.config';
import { getTranscriptText } from '../shared/transcript-utils';
import { UserPromptService } from '../shared/user-prompt.service';
@Injectable()
export class MemoryService {
private readonly logger = new Logger(MemoryService.name);
private readonly supabaseUrl: string;
private readonly supabaseServiceKey: string;
constructor(
private aiService: AiService,
private userPromptService: UserPromptService,
private configService: ConfigService
) {
this.supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL', '');
this.supabaseServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY', '');
}
/**
 * Creates a memory for a memo using a specific prompt.
 * Replicates the create-memory edge function.
 */
async createMemory(
memoId: string,
promptId: string
): Promise<{ memoryId: string; title: string; content: string }> {
const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey);
// Load the memo
const { data: memo, error: memoError } = await supabase
.from('memos')
.select('*')
.eq('id', memoId)
.single();
if (memoError || !memo) {
throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`);
}
// Load the prompt
const { data: prompt, error: promptError } = await supabase
.from('prompts')
.select('*')
.eq('id', promptId)
.single();
if (promptError || !prompt) {
throw new Error(`Prompt not found: ${promptError?.message || 'unknown'}`);
}
// Extract the transcript
const transcript = getTranscriptText(memo);
if (!transcript) {
throw new Error('No transcript found in memo');
}
// Determine the language
const primaryLanguage = memo.source?.primary_language || memo.source?.languages?.[0];
const baseLang = primaryLanguage ? primaryLanguage.split('-')[0].toLowerCase() : 'de';
// Extract the prompt text (multilingual)
let promptText = this.getLocalizedText(prompt.prompt_text, baseLang);
if (!promptText) {
throw new Error(`No prompt text found for prompt ${promptId}`);
}
// Prepend the system pre-prompt (user-specific or default)
const prePrompt = await this.userPromptService.getSystemPromptForMemo(memo.user_id, baseLang);
if (prePrompt) {
promptText = `${prePrompt}\n\n${promptText}`;
}
// Extract the memory title
const memoryTitle = this.getLocalizedText(prompt.memory_title, baseLang) || 'Memory';
// Assemble the prompt with the transcript
const fullPrompt = promptText.includes('{transcript}')
? promptText.replace('{transcript}', transcript)
: `${promptText}\n\nText: ${transcript}`;
// Generate the AI response
const answer = await this.aiService.generateText(fullPrompt, AI_PRESETS.memory);
if (!answer) {
throw new Error('No response from AI');
}
// Determine the sort order
const { data: maxSortData } = await supabase
.from('memories')
.select('sort_order')
.eq('memo_id', memoId)
.order('sort_order', { ascending: false })
.limit(1)
.single();
const nextSortOrder = maxSortData?.sort_order
? maxSortData.sort_order + 1
: Math.floor(Math.random() * 5000) + 5000;
// Save the memory
const { data: newMemory, error: insertError } = await supabase
.from('memories')
.insert({
memo_id: memoId,
title: memoryTitle,
content: answer,
media: null,
sort_order: nextSortOrder,
metadata: {
type: 'manual_prompt',
prompt_id: promptId,
created_by: 'ai_memory_service',
},
})
.select()
.single();
if (insertError) {
throw new Error(`Failed to create memory: ${insertError.message}`);
}
this.logger.log(`Memory created: ${newMemory.id} for memo ${memoId} (prompt: ${promptId})`);
return { memoryId: newMemory.id, title: memoryTitle, content: answer };
}
private getLocalizedText(textObj: any, lang: string): string {
if (!textObj || typeof textObj !== 'object') return '';
return (
textObj[lang] || textObj['de'] || textObj['en'] || (Object.values(textObj)[0] as string) || ''
);
}
}
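One detail of `createMemory` worth calling out: the prompt template may contain a `{transcript}` placeholder, which is substituted if present; otherwise the transcript is appended after the prompt. A self-contained sketch of that decision (the helper name `buildFullPrompt` is ours, not from the service):

```typescript
// Sketch of the prompt assembly used in createMemory. Note that
// String.prototype.replace with a string pattern replaces only the
// FIRST occurrence of '{transcript}'.
function buildFullPrompt(promptText: string, transcript: string): string {
  return promptText.includes('{transcript}')
    ? promptText.replace('{transcript}', transcript)
    : `${promptText}\n\nText: ${transcript}`;
}

console.log(buildFullPrompt('Summarize: {transcript}', 'Hello world'));
// 'Summarize: Hello world'
```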


@@ -0,0 +1,195 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient } from '@supabase/supabase-js';
import { AiService } from '../ai.service';
import { AI_PRESETS } from '../ai-model.config';
import { UserPromptService } from '../shared/user-prompt.service';
@Injectable()
export class QuestionService {
private readonly logger = new Logger(QuestionService.name);
private readonly supabaseUrl: string;
private readonly supabaseServiceKey: string;
constructor(
private aiService: AiService,
private userPromptService: UserPromptService,
private configService: ConfigService
) {
this.supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL', '');
this.supabaseServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY', '');
}
/**
 * Answers a question about a memo and stores the answer as a memory.
 * Replicates the question-memo edge function.
 */
async askQuestion(
memoId: string,
question: string
): Promise<{ memoryId: string; question: string; answer: string }> {
const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey);
// Load the memo
const { data: memo, error: memoError } = await supabase
.from('memos')
.select('*')
.eq('id', memoId)
.single();
if (memoError || !memo) {
throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`);
}
// Extract context information
const contextInfo = this.extractContextInfo(memo.source, memo.metadata);
if (!contextInfo.transcript) {
throw new Error('No transcript found in memo');
}
// Determine the language
const primaryLanguage = memo.source?.primary_language || memo.source?.languages?.[0];
const baseLang = primaryLanguage ? primaryLanguage.split('-')[0].toLowerCase() : 'de';
// Load the system prompt (user-specific or default)
const prePrompt = await this.userPromptService.getSystemPromptForMemo(memo.user_id, baseLang);
// Assemble the prompt
const prompt = this.buildQuestionPrompt(question, contextInfo, prePrompt);
// Generate the AI response
const answer = await this.aiService.generateText(prompt, AI_PRESETS.memory);
if (!answer) {
throw new Error('No response from AI');
}
// Determine the sort order (Q&A range: 200-299)
const { data: maxSortData } = await supabase
.from('memories')
.select('sort_order')
.eq('memo_id', memoId)
.order('sort_order', { ascending: false })
.limit(1)
.single();
const nextSortOrder = maxSortData?.sort_order ? maxSortData.sort_order + 1 : 200;
// Save the memory
const { data: newMemory, error: insertError } = await supabase
.from('memories')
.insert({
memo_id: memoId,
title: question,
content: answer,
media: null,
sort_order: nextSortOrder,
metadata: {
type: 'question',
question,
created_by: 'ai_question_service',
},
})
.select()
.single();
if (insertError) {
throw new Error(`Failed to create memory: ${insertError.message}`);
}
this.logger.log(`Question answered for memo ${memoId}: "${question.substring(0, 50)}..."`);
return { memoryId: newMemory.id, question, answer };
}
private buildQuestionPrompt(question: string, contextInfo: any, prePrompt: string): string {
const contextParts: string[] = [];
if (contextInfo.locationName) {
contextParts.push(`Aufnahmeort: ${contextInfo.locationName}`);
} else if (contextInfo.locationAddress) {
contextParts.push(`Aufnahmeort: ${contextInfo.locationAddress}`);
}
const statsInfo: string[] = [];
if (contextInfo.hasMultipleSpeakers) {
statsInfo.push(`${contextInfo.speakerCount} Sprecher`);
}
statsInfo.push(`${Math.round(contextInfo.duration)}s Dauer`);
if (contextInfo.wordCount) {
statsInfo.push(`${contextInfo.wordCount} Wörter`);
}
contextParts.push(`Audio-Info: ${statsInfo.join(', ')}`);
const contextFooter =
contextParts.length > 0
? `\n\nZusätzliche Kontext-Informationen:\n${contextParts.join('\n')}`
: '';
const userPrompt = `Frage: ${question}\n\nTranskript:\n${contextInfo.transcript}${contextFooter}\n\n${contextInfo.hasMultipleSpeakers ? 'Du kannst bei Bedarf auf spezifische Sprecher verweisen.' : ''}`;
return prePrompt ? `${prePrompt}\n\n${userPrompt}` : userPrompt;
}
private extractContextInfo(source: any, metadata: any = {}): any {
const transcript = this.formatTranscriptWithSpeakers(source);
let speakerCount = 0;
let totalDuration = 0;
const language = source?.primary_language || source?.languages?.[0] || 'unbekannt';
if (source?.type === 'combined' && source?.additional_recordings) {
const allSpeakers = new Set<string>();
for (const rec of source.additional_recordings) {
if (rec.speakers) {
Object.keys(rec.speakers).forEach((id) => allSpeakers.add(id));
}
if (rec.duration) totalDuration += rec.duration;
}
speakerCount = allSpeakers.size;
totalDuration = source.duration || totalDuration;
} else {
speakerCount = source?.speakers ? Object.keys(source.speakers).length : 0;
totalDuration = source?.duration || 0;
}
return {
transcript,
duration: metadata?.stats?.audioDuration || totalDuration,
speakerCount,
wordCount: metadata?.stats?.wordCount || null,
language,
locationName: metadata?.location?.address?.name || null,
locationAddress: metadata?.location?.address?.formattedAddress || null,
hasMultipleSpeakers: speakerCount > 1,
hasLocation: !!(
metadata?.location?.address?.name || metadata?.location?.address?.formattedAddress
),
};
}
private formatTranscriptWithSpeakers(source: any): string {
if (source?.type === 'combined' && source?.additional_recordings?.length > 0) {
const transcripts = source.additional_recordings
.map((rec: any) => {
if (rec.utterances?.length > 0) {
return rec.speakers
? rec.utterances
.map((u: any) => `${rec.speakers[u.speakerId] || u.speakerId}: ${u.text}`)
.join('\n')
: rec.utterances.map((u: any) => u.text).join(' ');
}
return rec.transcript || rec.content || rec.transcription || '';
})
.filter(Boolean);
if (transcripts.length > 0) return transcripts.join('\n\n--- Nächstes Memo ---\n\n');
}
if (source?.utterances?.length > 0 && source?.speakers) {
return source.utterances
.map((u: any) => `${source.speakers[u.speakerId] || u.speakerId}: ${u.text}`)
.join('\n');
}
return source?.transcript || source?.content || source?.transcription || '';
}
}
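The speaker labelling performed by `formatTranscriptWithSpeakers` for a single recording maps each utterance's `speakerId` through the `speakers` name table, falling back to the raw id when no name is known. A minimal sketch, with simplified types as an assumption (the real `source` shape is looser):

```typescript
// Sketch of the single-recording branch of formatTranscriptWithSpeakers.
// `labelUtterances` is an illustrative helper name, not from the service.
interface Utterance {
  speakerId: string;
  text: string;
}

function labelUtterances(
  utterances: Utterance[],
  speakers: Record<string, string>
): string {
  return utterances
    .map((u) => `${speakers[u.speakerId] || u.speakerId}: ${u.text}`)
    .join('\n');
}

const labelled = labelUtterances(
  [
    { speakerId: 's1', text: 'Hi' },
    { speakerId: 's2', text: 'Hello' },
  ],
  { s1: 'Alice' } // s2 has no display name, so the raw id is used
);
console.log(labelled); // 'Alice: Hi\ns2: Hello'
```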


@@ -0,0 +1,199 @@
/**
 * Root system prompts for all edge functions
 *
 * These prompts are used as the base for all text analysis and processing functions.
 * Each language has its own prompt that accounts for its specific requirements.
 */
export const ROOT_SYSTEM_PROMPTS = {
PRE_PROMPT: {
// German
de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte in Markdown mit einem schönen Format. Nutze keine Tabellen und keinen Code in Markdown. Antworte präzise, strukturiert und hilfreich.',
// English
en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process conversation transcripts according to the given instructions. Respond in Markdown with a nice format. Do not use tables or code in Markdown. Respond precisely, structured, and helpfully.',
// French
fr: "Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez en Markdown avec un beau format. N'utilisez pas de tableaux ou de code en Markdown. Répondez de manière précise, structurée et utile.",
// Spanish
es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde en Markdown con un formato atractivo. No uses tablas o código en Markdown. Responde de manera precisa, estructurada y útil.',
// Italian
it: 'Sei un assistente utile che analizza ed elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni fornite. Rispondi in Markdown con un bel formato. Non usare tabelle o codice in Markdown. Rispondi in modo preciso, strutturato e utile.',
// Dutch
nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord in Markdown met een mooi formaat. Gebruik geen tabellen of code in Markdown. Antwoord precies, gestructureerd en behulpzaam.',
// Portuguese
pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções fornecidas. Responda em Markdown com um formato bonito. Não use tabelas ou código em Markdown. Responda de forma precisa, estruturada e útil.',
// Russian
ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров в соответствии с данными инструкциями. Отвечайте в Markdown с красивым форматированием. Не используйте таблицы или код в Markdown. Отвечайте точно, структурированно и полезно.',
// Japanese
ja: 'あなたはテキストを分析し処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の文字起こしを処理することです。Markdownで美しいフォーマットで回答してください。Markdownでテーブルやコードを使用しないでください。正確で、構造化され、役立つように回答してください。',
// Korean
ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화 녹취록을 처리하는 것입니다. 멋진 형식의 Markdown으로 응답하세요. Markdown에서 표나 코드를 사용하지 마세요. 정확하고 구조화되며 도움이 되도록 응답하세요.',
// Chinese
zh: '你是一个有用的助手分析和处理文本。你的任务是根据给定的指示处理对话记录。以优美的Markdown格式回复。不要在Markdown中使用表格或代码。回复要准确、有条理、有帮助。',
// Arabic
ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نصوص المحادثات وفقًا للتعليمات المعطاة. أجب بتنسيق Markdown جميل. لا تستخدم الجداول أو الكود في Markdown. أجب بدقة وبشكل منظم ومفيد.',
// Hindi
hi: 'आप एक सहायक सहायक हैं जो ग्रंथों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार वार्तालाप प्रतिलेखों को संसाधित करना है। एक अच्छे प्रारूप के साथ Markdown में उत्तर दें। Markdown में तालिकाओं या कोड का उपयोग न करें। सटीक, संरचित और सहायक रूप से उत्तर दें।',
// Turkish
tr: "Metinleri analiz eden ve işleyen yardımcı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Güzel bir formatla Markdown'da yanıt verin. Markdown'da tablo veya kod kullanmayın. Kesin, yapılandırılmış ve yararlı bir şekilde yanıt verin.",
// Polish
pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj w Markdown z ładnym formatowaniem. Nie używaj tabel ani kodu w Markdown. Odpowiadaj precyzyjnie, strukturalnie i pomocnie.',
// Danish
da: 'Du er en hjælpsom assistent, der analyserer og behandler tekster. Din opgave er at behandle samtaleudskrifter i henhold til de givne instruktioner. Svar i Markdown med et pænt format. Brug ikke tabeller eller kode i Markdown. Svar præcist, struktureret og hjælpsomt.',
// Swedish
sv: 'Du är en hjälpsam assistent som analyserar och bearbetar texter. Din uppgift är att bearbeta samtalstranskriptioner enligt givna instruktioner. Svara i Markdown med ett snyggt format. Använd inte tabeller eller kod i Markdown. Svara exakt, strukturerat och hjälpsamt.',
// Norwegian
nb: 'Du er en hjelpsom assistent som analyserer og behandler tekster. Din oppgave er å behandle samtaletranskripsjoner i henhold til gitte instruksjoner. Svar i Markdown med et pent format. Ikke bruk tabeller eller kode i Markdown. Svar presist, strukturert og hjelpsomt.',
// Finnish
fi: 'Olet hyödyllinen avustaja, joka analysoi ja käsittelee tekstejä. Tehtäväsi on käsitellä keskustelulitterointeja annettujen ohjeiden mukaisesti. Vastaa Markdownissa kauniilla muotoilulla. Älä käytä taulukoita tai koodia Markdownissa. Vastaa tarkasti, jäsennellysti ja avuliaasti.',
// Czech
cs: 'Jste užitečný asistent, který analyzuje a zpracovává texty. Vaším úkolem je zpracovávat přepisy konverzací podle daných pokynů. Odpovězte v Markdownu s pěkným formátováním. Nepoužívejte tabulky nebo kód v Markdownu. Odpovězte přesně, strukturovaně a užitečně.',
// Hungarian
hu: 'Ön egy hasznos asszisztens, aki szövegeket elemez és dolgoz fel. Az Ön feladata a beszélgetések átiratainak feldolgozása a megadott utasítások szerint. Válaszoljon Markdownban szép formázással. Ne használjon táblázatokat vagy kódot a Markdownban. Válaszoljon pontosan, strukturáltan és hasznossan.',
// Greek
el: 'Είστε ένας χρήσιμος βοηθός που αναλύει και επεξεργάζεται κείμενα. Το καθήκον σας είναι να επεξεργάζεστε μεταγραφές συνομιλιών σύμφωνα με τις δοθείσες οδηγίες. Απαντήστε σε Markdown με όμορφη μορφοποίηση. Μην χρησιμοποιείτε πίνακες ή κώδικα στο Markdown. Απαντήστε με ακρίβεια, δομημένα και χρήσιμα.',
// Hebrew
he: 'אתה עוזר מועיל שמנתח ומעבד טקסטים. המשימה שלך היא לעבד תמלילי שיחות בהתאם להוראות שניתנו. הגב ב-Markdown עם עיצוב יפה. אל תשתמש בטבלאות או קוד ב-Markdown. הגב בצורה מדויקת, מובנית ומועילה.',
// Indonesian
id: 'Anda adalah asisten yang membantu menganalisis dan memproses teks. Tugas Anda adalah memproses transkrip percakapan sesuai dengan instruksi yang diberikan. Tanggapi dalam Markdown dengan format yang bagus. Jangan gunakan tabel atau kode dalam Markdown. Tanggapi dengan tepat, terstruktur, dan bermanfaat.',
// Thai
th: 'คุณเป็นผู้ช่วยที่มีประโยชน์ที่วิเคราะห์และประมวลผลข้อความ งานของคุณคือประมวลผลบทสนทนาตามคำแนะนำที่กำหนด ตอบกลับใน Markdown ด้วยรูปแบบที่สวยงาม อย่าใช้ตารางหรือโค้ดใน Markdown ตอบกลับอย่างแม่นยำ มีโครงสร้าง และเป็นประโยชน์',
// Vietnamese
vi: 'Bạn là một trợ lý hữu ích phân tích và xử lý văn bản. Nhiệm vụ của bạn là xử lý bản ghi cuộc trò chuyện theo hướng dẫn đã cho. Trả lời bằng Markdown với định dạng đẹp. Không sử dụng bảng hoặc mã trong Markdown. Trả lời chính xác, có cấu trúc và hữu ích.',
// Ukrainian
uk: 'Ви корисний помічник, який аналізує та обробляє тексти. Ваше завдання - обробляти розшифровки розмов відповідно до наданих інструкцій. Відповідайте в Markdown з гарним форматуванням. Не використовуйте таблиці або код у Markdown. Відповідайте точно, структуровано та корисно.',
// Romanian
ro: 'Sunteți un asistent util care analizează și procesează texte. Sarcina dvs. este să procesați transcrierile conversațiilor conform instrucțiunilor date. Răspundeți în Markdown cu un format frumos. Nu utilizați tabele sau cod în Markdown. Răspundeți precis, structurat și util.',
// Bulgarian
bg: 'Вие сте полезен асистент, който анализира и обработва текстове. Вашата задача е да обработвате транскрипции на разговори според дадените инструкции. Отговорете в Markdown с красив формат. Не използвайте таблици или код в Markdown. Отговорете точно, структурирано и полезно.',
// Catalan
ca: 'Ets un assistent útil que analitza i processa textos. La teva tasca és processar transcripcions de converses segons les instruccions donades. Respon en Markdown amb un format bonic. No utilitzis taules o codi en Markdown. Respon de manera precisa, estructurada i útil.',
// Croatian
hr: 'Vi ste korisni asistent koji analizira i obrađuje tekstove. Vaš zadatak je obraditi transkripcije razgovora prema danim uputama. Odgovorite u Markdownu s lijepim formatom. Ne koristite tablice ili kod u Markdownu. Odgovorite precizno, strukturirano i korisno.',
// Slovak
sk: 'Ste užitočný asistent, ktorý analyzuje a spracováva texty. Vašou úlohou je spracovávať prepisy konverzácií podľa daných pokynov. Odpovedzte v Markdowne s pekným formátovaním. Nepoužívajte tabuľky alebo kód v Markdowne. Odpovedzte presne, štruktúrovane a užitočne.',
// Estonian
et: 'Olete kasulik assistent, kes analüüsib ja töötleb tekste. Teie ülesanne on töödelda vestluste ärakirju vastavalt antud juhistele. Vastake Markdownis ilusa vorminguga. Ärge kasutage Markdownis tabeleid ega koodi. Vastake täpselt, struktureeritult ja kasulikult.',
// Latvian
lv: 'Jūs esat noderīgs asistents, kas analizē un apstrādā tekstus. Jūsu uzdevums ir apstrādāt sarunu atšifrējumus saskaņā ar dotajiem norādījumiem. Atbildiet Markdown ar skaistu formatējumu. Neizmantojiet tabulas vai kodu Markdown. Atbildiet precīzi, strukturēti un noderīgi.',
// Lithuanian
lt: 'Esate naudingas asistentas, kuris analizuoja ir apdoroja tekstus. Jūsų užduotis yra apdoroti pokalbių stenogramas pagal pateiktas instrukcijas. Atsakykite Markdown su gražiu formatavimu. Nenaudokite lentelių ar kodo Markdown. Atsakykite tiksliai, struktūrizuotai ir naudingai.',
// Bengali
bn: 'আপনি একজন সহায়ক সহকারী যিনি পাঠ্য বিশ্লেষণ এবং প্রক্রিয়া করেন। আপনার কাজ হল প্রদত্ত নির্দেশাবলী অনুসারে কথোপকথনের প্রতিলিপি প্রক্রিয়া করা। সুন্দর বিন্যাসের সাথে Markdown-এ উত্তর দিন। Markdown-এ টেবিল বা কোড ব্যবহার করবেন না। সুনির্দিষ্ট, কাঠামোগত এবং সহায়কভাবে উত্তর দিন।',
// Malay
ms: 'Anda adalah pembantu berguna yang menganalisis dan memproses teks. Tugas anda adalah memproses transkrip perbualan mengikut arahan yang diberikan. Balas dalam Markdown dengan format yang cantik. Jangan gunakan jadual atau kod dalam Markdown. Balas dengan tepat, berstruktur dan berguna.',
// Tamil
ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து செயலாக்கும் பயனுள்ள உதவியாளர். கொடுக்கப்பட்ட அறிவுறுத்தல்களின்படி உரையாடல் படியெடுப்புகளை செயலாக்குவது உங்கள் பணி. அழகான வடிவத்துடன் Markdown இல் பதிலளிக்கவும். Markdown இல் அட்டவணைகள் அல்லது குறியீட்டைப் பயன்படுத்த வேண்டாம். துல்லியமாக, கட்டமைக்கப்பட்ட மற்றும் பயனுள்ள வகையில் பதிலளிக்கவும்.',
// Telugu
te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి ప్రాసెస్ చేసే సహాయక అసిస్టెంట్. ఇచ్చిన సూచనల ప్రకారం సంభాషణ ట్రాన్స్‌క్రిప్ట్‌లను ప్రాసెస్ చేయడం మీ పని. అందమైన ఫార్మాట్‌తో Markdown లో స్పందించండి. Markdown లో పట్టికలు లేదా కోడ్ ఉపయోగించవద్దు. ఖచ్చితంగా, నిర్మాణాత్మకంగా మరియు సహాయకరంగా స్పందించండి.',
// Urdu
ur: 'آپ ایک مددگار معاون ہیں جو متن کا تجزیہ اور عمل کرتے ہیں۔ آپ کا کام دی گئی ہدایات کے مطابق گفتگو کی نقلیں پروسیس کرنا ہے۔ خوبصورت فارمیٹ کے ساتھ Markdown میں جواب دیں۔ Markdown میں ٹیبلز یا کوڈ استعمال نہ کریں۔ درست، منظم اور مددگار طریقے سے جواب دیں۔',
// Marathi
mr: 'तुम्ही एक उपयुक्त सहाय्यक आहात जो मजकूरांचे विश्लेषण आणि प्रक्रिया करतो. दिलेल्या सूचनांनुसार संभाषण प्रतिलेखनांवर प्रक्रिया करणे हे तुमचे कार्य आहे. सुंदर स्वरूपासह Markdown मध्ये उत्तर द्या. Markdown मध्ये सारण्या किंवा कोड वापरू नका. अचूक, संरचित आणि उपयुक्त पद्धतीने उत्तर द्या.',
// Gujarati
gu: 'તમે એક મદદરૂપ સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને પ્રક્રિયા કરે છે. આપેલી સૂચનાઓ અનુસાર વાતચીતની ટ્રાન્સક્રિપ્ટ્સ પર પ્રક્રિયા કરવી એ તમારું કામ છે. સુંદર ફોર્મેટ સાથે Markdown માં જવાબ આપો. Markdown માં કોષ્ટકો અથવા કોડનો ઉપયોગ કરશો નહીં. ચોક્કસ, સંરચિત અને મદદરૂપ રીતે જવાબ આપો.',
// Malayalam
ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും പ്രോസസ്സ് ചെയ്യുകയും ചെയ്യുന്ന സഹായകരമായ സഹായിയാണ്. നൽകിയിരിക്കുന്ന നിർദ്ദേശങ്ങൾ അനുസരിച്ച് സംഭാഷണ ട്രാൻസ്ക്രിപ്റ്റുകൾ പ്രോസസ്സ് ചെയ്യുക എന്നതാണ് നിങ്ങളുടെ ജോലി. മനോഹരമായ ഫോർമാറ്റിൽ Markdown ൽ പ്രതികരിക്കുക. Markdown ൽ ടേബിളുകളോ കോഡോ ഉപയോഗിക്കരുത്. കൃത്യമായും ഘടനാപരമായും സഹായകരമായും പ്രതികരിക്കുക.',
// Kannada
kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವ ಸಹಾಯಕ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ನೀಡಿದ ಸೂಚನೆಗಳ ಪ್ರಕಾರ ಸಂಭಾಷಣೆ ಪ್ರತಿಲಿಪಿಗಳನ್ನು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ. ಸುಂದರ ಸ್ವರೂಪದೊಂದಿಗೆ Markdown ನಲ್ಲಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ. Markdown ನಲ್ಲಿ ಕೋಷ್ಟಕಗಳು ಅಥವಾ ಕೋಡ್ ಬಳಸಬೇಡಿ. ನಿಖರವಾಗಿ, ರಚನಾತ್ಮಕವಾಗಿ ಮತ್ತು ಸಹಾಯಕವಾಗಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ.',
// Punjabi
pa: 'ਤੁਸੀਂ ਇੱਕ ਮਦਦਗਾਰ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਪ੍ਰਕਿਰਿਆ ਕਰਦੇ ਹੋ। ਤੁਹਾਡਾ ਕੰਮ ਦਿੱਤੀਆਂ ਹਦਾਇਤਾਂ ਅਨੁਸਾਰ ਗੱਲਬਾਤ ਦੀਆਂ ਨਕਲਾਂ ਨੂੰ ਪ੍ਰਕਿਰਿਆ ਕਰਨਾ ਹੈ। ਸੁੰਦਰ ਫਾਰਮੈਟ ਨਾਲ Markdown ਵਿੱਚ ਜਵਾਬ ਦਿਓ। Markdown ਵਿੱਚ ਸਾਰਣੀਆਂ ਜਾਂ ਕੋਡ ਦੀ ਵਰਤੋਂ ਨਾ ਕਰੋ। ਸਟੀਕ, ਢਾਂਚਾਗਤ ਅਤੇ ਮਦਦਗਾਰ ਢੰਗ ਨਾਲ ਜਵਾਬ ਦਿਓ।',
// Afrikaans
af: "Jy is 'n nuttige assistent wat tekste ontleed en verwerk. Jou taak is om gespreksafskrifte te verwerk volgens die gegewe instruksies. Antwoord in Markdown met 'n mooi formaat. Moenie tabelle of kode in Markdown gebruik nie. Antwoord presies, gestruktureerd en nuttig.",
// Persian
fa: 'شما یک دستیار مفید هستید که متون را تحلیل و پردازش می‌کند. وظیفه شما پردازش رونوشت‌های مکالمات طبق دستورالعمل‌های داده شده است. با فرمت زیبا در Markdown پاسخ دهید. از جداول یا کد در Markdown استفاده نکنید. به طور دقیق، ساختاریافته و مفید پاسخ دهید.',
// Georgian
ka: 'თქვენ ხართ სასარგებლო ასისტენტი, რომელიც აანალიზებს და ამუშავებს ტექსტებს. თქვენი ამოცანაა საუბრების ჩანაწერების დამუშავება მოცემული ინსტრუქციების შესაბამისად. უპასუხეთ Markdown-ში ლამაზი ფორმატით. არ გამოიყენოთ ცხრილები ან კოდი Markdown-ში. უპასუხეთ ზუსტად, სტრუქტურირებულად და სასარგებლოდ.',
// Icelandic
is: 'Þú ert gagnlegur aðstoðarmaður sem greinir og vinnur úr textum. Verkefni þitt er að vinna úr samtalsskrám samkvæmt gefnum leiðbeiningum. Svaraðu í Markdown með fallegu sniði. Notaðu ekki töflur eða kóða í Markdown. Svaraðu nákvæmlega, skipulega og gagnlega.',
// Albanian
sq: 'Ju jeni një asistent i dobishëm që analizon dhe përpunon tekste. Detyra juaj është të përpunoni transkriptet e bisedave sipas udhëzimeve të dhëna. Përgjigjuni në Markdown me një format të bukur. Mos përdorni tabela ose kod në Markdown. Përgjigjuni saktësisht, të strukturuar dhe të dobishëm.',
// Azerbaijani
az: 'Siz mətnləri təhlil edən və emal edən faydalı köməkçisiniz. Sizin vəzifəniz verilmiş təlimatlara uyğun olaraq söhbət transkriptlərini emal etməkdir. Gözəl formatla Markdown-da cavab verin. Markdown-da cədvəllər və ya kod istifadə etməyin. Dəqiq, strukturlaşdırılmış və faydalı şəkildə cavab verin.',
// Basque
eu: 'Testuak aztertzen eta prozesatzen dituen laguntzaile erabilgarria zara. Zure zeregina elkarrizketen transkripzioak prozesatzea da emandako argibideen arabera. Erantzun Markdownean formatu ederrarekin. Ez erabili taulak edo kodea Markdownean. Erantzun zehatz, egituratuta eta lagungarri.',
// Galician
gl: 'Es un asistente útil que analiza e procesa textos. A túa tarefa é procesar transcricións de conversas segundo as instrucións dadas. Responde en Markdown cun formato bonito. Non uses táboas ou código en Markdown. Responde de forma precisa, estruturada e útil.',
// Kazakh
kk: 'Сіз мәтіндерді талдайтын және өңдейтін пайдалы көмекшісіз. Сіздің міндетіңіз берілген нұсқауларға сәйкес сөйлесу транскрипттерін өңдеу. Әдемі пішіммен Markdown-да жауап беріңіз. Markdown-да кестелер немесе код қолданбаңыз. Дәл, құрылымдалған және пайдалы түрде жауап беріңіз.',
// Macedonian
mk: 'Вие сте корисен асистент кој анализира и обработува текстови. Вашата задача е да обработувате транскрипти на разговори според дадените упатства. Одговорете во Markdown со убав формат. Не користете табели или код во Markdown. Одговорете прецизно, структурирано и корисно.',
// Serbian
sr: 'Ви сте корисни асистент који анализира и обрађује текстове. Ваш задатак је да обрађујете транскрипте разговора према датим упутствима. Одговорите у Markdown-у са лепим форматом. Не користите табеле или код у Markdown-у. Одговорите прецизно, структурисано и корисно.',
// Slovenian
sl: 'Ste koristen pomočnik, ki analizira in obdeluje besedila. Vaša naloga je obdelati prepise pogovorov v skladu z danimi navodili. Odgovorite v Markdownu z lepim formatom. Ne uporabljajte tabel ali kode v Markdownu. Odgovorite natančno, strukturirano in koristno.',
// Maltese
mt: "Inti assistent utli li janalizza u jipproċessa testi. Il-kompitu tiegħek huwa li tipproċessa traskrizzjonijiet ta' konversazzjonijiet skont l-istruzzjonijiet mogħtija. Wieġeb f'Markdown b'format sabiħ. Tużax tabelli jew kodiċi f'Markdown. Wieġeb b'mod preċiż, strutturat u utli.",
// Armenian
hy: 'Դուք օգտակար օգնական եք, որը վերլուծում և մշակում է տեքստեր: Ձեր խնդիրն է մշակել զրույցների արձանագրությունները տրված հրահանգների համաձայն: Պատասխանեք Markdown-ում գեղեցիկ ձևաչափով: Մի օգտագործեք աղյուսակներ կամ կոդ Markdown-ում: Պատասխանեք ճշգրիտ, կառուցվածքային և օգտակար:',
// Uzbek
uz: "Siz matnlarni tahlil qiluvchi va qayta ishlovchi foydali yordamchisiz. Sizning vazifangiz berilgan ko'rsatmalarga muvofiq suhbat transkriptlarini qayta ishlashdir. Chiroyli formatda Markdown-da javob bering. Markdown-da jadvallar yoki koddan foydalanmang. Aniq, tuzilgan va foydali tarzda javob bering.",
// Irish
ga: 'Is cúntóir cabhrach thú a dhéanann anailís agus próiseáil ar théacsanna. Is é do thasc tras-scríbhinní comhrá a phróiseáil de réir na dtreoracha a thugtar. Freagair i Markdown le formáid álainn. Ná húsáid táblaí ná cód i Markdown. Freagair go beacht, struchtúrtha agus cabhrach.',
// Welsh
cy: "Rydych chi'n gynorthwyydd defnyddiol sy'n dadansoddi ac yn prosesu testunau. Eich tasg yw prosesu trawsgrifiadau sgwrs yn ôl y cyfarwyddiadau a roddir. Atebwch yn Markdown gyda fformat hardd. Peidiwch â defnyddio tablau na chod yn Markdown. Atebwch yn fanwl gywir, wedi'i strwythuro ac yn ddefnyddiol.",
// Filipino
fil: 'Ikaw ay isang kapaki-pakinabang na katulong na nag-aanalisa at nagpoproseso ng mga teksto. Ang iyong gawain ay iproseso ang mga transkripsyon ng pag-uusap ayon sa mga ibinigay na tagubilin. Tumugon sa Markdown na may magandang format. Huwag gumamit ng mga talahanayan o code sa Markdown. Tumugon nang tumpak, nakaayos, at nakakatulong.',
},
};


@@ -0,0 +1,81 @@
/**
* Shared utility functions for handling transcript generation from utterances
* Used across multiple edge functions
*/
/**
* Generate a plain text transcript from utterances array
* @param utterances - Array of utterance objects with text property
* @returns Plain text transcript string
*/
export function generateTranscriptFromUtterances(
utterances?: Array<{
text: string;
speakerId?: string;
offset?: number;
duration?: number;
}> | null
): string {
if (!utterances || !Array.isArray(utterances) || utterances.length === 0) {
return '';
}
// Sort utterances by offset if available
const sortedUtterances = [...utterances].sort((a, b) => {
const offsetA = a.offset || 0;
const offsetB = b.offset || 0;
return offsetA - offsetB;
});
// Concatenate all utterance texts with spaces
return sortedUtterances
.map((utterance) => utterance.text)
.filter((text) => text && text.trim() !== '')
.join(' ');
}
/**
* Get transcript text from memo (generates from utterances or returns legacy transcript)
* @param memo - The memo object
* @returns The transcript text
*/
export function getTranscriptText(memo: any): string {
// If utterances exist, generate transcript from them
if (
memo?.source?.utterances &&
Array.isArray(memo.source.utterances) &&
memo.source.utterances.length > 0
) {
return generateTranscriptFromUtterances(memo.source.utterances);
}
// Fall back to legacy transcript fields for backward compatibility
return (
memo?.transcript ||
memo?.source?.transcript ||
memo?.source?.content ||
memo?.source?.transcription ||
memo?.source?.text ||
memo?.metadata?.transcript ||
''
);
}
/**
* Get transcript from additional recording
* @param recording - The additional recording object
* @returns The transcript text
*/
export function getRecordingTranscript(recording: any): string {
// If utterances exist, generate transcript from them
if (
recording?.utterances &&
Array.isArray(recording.utterances) &&
recording.utterances.length > 0
) {
return generateTranscriptFromUtterances(recording.utterances);
}
// Fall back to transcript field
return recording?.transcript || '';
}
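A minimal standalone sketch of how the helpers above behave, with the function body mirrored from the source and made-up sample utterances (the data is illustrative only):

```typescript
type Utterance = { text: string; speakerId?: string; offset?: number; duration?: number };

// Mirrors generateTranscriptFromUtterances above: sort by offset, drop blank
// entries, and join the remaining texts with single spaces.
function generateTranscriptFromUtterances(utterances?: Utterance[] | null): string {
  if (!utterances || !Array.isArray(utterances) || utterances.length === 0) {
    return '';
  }
  const sorted = [...utterances].sort((a, b) => (a.offset || 0) - (b.offset || 0));
  return sorted
    .map((u) => u.text)
    .filter((t) => t && t.trim() !== '')
    .join(' ');
}

// Sample input: out-of-order offsets and a whitespace-only entry.
const utterances: Utterance[] = [
  { text: 'world', offset: 1200 },
  { text: 'Hello', offset: 0 },
  { text: '   ', offset: 600 }, // whitespace-only entries are dropped
];

console.log(generateTranscriptFromUtterances(utterances)); // "Hello world"
```

Sorting a copy (`[...utterances]`) keeps the caller's array untouched, which matters when the same utterances are rendered elsewhere in offset order.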

View file

@ -0,0 +1,49 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient } from '@supabase/supabase-js';
import { ROOT_SYSTEM_PROMPTS } from './system-prompts';
@Injectable()
export class UserPromptService {
private readonly logger = new Logger(UserPromptService.name);
private readonly supabaseUrl: string;
private readonly supabaseServiceKey: string;
constructor(private configService: ConfigService) {
this.supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL', '');
this.supabaseServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY', '');
}
/**
   * Returns the system prompt for a user.
   * If the user has defined a custom one, that prompt is used.
   * Otherwise the default PRE_PROMPT in the requested language is returned.
   */
async getSystemPrompt(userId: string, language = 'de'): Promise<string> {
try {
const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey);
const { data: user, error } = await supabase
.from('users')
.select('app_settings')
.eq('id', userId)
.single();
if (!error && user?.app_settings?.memoro?.systemPrompt) {
this.logger.debug(`Using custom system prompt for user ${userId}`);
return user.app_settings.memoro.systemPrompt;
}
} catch (err) {
this.logger.warn(`Failed to load user system prompt, using default: ${err}`);
}
const baseLang = language.split('-')[0].toLowerCase();
return ROOT_SYSTEM_PROMPTS.PRE_PROMPT[baseLang] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de'];
}
/**
   * Returns the system prompt for the owner of a memo.
   */
async getSystemPromptForMemo(memoUserId: string, language = 'de'): Promise<string> {
return this.getSystemPrompt(memoUserId, language);
}
}
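The language fallback in `getSystemPrompt` can be sketched in isolation; `PRE_PROMPTS` here is a stand-in for `ROOT_SYSTEM_PROMPTS.PRE_PROMPT`, with placeholder strings rather than the real prompts:

```typescript
// Stand-in for ROOT_SYSTEM_PROMPTS.PRE_PROMPT (illustrative values only).
const PRE_PROMPTS: Record<string, string> = {
  de: 'Du bist ein hilfreicher Assistent …',
  en: 'You are a helpful assistant …',
};

// Mirrors the fallback logic above: 'en-US' → 'en'; unknown or
// missing languages fall back to the German default.
function resolvePrompt(language = 'de'): string {
  const baseLang = language.split('-')[0].toLowerCase();
  return PRE_PROMPTS[baseLang] || PRE_PROMPTS['de'];
}

console.log(resolvePrompt('en-US')); // English prompt
console.log(resolvePrompt('fr'));    // falls back to the German prompt
```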

View file

@ -0,0 +1,32 @@
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { AuthModule } from './auth/auth.module';
import { AuthProxyModule } from './auth-proxy/auth-proxy.module';
import { SpacesModule } from './spaces/spaces.module';
import { MemoroModule } from './memoro/memoro.module';
import { MeetingsModule } from './meetings/meetings.module';
import { HealthModule } from './health/health.module';
import { CreditsModule } from './credits/credits.module';
import { SettingsModule } from './settings/settings.module';
import { CleanupModule } from './cleanup/cleanup.module';
import { AiModule } from './ai/ai.module';
@Module({
imports: [
ConfigModule.forRoot({
isGlobal: true,
ignoreEnvFile: process.env.NODE_ENV === 'production',
}),
AuthModule,
AuthProxyModule,
SpacesModule,
MemoroModule,
MeetingsModule,
HealthModule,
CreditsModule,
SettingsModule,
CleanupModule,
AiModule,
],
})
export class AppModule {}

View file

@ -0,0 +1,222 @@
import { Test, TestingModule } from '@nestjs/testing';
import { AuthProxyController } from './auth-proxy.controller';
import { AuthProxyService } from './auth-proxy.service';
import { HttpException, HttpStatus } from '@nestjs/common';
describe('AuthProxyController', () => {
let controller: AuthProxyController;
let service: jest.Mocked<AuthProxyService>;
const mockAuthProxyService = {
signin: jest.fn(),
signup: jest.fn(),
googleSignin: jest.fn(),
appleSignin: jest.fn(),
refresh: jest.fn(),
logout: jest.fn(),
forgotPassword: jest.fn(),
validate: jest.fn(),
getCredits: jest.fn(),
getDevices: jest.fn(),
};
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
controllers: [AuthProxyController],
providers: [
{
provide: AuthProxyService,
useValue: mockAuthProxyService,
},
],
}).compile();
controller = module.get<AuthProxyController>(AuthProxyController);
service = module.get(AuthProxyService);
});
afterEach(() => {
jest.clearAllMocks();
});
describe('signin', () => {
it('should call authProxyService.signin with payload', async () => {
const payload = { email: 'test@test.com', password: 'password' };
const expectedResult = { token: 'token', user: { id: '123' } };
mockAuthProxyService.signin.mockResolvedValue(expectedResult);
const result = await controller.signin(payload);
expect(service.signin).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
it('should handle service errors', async () => {
const payload = { email: 'test@test.com', password: 'password' };
const error = new Error('Service error');
mockAuthProxyService.signin.mockRejectedValue(error);
await expect(controller.signin(payload)).rejects.toThrow(error);
});
});
describe('signup', () => {
it('should call authProxyService.signup with payload', async () => {
const payload = { email: 'test@test.com', password: 'password' };
const expectedResult = { user: { id: '123' } };
mockAuthProxyService.signup.mockResolvedValue(expectedResult);
const result = await controller.signup(payload);
expect(service.signup).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('googleSignin', () => {
it('should call authProxyService.googleSignin with payload', async () => {
const payload = { idToken: 'google-token' };
const expectedResult = { token: 'token', user: { id: '123' } };
mockAuthProxyService.googleSignin.mockResolvedValue(expectedResult);
const result = await controller.googleSignin(payload);
expect(service.googleSignin).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('appleSignin', () => {
it('should call authProxyService.appleSignin with payload', async () => {
const payload = { idToken: 'apple-token' };
const expectedResult = { token: 'token', user: { id: '123' } };
mockAuthProxyService.appleSignin.mockResolvedValue(expectedResult);
const result = await controller.appleSignin(payload);
expect(service.appleSignin).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('refresh', () => {
it('should call authProxyService.refresh with payload', async () => {
const payload = { refreshToken: 'refresh-token' };
const expectedResult = { token: 'new-token', refreshToken: 'new-refresh' };
mockAuthProxyService.refresh.mockResolvedValue(expectedResult);
const result = await controller.refresh(payload);
expect(service.refresh).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('logout', () => {
it('should call authProxyService.logout with payload', async () => {
const payload = { token: 'token' };
mockAuthProxyService.logout.mockResolvedValue(undefined);
const result = await controller.logout(payload);
expect(service.logout).toHaveBeenCalledWith(payload);
expect(result).toBeUndefined();
});
it('should have HttpCode 204', async () => {
const metadata = Reflect.getMetadata('__httpCode__', controller.logout);
expect(metadata).toBe(204);
});
});
describe('forgotPassword', () => {
it('should call authProxyService.forgotPassword with payload', async () => {
const payload = { email: 'test@test.com' };
const expectedResult = { message: 'Password reset email sent' };
mockAuthProxyService.forgotPassword.mockResolvedValue(expectedResult);
const result = await controller.forgotPassword(payload);
expect(service.forgotPassword).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('validate', () => {
it('should call authProxyService.validate with payload', async () => {
const payload = { token: 'token' };
const expectedResult = { valid: true, user: { id: '123' } };
mockAuthProxyService.validate.mockResolvedValue(expectedResult);
const result = await controller.validate(payload);
expect(service.validate).toHaveBeenCalledWith(payload);
expect(result).toEqual(expectedResult);
});
});
describe('getCredits', () => {
it('should call authProxyService.getCredits with authorization header', async () => {
const authorization = 'Bearer token';
const expectedResult = { credits: 100 };
mockAuthProxyService.getCredits.mockResolvedValue(expectedResult);
const result = await controller.getCredits(authorization);
expect(service.getCredits).toHaveBeenCalledWith(authorization);
expect(result).toEqual(expectedResult);
});
it('should throw UnauthorizedException when no authorization header', async () => {
await expect(controller.getCredits(undefined)).rejects.toThrow(
new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED)
);
expect(service.getCredits).not.toHaveBeenCalled();
});
it('should throw UnauthorizedException when empty authorization header', async () => {
await expect(controller.getCredits('')).rejects.toThrow(
new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED)
);
expect(service.getCredits).not.toHaveBeenCalled();
});
});
describe('getDevices', () => {
it('should call authProxyService.getDevices with authorization header', async () => {
const authorization = 'Bearer token';
const expectedResult = { devices: [{ id: 'device-1' }] };
mockAuthProxyService.getDevices.mockResolvedValue(expectedResult);
const result = await controller.getDevices(authorization);
expect(service.getDevices).toHaveBeenCalledWith(authorization);
expect(result).toEqual(expectedResult);
});
it('should throw UnauthorizedException when no authorization header', async () => {
await expect(controller.getDevices(undefined)).rejects.toThrow(
new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED)
);
expect(service.getDevices).not.toHaveBeenCalled();
});
it('should throw UnauthorizedException when empty authorization header', async () => {
await expect(controller.getDevices('')).rejects.toThrow(
new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED)
);
expect(service.getDevices).not.toHaveBeenCalled();
});
});
});

View file

@ -0,0 +1,93 @@
import {
Controller,
Post,
Get,
Body,
Headers,
HttpCode,
HttpException,
HttpStatus,
} from '@nestjs/common';
import { AuthProxyService } from './auth-proxy.service';
@Controller('auth')
export class AuthProxyController {
constructor(private readonly authProxyService: AuthProxyService) {}
@Post('signin')
async signin(@Body() payload: any) {
return this.authProxyService.signin(payload);
}
/**
* Signup endpoint
*
* Optional: Include metadata.branding to customize signup email
* If not provided, mana-core uses default branding for the app
*
* Example with custom branding:
* {
* "email": "user@example.com",
* "password": "pass123",
* "deviceInfo": {...},
* "metadata": {
* "branding": {
* "logoUrl": "custom-logo.svg",
* "primaryColor": "#FF5733"
* }
* }
* }
*/
@Post('signup')
async signup(@Body() payload: any) {
return this.authProxyService.signup(payload);
}
@Post('google-signin')
async googleSignin(@Body() payload: any) {
return this.authProxyService.googleSignin(payload);
}
@Post('apple-signin')
async appleSignin(@Body() payload: any) {
return this.authProxyService.appleSignin(payload);
}
@Post('refresh')
async refresh(@Body() payload: any) {
return this.authProxyService.refresh(payload);
}
@Post('logout')
@HttpCode(204)
async logout(@Body() payload: any) {
return this.authProxyService.logout(payload);
}
@Post('forgot-password')
async forgotPassword(@Body() payload: any) {
return this.authProxyService.forgotPassword(payload);
}
@Post('validate')
async validate(@Body() payload: any) {
return this.authProxyService.validate(payload);
}
@Get('credits')
async getCredits(@Headers('authorization') authorization: string) {
if (!authorization) {
throw new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED);
}
return this.authProxyService.getCredits(authorization);
}
// Device management endpoints
@Get('devices')
async getDevices(@Headers('authorization') authorization: string) {
if (!authorization) {
throw new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED);
}
return this.authProxyService.getDevices(authorization);
}
}

View file

@ -0,0 +1,13 @@
import { Module } from '@nestjs/common';
import { HttpModule } from '@nestjs/axios';
import { ConfigModule } from '@nestjs/config';
import { AuthProxyController } from './auth-proxy.controller';
import { AuthProxyService } from './auth-proxy.service';
@Module({
imports: [HttpModule, ConfigModule],
controllers: [AuthProxyController],
providers: [AuthProxyService],
exports: [AuthProxyService],
})
export class AuthProxyModule {}

View file

@ -0,0 +1,400 @@
import { Test, TestingModule } from '@nestjs/testing';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { AuthProxyService } from './auth-proxy.service';
import { of, throwError } from 'rxjs';
import { AxiosResponse, AxiosError } from 'axios';
describe('AuthProxyService', () => {
let service: AuthProxyService;
let httpService: jest.Mocked<HttpService>;
let configService: jest.Mocked<ConfigService>;
const mockHttpService = {
post: jest.fn(),
get: jest.fn(),
};
const mockConfigService = {
get: jest.fn(),
};
const authServiceUrl = 'http://localhost:3000';
const memoroAppId = 'test-app-id';
beforeEach(async () => {
// Reset mocks
mockConfigService.get.mockReset();
mockHttpService.post.mockReset();
mockHttpService.get.mockReset();
// Setup config mock
mockConfigService.get.mockImplementation((key: string, defaultValue?: any) => {
switch (key) {
case 'MANA_SERVICE_URL':
return authServiceUrl;
case 'MEMORO_APP_ID':
return memoroAppId;
default:
return defaultValue;
}
});
const module: TestingModule = await Test.createTestingModule({
providers: [
AuthProxyService,
{
provide: HttpService,
useValue: mockHttpService,
},
{
provide: ConfigService,
useValue: mockConfigService,
},
],
}).compile();
service = module.get<AuthProxyService>(AuthProxyService);
httpService = module.get(HttpService);
configService = module.get(ConfigService);
// Mock console methods to avoid test output noise
jest.spyOn(console, 'log').mockImplementation(() => {});
jest.spyOn(console, 'error').mockImplementation(() => {});
});
afterEach(() => {
jest.clearAllMocks();
});
describe('signin', () => {
it('should forward signin request to auth service', async () => {
const payload = { email: 'test@test.com', password: 'password' };
const expectedResponse = { token: 'token', user: { id: '123' } };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.signin(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/signin?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
it('should handle signin errors', async () => {
const payload = { email: 'test@test.com', password: 'wrong' };
const error: AxiosError = {
response: {
data: { message: 'Invalid credentials' },
status: 401,
statusText: 'Unauthorized',
headers: {},
config: {} as any,
},
config: {} as any,
isAxiosError: true,
toJSON: () => ({}),
name: 'AxiosError',
message: 'Request failed',
};
mockHttpService.post.mockReturnValue(throwError(() => error));
await expect(service.signin(payload)).rejects.toThrow();
});
});
describe('signup', () => {
it('should forward signup request to auth service', async () => {
const payload = { email: 'test@test.com', password: 'password' };
const expectedResponse = { user: { id: '123' } };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 201,
statusText: 'Created',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.signup(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/signup?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
});
describe('googleSignin', () => {
it('should forward google signin request to auth service', async () => {
const payload = { idToken: 'google-token' };
const expectedResponse = { token: 'token', user: { id: '123' } };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.googleSignin(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/google-signin?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
});
describe('appleSignin', () => {
it('should forward apple signin request to auth service', async () => {
const payload = { idToken: 'apple-token' };
const expectedResponse = { token: 'token', user: { id: '123' } };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.appleSignin(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/apple-signin?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
});
describe('refresh', () => {
it('should forward refresh request to auth service with deviceInfo', async () => {
const payload = {
refreshToken: 'refresh-token',
deviceInfo: {
platform: 'ios',
deviceId: 'device-123',
appVersion: '1.0.0',
},
};
const expectedResponse = { token: 'new-token', refreshToken: 'new-refresh' };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.refresh(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/refresh?appId=${memoroAppId}`,
{
refreshToken: 'refresh-token',
appId: memoroAppId,
deviceInfo: payload.deviceInfo,
},
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
it('should throw BadRequestException when deviceInfo is missing', async () => {
const payload = { refreshToken: 'refresh-token' };
await expect(service.refresh(payload)).rejects.toThrow(
'Device info is required for token refresh'
);
expect(httpService.post).not.toHaveBeenCalled();
});
it('should throw BadRequestException when refreshToken is missing', async () => {
const payload = {
deviceInfo: {
platform: 'ios',
deviceId: 'device-123',
},
};
await expect(service.refresh(payload)).rejects.toThrow('Refresh token is required');
expect(httpService.post).not.toHaveBeenCalled();
});
});
describe('logout', () => {
it('should forward logout request to auth service', async () => {
const payload = { token: 'token' };
const axiosResponse: AxiosResponse = {
data: null,
status: 204,
statusText: 'No Content',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.logout(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/logout?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toBeNull();
});
});
describe('forgotPassword', () => {
it('should forward forgot password request to auth service', async () => {
const payload = { email: 'test@test.com' };
const expectedResponse = { message: 'Password reset email sent' };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.forgotPassword(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/forgot-password?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
});
describe('validate', () => {
it('should forward validate request to auth service', async () => {
const payload = { token: 'token' };
const expectedResponse = { valid: true, user: { id: '123' } };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.validate(payload);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/validate?appId=${memoroAppId}`,
payload,
expect.any(Object)
);
expect(result).toEqual(expectedResponse);
});
});
describe('getCredits', () => {
it('should forward get credits request to auth service', async () => {
const authorization = 'Bearer token';
const expectedResponse = { credits: 100 };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.get.mockReturnValue(of(axiosResponse));
const result = await service.getCredits(authorization);
expect(httpService.get).toHaveBeenCalledWith(
`${authServiceUrl}/auth/credits?appId=${memoroAppId}`,
{
headers: {
Authorization: authorization,
},
}
);
expect(result).toEqual(expectedResponse);
});
it('should handle get credits errors', async () => {
const authorization = 'Bearer invalid';
const error: AxiosError = {
response: {
data: { message: 'Unauthorized' },
status: 401,
statusText: 'Unauthorized',
headers: {},
config: {} as any,
},
config: {} as any,
isAxiosError: true,
toJSON: () => ({}),
name: 'AxiosError',
message: 'Request failed',
};
mockHttpService.get.mockReturnValue(throwError(() => error));
await expect(service.getCredits(authorization)).rejects.toThrow();
});
});
describe('getDevices', () => {
it('should forward get devices request to auth service', async () => {
const authorization = 'Bearer token';
const expectedResponse = { devices: [{ id: 'device-1' }] };
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.get.mockReturnValue(of(axiosResponse));
const result = await service.getDevices(authorization);
expect(httpService.get).toHaveBeenCalledWith(
`${authServiceUrl}/auth/devices?appId=${memoroAppId}`,
{
headers: {
Authorization: authorization,
},
}
);
expect(result).toEqual(expectedResponse);
});
});
});

View file

@ -0,0 +1,228 @@
import { Injectable, HttpException, HttpStatus } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { firstValueFrom, map, catchError } from 'rxjs';
import { AxiosError } from 'axios';
import { BrandingConfig, SignupMetadata } from './interfaces/branding.interface';
@Injectable()
export class AuthProxyService {
private manaServiceUrl: string;
private memoroAppId: string;
constructor(
private httpService: HttpService,
private configService: ConfigService
) {
this.manaServiceUrl = this.configService.get<string>(
'MANA_SERVICE_URL',
'http://localhost:3000'
);
this.memoroAppId = this.configService.get<string>(
'MEMORO_APP_ID',
'973da0c1-b479-4dac-a1b0-ed09c72caca8'
);
}
/**
* Generic proxy method for POST requests
*/
private async proxyPost(endpoint: string, payload: any, headers: any = {}) {
const url = `${this.manaServiceUrl}${endpoint}?appId=${this.memoroAppId}`;
console.log(`[AuthProxy] Proxying POST request to: ${endpoint}`);
try {
const response = await firstValueFrom(
this.httpService
.post(url, payload, {
headers: {
'Content-Type': 'application/json',
...headers,
},
})
.pipe(
map((res) => res.data),
catchError((error: AxiosError) => {
console.error(`[AuthProxy] Error from mana-core-middleware:`, error.response?.data);
// Preserve the original error response
if (error.response) {
throw new HttpException(
error.response.data || 'Request failed',
error.response.status
);
}
throw new HttpException('Service unavailable', HttpStatus.SERVICE_UNAVAILABLE);
})
)
);
return response;
} catch (error) {
console.error(`[AuthProxy] Error proxying ${endpoint}:`, error);
throw error;
}
}
/**
* Generic proxy method for GET requests
*/
private async proxyGet(endpoint: string, headers: any = {}) {
const url = `${this.manaServiceUrl}${endpoint}?appId=${this.memoroAppId}`;
console.log(`[AuthProxy] Proxying GET request to: ${endpoint}`);
try {
const response = await firstValueFrom(
this.httpService
.get(url, {
headers: {
...headers,
},
})
.pipe(
map((res) => res.data),
catchError((error: AxiosError) => {
console.error(`[AuthProxy] Error from mana-core-middleware:`, error.response?.data);
// Preserve the original error response
if (error.response) {
throw new HttpException(
error.response.data || 'Request failed',
error.response.status
);
}
throw new HttpException('Service unavailable', HttpStatus.SERVICE_UNAVAILABLE);
})
)
);
return response;
} catch (error) {
console.error(`[AuthProxy] Error proxying ${endpoint}:`, error);
throw error;
}
}
// Auth endpoints
async signin(payload: any) {
// Log signin payload to understand device info flow
console.log('[AuthProxy] Signin request payload:', JSON.stringify(payload, null, 2));
if (payload.deviceInfo || payload.device_info) {
console.log('[AuthProxy] Device info present in signin request');
}
return this.proxyPost('/auth/signin', payload);
}
async signup(payload: any) {
// Hardcoded Memoro branding configuration
const memoroBranding: BrandingConfig = {
appName: 'Memoro',
logoUrl: 'memoro-logo.png',
primaryColor: '#F8D62B',
secondaryColor: '#f5c500',
websiteUrl: 'https://memoro.ai',
taglineDe: 'Sprechen statt Tippen',
taglineEn: 'Speak Instead of Type',
copyright: '© 2025 Memoro · Made with 💛 in Germany',
};
// Build payload with Memoro branding
const enhancedPayload: any = {
...payload,
redirectUrl: 'https://app.manacore.ai/welcome?appName=memoro',
};
// Add Memoro branding if not already provided in payload
if (!enhancedPayload.metadata) {
enhancedPayload.metadata = {};
}
// Merge: payload branding overrides default Memoro branding if provided
if (!enhancedPayload.metadata.branding) {
enhancedPayload.metadata.branding = memoroBranding;
} else {
// Merge: payload overrides default
enhancedPayload.metadata.branding = {
...memoroBranding,
...enhancedPayload.metadata.branding,
};
}
return this.proxyPost('/auth/signup', enhancedPayload);
}
async googleSignin(payload: any) {
return this.proxyPost('/auth/google-signin', payload);
}
async appleSignin(payload: any) {
return this.proxyPost('/auth/apple-signin', payload);
}
async refresh(payload: any) {
// Log the refresh payload to debug device info issues
console.log('[AuthProxy] Refresh request payload:', JSON.stringify(payload, null, 2));
// Check if device info is present - it's required for refresh
if (!payload.deviceInfo) {
console.error('[AuthProxy] Error: No device info in refresh request');
throw new HttpException(
{
error: 'Bad Request',
message: 'Device info is required for token refresh',
statusCode: 400,
},
HttpStatus.BAD_REQUEST
);
}
// Ensure the payload has the correct structure
const refreshPayload = {
refreshToken: payload.refreshToken,
appId: payload.appId || this.memoroAppId,
deviceInfo: payload.deviceInfo,
};
// Validate required fields
if (!refreshPayload.refreshToken) {
throw new HttpException(
{ error: 'Bad Request', message: 'Refresh token is required', statusCode: 400 },
HttpStatus.BAD_REQUEST
);
}
console.log('[AuthProxy] Device info included in refresh request');
return this.proxyPost('/auth/refresh', refreshPayload);
}
async logout(payload: any) {
return this.proxyPost('/auth/logout', payload);
}
async forgotPassword(payload: any) {
return this.proxyPost('/auth/forgot-password', payload);
}
async validate(payload: any) {
return this.proxyPost('/auth/validate', payload);
}
async getCredits(authHeader: string) {
return this.proxyGet('/auth/credits', {
Authorization: authHeader,
});
}
async getDevices(authHeader: string) {
return this.proxyGet('/auth/devices', {
Authorization: authHeader,
});
}
}
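The branding merge in `signup` above relies on object-spread order: defaults first, payload second, so caller-supplied fields override the hardcoded Memoro defaults. A minimal sketch with illustrative values:

```typescript
// Illustrative defaults and payload; not the real config values.
const defaultBranding = {
  appName: 'Memoro',
  primaryColor: '#F8D62B',
  websiteUrl: 'https://memoro.ai',
};
const payloadBranding = {
  primaryColor: '#FF5733', // caller override
};

// Later spreads win: primaryColor comes from the payload,
// the remaining fields keep their defaults.
const merged = { ...defaultBranding, ...payloadBranding };
```

This is the same `{ ...memoroBranding, ...enhancedPayload.metadata.branding }` pattern used in the service, applied one level deep; nested objects would be replaced wholesale, not deep-merged.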

View file

@ -0,0 +1,34 @@
/**
* Feature object structure for branding emails
*/
export interface BrandingFeature {
icon: string; // Emoji icon
titleDe: string; // German title
titleEn: string; // English title
descriptionDe: string; // German description
descriptionEn: string; // English description
}
/**
* Email branding configuration for signup confirmation emails
* All fields are optional and will fall back to app-branding.config.ts defaults
*/
export interface BrandingConfig {
appName?: string; // App display name
logoUrl?: string; // Logo filename or URL
primaryColor?: string; // Primary brand color (hex)
secondaryColor?: string; // Secondary color (hex)
websiteUrl?: string; // Website URL
taglineDe?: string; // German tagline
taglineEn?: string; // English tagline
features?: BrandingFeature[]; // Feature list
copyright?: string; // Footer copyright text
}
/**
* Metadata object that can be passed in signup requests
*/
export interface SignupMetadata {
branding?: BrandingConfig;
[key: string]: any; // Allow custom fields for email personalization
}

View file

@ -0,0 +1,324 @@
import { Test, TestingModule } from '@nestjs/testing';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { AuthClientService } from './auth-client.service';
import { UnauthorizedException } from '@nestjs/common';
import { of, throwError } from 'rxjs';
import { AxiosResponse, AxiosError } from 'axios';
describe('AuthClientService', () => {
let service: AuthClientService;
let httpService: jest.Mocked<HttpService>;
let configService: jest.Mocked<ConfigService>;
const mockHttpService = {
post: jest.fn(),
};
const mockConfigService = {
get: jest.fn(),
};
const authServiceUrl = 'http://localhost:3000';
const memoroAppId = 'test-app-id';
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
AuthClientService,
{
provide: HttpService,
useValue: mockHttpService,
},
{
provide: ConfigService,
useValue: mockConfigService,
},
],
}).compile();
service = module.get<AuthClientService>(AuthClientService);
httpService = module.get(HttpService);
configService = module.get(ConfigService);
// Clear and reset all mocks first
mockConfigService.get.mockClear();
mockHttpService.post.mockClear();
// Setup default config values
mockConfigService.get.mockImplementation((key: string, defaultValue?: any) => {
switch (key) {
case 'MANA_SERVICE_URL':
return authServiceUrl;
case 'MEMORO_APP_ID':
return memoroAppId;
default:
return defaultValue;
}
});
// Reset console.log mock
jest.spyOn(console, 'log').mockImplementation(() => {});
});
afterEach(() => {
jest.clearAllMocks();
jest.restoreAllMocks();
});
describe('constructor', () => {
it('should initialize with config values', () => {
expect(service).toBeDefined();
expect(configService.get).toHaveBeenCalledWith('MANA_SERVICE_URL', 'http://localhost:3000');
expect(configService.get).toHaveBeenCalledWith(
'MEMORO_APP_ID',
'973da0c1-b479-4dac-a1b0-ed09c72caca8'
);
});
it('should use default values when config not provided', async () => {
mockConfigService.get.mockReturnValue(undefined);
const module: TestingModule = await Test.createTestingModule({
providers: [
AuthClientService,
{
provide: HttpService,
useValue: mockHttpService,
},
{
provide: ConfigService,
useValue: mockConfigService,
},
],
}).compile();
const serviceWithDefaults = module.get<AuthClientService>(AuthClientService);
expect(serviceWithDefaults).toBeDefined();
});
});
describe('validateToken', () => {
it('should validate token successfully', async () => {
const token = 'valid-token';
const expectedUser = {
id: 'user-123',
email: 'test@test.com',
role: 'user',
};
const axiosResponse: AxiosResponse = {
data: {
valid: true,
user: expectedUser,
},
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.validateToken(token);
expect(console.log).toHaveBeenCalledWith(
'Calling: ',
`${authServiceUrl}/auth/validate?appId=${memoroAppId}`
);
expect(console.log).toHaveBeenCalledWith('Memoro App ID: ', memoroAppId);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/validate?appId=${memoroAppId}`,
{ appToken: token },
{
headers: {
'Content-Type': 'application/json',
},
}
);
expect(result).toEqual(expectedUser);
});
it('should throw UnauthorizedException for invalid token', async () => {
const token = 'invalid-token';
const axiosError: AxiosError = {
response: {
data: { message: 'Invalid token' },
status: 401,
statusText: 'Unauthorized',
headers: {},
config: {} as any,
},
config: {} as any,
isAxiosError: true,
toJSON: () => ({}),
name: 'AxiosError',
message: 'Request failed',
};
mockHttpService.post.mockReturnValue(throwError(() => axiosError));
await expect(service.validateToken(token)).rejects.toThrow(
new UnauthorizedException('Invalid token')
);
});
it('should throw UnauthorizedException when response is not valid', async () => {
const token = 'token';
const axiosResponse: AxiosResponse = {
data: {
valid: false,
},
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
await expect(service.validateToken(token)).rejects.toThrow(
new UnauthorizedException('Invalid token')
);
});
it('should throw UnauthorizedException when user is missing', async () => {
const token = 'token';
const axiosResponse: AxiosResponse = {
data: {
valid: true,
user: null,
},
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
await expect(service.validateToken(token)).rejects.toThrow(
new UnauthorizedException('Invalid token')
);
});
it('should handle network errors', async () => {
const token = 'token';
const error = new Error('Network error');
mockHttpService.post.mockReturnValue(throwError(() => error));
await expect(service.validateToken(token)).rejects.toThrow(
new UnauthorizedException('Invalid token')
);
});
it('should handle unexpected errors', async () => {
const token = 'token';
mockHttpService.post.mockImplementation(() => {
throw new Error('Unexpected error');
});
await expect(service.validateToken(token)).rejects.toThrow(
new UnauthorizedException('Invalid token')
);
});
});
describe('refreshToken', () => {
it('should refresh token successfully', async () => {
const refreshToken = 'valid-refresh-token';
const expectedResponse = {
appToken: 'new-app-token',
refreshToken: 'new-refresh-token',
};
const axiosResponse: AxiosResponse = {
data: expectedResponse,
status: 200,
statusText: 'OK',
headers: {},
config: {} as any,
};
mockHttpService.post.mockReturnValue(of(axiosResponse));
const result = await service.refreshToken(refreshToken);
expect(httpService.post).toHaveBeenCalledWith(
`${authServiceUrl}/auth/refresh`,
{ refreshToken, appId: memoroAppId },
{
headers: {
'Content-Type': 'application/json',
},
}
);
expect(result).toEqual(expectedResponse);
});
it('should throw UnauthorizedException for invalid refresh token', async () => {
const refreshToken = 'invalid-refresh-token';
const axiosError: AxiosError = {
response: {
data: { message: 'Invalid refresh token' },
status: 401,
statusText: 'Unauthorized',
headers: {},
config: {} as any,
},
config: {} as any,
isAxiosError: true,
toJSON: () => ({}),
name: 'AxiosError',
message: 'Request failed',
};
mockHttpService.post.mockReturnValue(throwError(() => axiosError));
await expect(service.refreshToken(refreshToken)).rejects.toThrow(
new UnauthorizedException('Invalid refresh token')
);
});
it('should handle network errors during refresh', async () => {
const refreshToken = 'refresh-token';
const error = new Error('Network error');
mockHttpService.post.mockReturnValue(throwError(() => error));
await expect(service.refreshToken(refreshToken)).rejects.toThrow(
new UnauthorizedException('Invalid refresh token')
);
});
it('should handle unexpected errors during refresh', async () => {
const refreshToken = 'refresh-token';
mockHttpService.post.mockImplementation(() => {
throw new Error('Unexpected error');
});
await expect(service.refreshToken(refreshToken)).rejects.toThrow(
new UnauthorizedException('Invalid refresh token')
);
});
it('should handle timeout errors', async () => {
const refreshToken = 'refresh-token';
const axiosError: AxiosError = {
code: 'ECONNABORTED',
config: {} as any,
isAxiosError: true,
toJSON: () => ({}),
name: 'AxiosError',
message: 'Timeout',
};
mockHttpService.post.mockReturnValue(throwError(() => axiosError));
await expect(service.refreshToken(refreshToken)).rejects.toThrow(
new UnauthorizedException('Invalid refresh token')
);
});
});
});


@@ -0,0 +1,92 @@
import { Injectable, UnauthorizedException } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { Observable, catchError, firstValueFrom, map } from 'rxjs';
import { AxiosError } from 'axios';
import { JwtPayload } from '../types/jwt-payload.interface';
@Injectable()
export class AuthClientService {
private authServiceUrl: string;
private memoroAppId: string;
constructor(
private httpService: HttpService,
private configService: ConfigService
) {
this.authServiceUrl = this.configService.get<string>(
'MANA_SERVICE_URL',
'http://localhost:3000'
);
this.memoroAppId = this.configService.get<string>(
'MEMORO_APP_ID',
'973da0c1-b479-4dac-a1b0-ed09c72caca8'
);
}
/**
* Validates a JWT token by calling the Auth service
*/
async validateToken(token: string): Promise<JwtPayload> {
try {
console.log('Calling: ', `${this.authServiceUrl}/auth/validate?appId=${this.memoroAppId}`);
console.log('Memoro App ID: ', this.memoroAppId);
const response = await firstValueFrom(
this.httpService
.post(
`${this.authServiceUrl}/auth/validate?appId=${this.memoroAppId}`,
{ appToken: token },
{
headers: {
'Content-Type': 'application/json',
},
}
)
.pipe(
map((response) => response.data),
catchError((error: AxiosError) => {
throw new UnauthorizedException('Invalid token');
})
)
);
if (response.valid && response.user) {
return response.user;
} else {
throw new UnauthorizedException('Invalid token response format');
}
} catch (error) {
throw new UnauthorizedException('Invalid token');
}
}
/**
* Refreshes a token by calling the Auth service
*/
async refreshToken(refreshToken: string): Promise<{ appToken: string; refreshToken: string }> {
try {
const response = await firstValueFrom(
this.httpService
.post(
`${this.authServiceUrl}/auth/refresh`,
{ refreshToken, appId: this.memoroAppId },
{
headers: {
'Content-Type': 'application/json',
},
}
)
.pipe(
map((response) => response.data),
catchError((error: AxiosError) => {
throw new UnauthorizedException('Invalid refresh token');
})
)
);
return response;
} catch (error) {
throw new UnauthorizedException('Invalid refresh token');
}
}
}
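The guard at the heart of `validateToken` — accept the response only when both `valid` and `user` are present — can be sketched as a standalone function. The types below are illustrative stand-ins, not the service's actual DTOs:

```typescript
// Illustrative stand-in for the JwtPayload shape used by the service.
interface UserPayload {
  id: string;
  email: string;
  role: string;
}

interface ValidateResponse {
  valid: boolean;
  user?: UserPayload | null;
}

// Mirrors the `response.valid && response.user` check in validateToken:
// anything else is treated as an invalid token.
function extractUser(response: ValidateResponse): UserPayload {
  if (response.valid && response.user) {
    return response.user;
  }
  throw new Error('Invalid token');
}
```

This is why the tests above cover three rejection paths separately: `valid: false`, `user: null`, and transport errors all collapse into the same `UnauthorizedException`.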


@@ -0,0 +1,11 @@
import { Module } from '@nestjs/common';
import { HttpModule } from '@nestjs/axios';
import { ConfigModule } from '@nestjs/config';
import { AuthClientService } from './auth-client.service';
@Module({
imports: [HttpModule, ConfigModule],
providers: [AuthClientService],
exports: [AuthClientService],
})
export class AuthModule {}


@@ -0,0 +1,65 @@
import { Controller, Post, Body, UseGuards, Logger, HttpCode, HttpStatus } from '@nestjs/common';
import { AudioCleanupService } from './audio-cleanup.service';
import { InternalServiceGuard } from '../guards/internal-service.guard';
import { CleanupResult } from './interfaces/cleanup.interfaces';
/**
* Controller for audio cleanup operations.
* Protected by InternalServiceGuard - only accessible via internal API key.
*/
@Controller('cleanup')
export class AudioCleanupController {
private readonly logger = new Logger(AudioCleanupController.name);
constructor(private readonly audioCleanupService: AudioCleanupService) {}
/**
* Trigger the full cleanup job.
* Called by pg_cron or manually for testing.
* Fetches users with cleanup enabled and processes their old audio files.
*/
@Post('trigger-from-cron')
@UseGuards(InternalServiceGuard)
@HttpCode(HttpStatus.OK)
async triggerFromCron(): Promise<CleanupResult> {
this.logger.log('Cleanup triggered from cron job');
return this.audioCleanupService.runCleanup();
}
/**
* Process cleanup for specific user IDs.
* Used when the caller already knows which users to process.
*/
@Post('process-old-audios')
@UseGuards(InternalServiceGuard)
@HttpCode(HttpStatus.OK)
async processOldAudios(@Body() body: { userIds: string[] }): Promise<CleanupResult> {
this.logger.log(`Processing cleanup for ${body.userIds?.length || 0} users`);
if (!body.userIds || body.userIds.length === 0) {
return {
success: true,
usersProcessed: 0,
filesDeleted: 0,
filesFailed: 0,
errors: [],
startedAt: new Date().toISOString(),
completedAt: new Date().toISOString(),
};
}
return this.audioCleanupService.deleteOldAudiosForUsers(body.userIds);
}
/**
* Manual trigger for testing/admin purposes.
* Same as trigger-from-cron but with a different endpoint name for clarity.
*/
@Post('trigger-manual')
@UseGuards(InternalServiceGuard)
@HttpCode(HttpStatus.OK)
async triggerManual(): Promise<CleanupResult> {
this.logger.log('Cleanup triggered manually');
return this.audioCleanupService.runCleanup();
}
}
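An internal caller (pg_cron via an HTTP extension, or an admin script) would hit these endpoints with the internal API key. A hedged sketch of the request shape — the `X-Internal-API-Key` header name matches what this commit's cleanup service sends to mana-core-middleware; the helper name, host, and key are placeholders:

```typescript
// Hypothetical helper building a request for the cleanup endpoints above.
// Passing userIds targets /cleanup/process-old-audios; omitting them
// targets the cron-style full run.
function buildCleanupRequest(baseUrl: string, apiKey: string, userIds?: string[]) {
  const path = userIds ? '/cleanup/process-old-audios' : '/cleanup/trigger-from-cron';
  return {
    url: `${baseUrl}${path}`,
    init: {
      method: 'POST',
      headers: {
        'X-Internal-API-Key': apiKey,
        'Content-Type': 'application/json',
      },
      body: userIds ? JSON.stringify({ userIds }) : undefined,
    },
  };
}
```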


@@ -0,0 +1,395 @@
import { Injectable, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient, SupabaseClient } from '@supabase/supabase-js';
import {
CleanupResult,
CleanupError,
UserCleanupEnabledResponse,
} from './interfaces/cleanup.interfaces';
interface StorageObject {
id: string;
name: string;
created_at: string;
bucket_id: string;
}
@Injectable()
export class AudioCleanupService {
private readonly logger = new Logger(AudioCleanupService.name);
private readonly memoroServiceClient: SupabaseClient;
private readonly memoroUrl: string;
private readonly manaCoreMiddlewareUrl: string;
private readonly internalApiKey: string;
private readonly STORAGE_BUCKET = 'user-uploads';
private readonly RETENTION_DAYS = 30;
private readonly BATCH_SIZE = 100; // Files per deletion batch
private readonly BATCH_DELAY_MS = 200; // Delay between batches
constructor(private configService: ConfigService) {
this.memoroUrl = this.configService.get<string>('MEMORO_SUPABASE_URL');
const memoroServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY');
this.manaCoreMiddlewareUrl = this.configService.get<string>('MANA_SERVICE_URL');
this.internalApiKey = this.configService.get<string>('INTERNAL_API_KEY');
if (!this.memoroUrl || !memoroServiceKey) {
throw new Error('MEMORO_SUPABASE_URL or MEMORO_SUPABASE_SERVICE_KEY not provided');
}
this.memoroServiceClient = createClient(this.memoroUrl, memoroServiceKey);
}
/**
* Main entry point for the cleanup job.
* Uses direct SQL on storage.objects table for efficient file discovery.
*/
async runCleanup(): Promise<CleanupResult> {
const startedAt = new Date().toISOString();
const errors: CleanupError[] = [];
let usersProcessed = 0;
let totalFilesDeleted = 0;
let totalFilesFailed = 0;
this.logger.log('Starting audio cleanup job (SQL-based)');
try {
// Step 1: Get users with auto-delete enabled from mana-core-middleware
const userIds = await this.getUsersWithCleanupEnabled();
this.logger.log(`Found ${userIds.length} users with audio cleanup enabled`);
if (userIds.length === 0) {
return {
success: true,
usersProcessed: 0,
filesDeleted: 0,
filesFailed: 0,
errors: [],
startedAt,
completedAt: new Date().toISOString(),
};
}
// Step 2: Process each user using SQL-based cleanup
for (const userId of userIds) {
try {
const result = await this.processUserCleanupSQL(userId);
usersProcessed++;
totalFilesDeleted += result.filesDeleted;
totalFilesFailed += result.filesFailed;
errors.push(...result.errors);
} catch (error) {
this.logger.error(`Failed to process cleanup for user ${userId}:`, error);
errors.push({
userId,
error: error.message || 'Unknown error processing user cleanup',
});
}
}
// Step 3: Log the cleanup run
await this.logCleanupRun({
usersProcessed,
filesDeleted: totalFilesDeleted,
filesFailed: totalFilesFailed,
errors,
startedAt,
});
return {
success: true,
usersProcessed,
filesDeleted: totalFilesDeleted,
filesFailed: totalFilesFailed,
errors,
startedAt,
completedAt: new Date().toISOString(),
};
} catch (error) {
this.logger.error('Audio cleanup job failed:', error);
return {
success: false,
usersProcessed,
filesDeleted: totalFilesDeleted,
filesFailed: totalFilesFailed,
errors: [...errors, { error: error.message || 'Unknown error' }],
startedAt,
completedAt: new Date().toISOString(),
};
}
}
/**
* Process cleanup for a specific list of user IDs.
*/
async deleteOldAudiosForUsers(userIds: string[]): Promise<CleanupResult> {
const startedAt = new Date().toISOString();
const errors: CleanupError[] = [];
let usersProcessed = 0;
let totalFilesDeleted = 0;
let totalFilesFailed = 0;
this.logger.log(`Processing cleanup for ${userIds.length} users`);
for (const userId of userIds) {
try {
const result = await this.processUserCleanupSQL(userId);
usersProcessed++;
totalFilesDeleted += result.filesDeleted;
totalFilesFailed += result.filesFailed;
errors.push(...result.errors);
} catch (error) {
this.logger.error(`Failed to process cleanup for user ${userId}:`, error);
errors.push({
userId,
error: error.message || 'Unknown error processing user cleanup',
});
}
}
return {
success: errors.length === 0,
usersProcessed,
filesDeleted: totalFilesDeleted,
filesFailed: totalFilesFailed,
errors,
startedAt,
completedAt: new Date().toISOString(),
};
}
/**
* Process cleanup for a single user using direct SQL on storage.objects table.
* Queries files older than retention period and deletes them in batches.
*/
private async processUserCleanupSQL(userId: string): Promise<{
filesDeleted: number;
filesFailed: number;
errors: CleanupError[];
}> {
const errors: CleanupError[] = [];
let filesDeleted = 0;
let filesFailed = 0;
// Query storage.objects directly via the get_old_storage_files function
const { data: oldFiles, error: queryError } = await this.memoroServiceClient.rpc(
'get_old_storage_files',
{
p_bucket_id: this.STORAGE_BUCKET,
p_user_id: userId,
p_retention_days: this.RETENTION_DAYS,
}
);
if (queryError) {
this.logger.error(`Failed to query old files for user ${userId}:`, queryError);
throw new Error(`Query error: ${queryError.message}`);
}
if (!oldFiles || oldFiles.length === 0) {
this.logger.log(`No old files found for user ${userId}`);
return { filesDeleted: 0, filesFailed: 0, errors: [] };
}
this.logger.log(`Found ${oldFiles.length} old files for user ${userId}`);
// Extract unique memoIds from file paths (format: userId/memoId/filename)
// Only include valid UUIDs (skip folders like "migration-reports")
const UUID_REGEX = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const memoIds = new Set<string>();
for (const file of oldFiles) {
const parts = file.name.split('/');
if (parts.length >= 2 && UUID_REGEX.test(parts[1])) {
memoIds.add(parts[1]); // memoId is the second part
}
}
// Delete files in batches
const filePaths = oldFiles.map((f: StorageObject) => f.name);
const result = await this.deleteFilesInBatches(filePaths, userId);
filesDeleted = result.deleted;
filesFailed = result.failed;
errors.push(...result.errors);
// Mark memos as audio deleted (only if files were actually deleted)
if (filesDeleted > 0 && memoIds.size > 0) {
await this.markMemosAsAudioDeleted(Array.from(memoIds), userId);
}
this.logger.log(`User ${userId}: deleted ${filesDeleted} files, failed ${filesFailed}`);
return { filesDeleted, filesFailed, errors };
}
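The memoId extraction in the method above is worth isolating: storage paths follow `userId/memoId/filename`, and only a second segment that is a well-formed UUID is kept, which is how folders such as `migration-reports` are skipped. A self-contained sketch of that logic:

```typescript
// Same UUID shape the service checks: 8-4-4-4-12 hex groups.
const UUID_REGEX = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Collects the unique memoIds from paths of the form userId/memoId/filename,
// ignoring any path whose second segment is not a UUID.
function extractMemoIds(paths: string[]): Set<string> {
  const memoIds = new Set<string>();
  for (const path of paths) {
    const parts = path.split('/');
    if (parts.length >= 2 && UUID_REGEX.test(parts[1])) {
      memoIds.add(parts[1]);
    }
  }
  return memoIds;
}
```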
/**
* Mark memos as having their audio deleted.
* Updates source.audio_deleted and source.audio_deleted_at fields.
*/
private async markMemosAsAudioDeleted(memoIds: string[], userId: string): Promise<void> {
const deletedAt = new Date().toISOString();
for (const memoId of memoIds) {
try {
// First get the current source to merge with
const { data: memo, error: fetchError } = await this.memoroServiceClient
.from('memos')
.select('source')
.eq('id', memoId)
.eq('user_id', userId)
.maybeSingle();
if (fetchError) {
this.logger.warn(`Error fetching memo ${memoId}:`, fetchError);
continue;
}
if (!memo) {
// Memo doesn't exist - this is fine, just skip it
this.logger.log(`Memo ${memoId} not found, skipping source update`);
continue;
}
// Update source with audio_deleted flag and clear the path
const updatedSource = {
...memo.source,
audio_path: null,
audio_deleted: true,
audio_deleted_at: deletedAt,
};
const { error: updateError } = await this.memoroServiceClient
.from('memos')
.update({ source: updatedSource })
.eq('id', memoId)
.eq('user_id', userId);
if (updateError) {
this.logger.warn(`Failed to mark memo ${memoId} as audio deleted:`, updateError);
} else {
this.logger.log(`Marked memo ${memoId} as audio deleted`);
}
} catch (error) {
this.logger.warn(`Error marking memo ${memoId} as audio deleted:`, error);
}
}
}
/**
* Delete files in batches to avoid rate limits and timeout issues.
*/
private async deleteFilesInBatches(
filePaths: string[],
userId: string
): Promise<{ deleted: number; failed: number; errors: CleanupError[] }> {
const errors: CleanupError[] = [];
let deleted = 0;
let failed = 0;
// Process in batches
for (let i = 0; i < filePaths.length; i += this.BATCH_SIZE) {
const batch = filePaths.slice(i, i + this.BATCH_SIZE);
try {
const { error: deleteError } = await this.memoroServiceClient.storage
.from(this.STORAGE_BUCKET)
.remove(batch);
if (deleteError) {
this.logger.error(`Batch delete failed:`, deleteError);
failed += batch.length;
errors.push({
userId,
error: `Batch delete failed: ${deleteError.message}`,
});
} else {
deleted += batch.length;
this.logger.log(
`Deleted batch of ${batch.length} files (${i + batch.length}/${filePaths.length})`
);
}
} catch (error) {
this.logger.error(`Batch delete error:`, error);
failed += batch.length;
errors.push({
userId,
error: error.message || 'Unknown batch delete error',
});
}
// Delay between batches
if (i + this.BATCH_SIZE < filePaths.length) {
await this.delay(this.BATCH_DELAY_MS);
}
}
return { deleted, failed, errors };
}
/**
* Get users with audio auto-delete enabled from mana-core-middleware.
*/
private async getUsersWithCleanupEnabled(): Promise<string[]> {
if (!this.manaCoreMiddlewareUrl || !this.internalApiKey) {
this.logger.warn('MANA_SERVICE_URL or INTERNAL_API_KEY not configured');
return [];
}
try {
const response = await fetch(
`${this.manaCoreMiddlewareUrl}/internal/users/audio-cleanup-enabled`,
{
method: 'GET',
headers: {
'X-Internal-API-Key': this.internalApiKey,
'Content-Type': 'application/json',
},
}
);
if (!response.ok) {
throw new Error(`Failed to fetch users: ${response.status} ${response.statusText}`);
}
const data: UserCleanupEnabledResponse = await response.json();
return data.userIds || [];
} catch (error) {
this.logger.error('Failed to get users with cleanup enabled:', error);
throw error;
}
}
/**
* Delay helper to avoid rate limits.
*/
private delay(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
/**
* Log cleanup run to the database for monitoring.
*/
private async logCleanupRun(data: {
usersProcessed: number;
filesDeleted: number;
filesFailed: number;
errors: CleanupError[];
startedAt: string;
}): Promise<void> {
try {
const { error } = await this.memoroServiceClient.from('audio_cleanup_logs').insert({
started_at: data.startedAt,
completed_at: new Date().toISOString(),
status: data.errors.length === 0 ? 'completed' : 'completed_with_errors',
users_processed: data.usersProcessed,
files_deleted: data.filesDeleted,
files_failed: data.filesFailed,
error_details: data.errors.length > 0 ? data.errors : null,
});
if (error) {
this.logger.warn('Failed to log cleanup run:', error);
}
} catch (error) {
this.logger.warn('Failed to log cleanup run:', error);
}
}
}
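The loop in `deleteFilesInBatches` reduces to fixed-size slicing with a pause between slices (`BATCH_SIZE` of 100 and a 200 ms delay above). The slicing alone can be sketched as a small pure helper — the name is illustrative, and the delay is omitted:

```typescript
// Splits items into consecutive batches of at most batchSize elements,
// matching how deleteFilesInBatches walks the file path list.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

Keeping the slice size well under Supabase Storage's per-request limits, plus the inter-batch delay, is what lets a user with thousands of expired files be cleaned up without rate-limit errors or request timeouts.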


@@ -0,0 +1,12 @@
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { AudioCleanupService } from './audio-cleanup.service';
import { AudioCleanupController } from './audio-cleanup.controller';
@Module({
imports: [ConfigModule],
controllers: [AudioCleanupController],
providers: [AudioCleanupService],
exports: [AudioCleanupService],
})
export class CleanupModule {}


@@ -0,0 +1,20 @@
export interface CleanupResult {
success: boolean;
usersProcessed: number;
filesDeleted: number;
filesFailed: number;
errors: CleanupError[];
startedAt: string;
completedAt: string;
}
export interface CleanupError {
userId?: string;
memoId?: string;
filePath?: string;
error: string;
}
export interface UserCleanupEnabledResponse {
userIds: string[];
}


@@ -0,0 +1,279 @@
import { Injectable, BadRequestException, ForbiddenException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
export interface CreditCheckResponse {
hasEnoughCredits: boolean;
currentCredits: number;
requiredCredits: number;
creditType: 'user' | 'space';
}
export interface CreditConsumptionResponse {
success: boolean;
message: string;
remainingCredits?: number;
}
@Injectable()
export class CreditClientService {
private readonly manaServiceUrl: string;
constructor(private configService: ConfigService) {
this.manaServiceUrl = this.configService.get<string>(
'MANA_SERVICE_URL',
'http://localhost:3000'
);
}
/**
* Check if user has enough personal credits
*/
async checkUserCredits(
userId: string,
requiredCredits: number,
token: string
): Promise<CreditCheckResponse> {
try {
const response = await fetch(`${this.manaServiceUrl}/users/credits`, {
method: 'GET',
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
},
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
throw new BadRequestException(
`Failed to check user credits: ${errorData.message || response.statusText}`
);
}
const data = await response.json();
const currentCredits = data.credits || 0;
return {
hasEnoughCredits: currentCredits >= requiredCredits,
currentCredits,
requiredCredits,
creditType: 'user',
};
} catch (error) {
console.error('Error checking user credits:', error);
throw error;
}
}
/**
* Check if space has enough credits
*/
async checkSpaceCredits(
spaceId: string,
requiredCredits: number,
token: string
): Promise<CreditCheckResponse> {
try {
const response = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits`, {
method: 'GET',
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
},
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
throw new BadRequestException(
`Failed to check space credits: ${errorData.message || response.statusText}`
);
}
const data = await response.json();
const currentCredits = data.space?.credits || data.creditSummary?.current_balance || 0;
return {
hasEnoughCredits: currentCredits >= requiredCredits,
currentCredits,
requiredCredits,
creditType: 'space',
};
} catch (error) {
console.error('Error checking space credits:', error);
throw error;
}
}
/**
* Consume credits from user's personal balance
*/
async consumeUserCredits(
userId: string,
amount: number,
token: string,
description?: string
): Promise<CreditConsumptionResponse> {
try {
const response = await fetch(`${this.manaServiceUrl}/users/credits/consume`, {
method: 'POST',
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
amount,
description: description || `Credit consumption for operation`,
}),
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
if (response.status === 400 && errorData.message?.includes('insufficient')) {
throw new InsufficientCreditsException({
requiredCredits: amount,
availableCredits: 0, // We don't know the exact amount from this error
creditType: 'user',
operation: 'credit_consumption',
});
}
throw new BadRequestException(
`Failed to consume user credits: ${errorData.message || response.statusText}`
);
}
const data = await response.json();
return {
success: true,
message: data.message || 'Credits consumed successfully',
};
} catch (error) {
console.error('Error consuming user credits:', error);
throw error;
}
}
/**
* Consume credits from space balance
*/
async consumeSpaceCredits(
spaceId: string,
amount: number,
token: string,
description?: string
): Promise<CreditConsumptionResponse> {
try {
const response = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits/consume`, {
method: 'POST',
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
amount,
description: description || `Credit consumption for operation`,
}),
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
if (response.status === 400 && errorData.message?.includes('insufficient')) {
throw new InsufficientCreditsException({
requiredCredits: amount,
availableCredits: 0, // We don't know the exact amount from this error
creditType: 'space',
operation: 'credit_consumption',
});
}
throw new BadRequestException(
`Failed to consume space credits: ${errorData.message || response.statusText}`
);
}
const data = await response.json();
return {
success: true,
message: data.message || 'Credits consumed successfully',
};
} catch (error) {
console.error('Error consuming space credits:', error);
throw error;
}
}
/**
* Check and consume credits based on operation context
* If spaceId is provided, check space credits first, fall back to user credits
* If no spaceId, use user credits only
*/
async checkAndConsumeCredits(
userId: string,
requiredCredits: number,
token: string,
options: {
spaceId?: string;
description?: string;
operation: string;
}
): Promise<{ consumed: boolean; creditType: 'user' | 'space'; message: string }> {
const { spaceId, description, operation } = options;
try {
// If spaceId provided, try space credits first
if (spaceId) {
try {
const spaceCheck = await this.checkSpaceCredits(spaceId, requiredCredits, token);
if (spaceCheck.hasEnoughCredits) {
await this.consumeSpaceCredits(
spaceId,
requiredCredits,
token,
description || `${operation} operation`
);
return {
consumed: true,
creditType: 'space',
message: `Consumed ${requiredCredits} credits from space balance`,
};
}
} catch (spaceError) {
console.warn(
`Space credit check failed, falling back to user credits: ${spaceError.message}`
);
}
}
// Use user credits (either as fallback or primary)
const userCheck = await this.checkUserCredits(userId, requiredCredits, token);
if (!userCheck.hasEnoughCredits) {
throw new InsufficientCreditsException({
requiredCredits,
availableCredits: userCheck.currentCredits,
creditType: userCheck.creditType,
operation: options.operation,
spaceId: options.spaceId,
});
}
await this.consumeUserCredits(
userId,
requiredCredits,
token,
description || `${operation} operation`
);
return {
consumed: true,
creditType: 'user',
message: `Consumed ${requiredCredits} credits from user balance`,
};
} catch (error) {
console.error(`Credit check and consumption failed for ${operation}:`, error);
throw error;
}
}
}
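The ordering inside `checkAndConsumeCredits` — try space credits when a `spaceId` is present, otherwise (or on failure) fall back to the user balance — can be distilled into a pure decision function. Balances here are plain numbers standing in for the remote checks, so this is a sketch of the policy, not the service's actual API:

```typescript
type CreditSource = 'space' | 'user';

// Picks which balance to debit, mirroring the fallback order above:
// space first when available and sufficient, then user, else nothing
// (the service raises InsufficientCreditsException in that case).
function pickCreditSource(
  required: number,
  userCredits: number,
  spaceCredits?: number
): CreditSource | null {
  if (spaceCredits !== undefined && spaceCredits >= required) {
    return 'space';
  }
  if (userCredits >= required) {
    return 'user';
  }
  return null;
}
```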


@@ -0,0 +1,532 @@
import { Test, TestingModule } from '@nestjs/testing';
import { ConfigService } from '@nestjs/config';
import { BadRequestException, ForbiddenException } from '@nestjs/common';
import { CreditConsumptionService, CreditConsumptionResult } from './credit-consumption.service';
import * as jwt from 'jsonwebtoken';
jest.mock('jsonwebtoken');
global.fetch = jest.fn();
describe('CreditConsumptionService', () => {
let service: CreditConsumptionService;
let configService: jest.Mocked<ConfigService>;
const mockUserId = 'user-123';
const mockSpaceId = 'space-123';
const mockUserToken = 'user-jwt-token';
const mockServiceToken = 'service-jwt-token';
const mockJwtSecret = 'test-secret';
const mockManaServiceUrl = 'https://mana-service.example.com';
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
CreditConsumptionService,
{
provide: ConfigService,
useValue: {
get: jest.fn((key: string) => {
const config: Record<string, string> = {
MANA_SERVICE_URL: mockManaServiceUrl,
MANA_JWT_SECRET: mockJwtSecret,
MEMORO_APP_ID: 'test-app-id',
};
return config[key];
}),
},
},
],
}).compile();
service = module.get<CreditConsumptionService>(CreditConsumptionService);
configService = module.get(ConfigService);
// Clear mocks
(global.fetch as jest.Mock).mockClear();
(jwt.sign as jest.Mock).mockClear();
});
afterEach(() => {
jest.clearAllMocks();
});
it('should be defined', () => {
expect(service).toBeDefined();
});
describe('getServiceRoleToken', () => {
it('should generate and cache a service role token', async () => {
const mockToken = 'generated-service-token';
(jwt.sign as jest.Mock).mockReturnValue(mockToken);
// Access private method through any type casting
const token = await (service as any).getServiceRoleToken();
expect(token).toBe(mockToken);
expect(jwt.sign).toHaveBeenCalledWith(
expect.objectContaining({
sub: 'memoro-service',
role: 'platform_admin',
app_id: 'test-app-id',
service: 'memoro-service',
}),
mockJwtSecret
);
});
it('should reuse cached token if still valid', async () => {
const mockToken = 'cached-service-token';
(jwt.sign as jest.Mock).mockReturnValue(mockToken);
// First call - generates new token
const token1 = await (service as any).getServiceRoleToken();
expect(jwt.sign).toHaveBeenCalledTimes(1);
// Second call - should use cached token
const token2 = await (service as any).getServiceRoleToken();
expect(token1).toBe(token2);
expect(jwt.sign).toHaveBeenCalledTimes(1); // Still only called once
});
it('should throw error if JWT secret is not configured', async () => {
configService.get.mockImplementation((key: string) => {
if (key === 'MANA_JWT_SECRET') return undefined;
return 'value';
});
await expect((service as any).getServiceRoleToken()).rejects.toThrow(
'Service role token generation failed: MANA_JWT_SECRET not configured'
);
});
});
describe('consumeCreditsForOperation', () => {
beforeEach(() => {
(jwt.sign as jest.Mock).mockReturnValue(mockServiceToken);
});
it('should successfully consume credits for an operation', async () => {
const mockResponse: CreditConsumptionResult = {
success: true,
creditsConsumed: 10,
creditType: 'user',
remainingCredits: 90,
message: 'Credits consumed successfully',
};
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: true,
json: async () => mockResponse,
});
const result = await service.consumeCreditsForOperation(
mockUserId,
'transcription',
10,
'Test transcription',
{ memoId: 'memo-123' },
undefined,
mockUserToken
);
expect(result).toEqual({
success: true,
creditsConsumed: 10,
creditType: 'user',
remainingCredits: 90,
message: 'Credits consumed successfully',
});
expect(global.fetch).toHaveBeenCalledWith(
`${mockManaServiceUrl}/credits/consume`,
expect.objectContaining({
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${mockUserToken}`,
'X-Service-Auth': 'memoro-service',
},
})
);
// Check body separately
const fetchCall = (global.fetch as jest.Mock).mock.calls[0];
const bodyData = JSON.parse(fetchCall[1].body);
expect(bodyData).toEqual({
userId: mockUserId,
amount: 10,
operation: 'transcription',
description: 'Test transcription',
metadata: expect.objectContaining({
memoId: 'memo-123',
service: 'memoro-service',
timestamp: expect.any(String),
}),
spaceId: undefined,
});
});
it('should consume space credits when spaceId is provided', async () => {
const mockResponse: CreditConsumptionResult = {
success: true,
creditsConsumed: 10,
creditType: 'space',
remainingCredits: 190,
message: 'Credits consumed successfully',
};
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: true,
json: async () => mockResponse,
});
const result = await service.consumeCreditsForOperation(
mockUserId,
'transcription',
10,
'Test transcription',
{},
mockSpaceId,
mockUserToken
);
expect(result.creditType).toBe('space');
expect(global.fetch).toHaveBeenCalledWith(
expect.any(String),
expect.objectContaining({
body: expect.stringContaining(`"spaceId":"${mockSpaceId}"`),
})
);
});
it('should throw BadRequestException for invalid inputs', async () => {
await expect(
service.consumeCreditsForOperation(
'',
'transcription',
10,
'Test',
{},
undefined,
mockUserToken
)
).rejects.toThrow(BadRequestException);
await expect(
service.consumeCreditsForOperation(
mockUserId,
'transcription',
0,
'Test',
{},
undefined,
mockUserToken
)
).rejects.toThrow(BadRequestException);
await expect(
service.consumeCreditsForOperation(
mockUserId,
'transcription',
10,
'Test',
{},
undefined,
''
)
).rejects.toThrow(BadRequestException);
});
it('should handle insufficient credits gracefully', async () => {
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: false,
status: 400,
statusText: 'Bad Request',
json: async () => ({ message: 'insufficient credits' }),
});
const result = await service.consumeCreditsForOperation(
mockUserId,
'transcription',
100,
'Test',
{},
undefined,
mockUserToken
);
expect(result).toEqual({
success: false,
creditsConsumed: 0,
creditType: 'user',
message: 'Insufficient credits. Required: 100',
error: 'insufficient credits',
});
});
it('should handle server errors', async () => {
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: false,
status: 500,
statusText: 'Internal Server Error',
json: async () => ({ message: 'Server error' }),
});
const result = await service.consumeCreditsForOperation(
mockUserId,
'transcription',
10,
'Test',
{},
undefined,
mockUserToken
);
expect(result).toEqual({
success: false,
creditsConsumed: 0,
creditType: 'user',
message: 'Credit consumption failed',
error: 'Credit consumption failed: Server error',
});
});
it('should handle network errors', async () => {
(global.fetch as jest.Mock).mockRejectedValueOnce(new Error('Network error'));
const result = await service.consumeCreditsForOperation(
mockUserId,
'transcription',
10,
'Test',
{},
undefined,
mockUserToken
);
expect(result).toEqual({
success: false,
creditsConsumed: 0,
creditType: 'user',
message: 'Credit consumption failed',
error: 'Network error',
});
});
});
describe('convenience methods', () => {
beforeEach(() => {
jest.spyOn(service, 'consumeCreditsForOperation').mockResolvedValue({
success: true,
creditsConsumed: 10,
creditType: 'user',
message: 'Success',
});
});
it('should consume transcription credits', async () => {
await service.consumeTranscriptionCredits(
mockUserId,
5,
10,
'memo-123',
'fast',
mockSpaceId,
mockUserToken
);
expect(service.consumeCreditsForOperation).toHaveBeenCalledWith(
mockUserId,
'transcription',
10,
'Transcription completed via fast route for memo memo-123',
{
memoId: 'memo-123',
route: 'fast',
durationMinutes: 5,
actualCost: 10,
},
mockSpaceId,
mockUserToken
);
});
it('should consume question credits', async () => {
const questionText = 'What is the main topic discussed?';
await service.consumeQuestionCredits(
mockUserId,
'memo-123',
questionText,
mockSpaceId,
mockUserToken
);
expect(service.consumeCreditsForOperation).toHaveBeenCalledWith(
mockUserId,
'question',
5,
'Question asked on memo memo-123',
{
memoId: 'memo-123',
questionLength: questionText.length,
questionPreview: questionText,
},
mockSpaceId,
mockUserToken
);
});
it('should consume combination credits', async () => {
const memoIds = ['memo-1', 'memo-2', 'memo-3'];
await service.consumeCombinationCredits(mockUserId, memoIds, mockSpaceId, mockUserToken);
expect(service.consumeCreditsForOperation).toHaveBeenCalledWith(
mockUserId,
'combination',
15, // 5 credits per memo
'Combined 3 memos',
{
memoCount: 3,
memoIds,
},
mockSpaceId,
mockUserToken
);
});
it('should consume blueprint credits', async () => {
await service.consumeBlueprintCredits(
mockUserId,
'blueprint-123',
'memo-123',
mockSpaceId,
mockUserToken
);
expect(service.consumeCreditsForOperation).toHaveBeenCalledWith(
mockUserId,
'blueprint',
5,
'Blueprint blueprint-123 applied to memo memo-123',
{
blueprintId: 'blueprint-123',
memoId: 'memo-123',
},
mockSpaceId,
mockUserToken
);
});
it('should consume headline credits', async () => {
await service.consumeHeadlineCredits(mockUserId, 'memo-123', mockSpaceId, mockUserToken);
expect(service.consumeCreditsForOperation).toHaveBeenCalledWith(
mockUserId,
'headline',
10,
'Headline generation for memo memo-123',
{
memoId: 'memo-123',
},
mockSpaceId,
mockUserToken
);
});
});
describe('validateCreditsForOperation', () => {
beforeEach(() => {
(jwt.sign as jest.Mock).mockReturnValue(mockServiceToken);
});
it('should validate credits successfully', async () => {
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: true,
json: async () => ({
valid: true,
availableCredits: 100,
}),
});
const result = await service.validateCreditsForOperation(
mockUserId,
'transcription',
10,
mockSpaceId
);
expect(result).toEqual({
hasEnoughCredits: true,
availableCredits: 100,
requiredCredits: 10,
});
});
it('should handle validation failure', async () => {
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: false,
json: async () => ({ message: 'Insufficient credits' }),
});
const result = await service.validateCreditsForOperation(mockUserId, 'transcription', 100);
expect(result).toEqual({
hasEnoughCredits: false,
availableCredits: 0,
requiredCredits: 100,
});
});
});
describe('getCurrentCredits', () => {
beforeEach(() => {
(jwt.sign as jest.Mock).mockReturnValue(mockServiceToken);
});
it('should get current credits for user', async () => {
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: true,
json: async () => ({ credits: 100 }),
});
const result = await service.getCurrentCredits(mockUserId);
expect(result).toEqual({
userCredits: 100,
spaceCredits: undefined,
});
});
it('should get both user and space credits', async () => {
(global.fetch as jest.Mock)
.mockResolvedValueOnce({
ok: true,
json: async () => ({ credits: 100 }),
})
.mockResolvedValueOnce({
ok: true,
json: async () => ({ creditSummary: { current_balance: 200 } }),
});
const result = await service.getCurrentCredits(mockUserId, mockSpaceId);
expect(result).toEqual({
userCredits: 100,
spaceCredits: 200,
});
});
it('should handle errors gracefully', async () => {
(global.fetch as jest.Mock).mockRejectedValue(new Error('Network error'));
const result = await service.getCurrentCredits(mockUserId, mockSpaceId);
expect(result).toEqual({
userCredits: 0,
spaceCredits: undefined,
});
});
});
});


@@ -0,0 +1,452 @@
import { Injectable, Logger, BadRequestException, ForbiddenException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
export interface CreditConsumptionResult {
success: boolean;
creditsConsumed: number;
creditType: 'user' | 'space';
remainingCredits?: number;
message: string;
error?: string;
}
export interface CreditOperationMetadata {
memoId?: string;
route?: string;
durationMinutes?: number;
actualCost?: number;
operationId?: string;
[key: string]: any;
}
export type CreditOperation =
| 'transcription'
| 'question'
| 'combination'
| 'blueprint'
| 'headline'
| 'memory_creation'
| 'memo_sharing'
| 'space_operation'
| 'meeting_recording';
@Injectable()
export class CreditConsumptionService {
private readonly logger = new Logger(CreditConsumptionService.name);
private readonly manaServiceUrl: string;
private readonly manaServiceKey: string;
private readonly appId: string;
constructor(private configService: ConfigService) {
this.manaServiceUrl =
this.configService.get<string>('MANA_SERVICE_URL') ||
'https://mana-core-middleware-111768794939.europe-west3.run.app';
this.manaServiceUrl = this.manaServiceUrl.replace(/\/$/, '');
this.manaServiceKey = this.configService.get<string>('MANA_SUPABASE_SECRET_KEY');
this.appId = this.configService.get<string>('MEMORO_APP_ID');
if (!this.appId) {
throw new Error('MEMORO_APP_ID environment variable is required');
}
}
/**
* Centralized credit consumption for all operations
* Uses the existing user JWT token to work with RLS
*/
async consumeCreditsForOperation(
userId: string,
operation: CreditOperation,
amount: number,
description: string,
metadata: CreditOperationMetadata = {},
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
try {
this.logger.log(
`[consumeCreditsForOperation] ${operation}: ${amount} credits for user ${userId}${spaceId ? ` in space ${spaceId}` : ''}`
);
// Input validation
if (!userId) {
throw new BadRequestException('User ID is required');
}
if (amount <= 0) {
throw new BadRequestException('Credit amount must be positive');
}
// Determine if we're using service auth or user auth
const isServiceAuth = !userToken;
// Prepare request body for mana-core-middleware
const consumeBody = {
userId,
appId: this.appId,
amount,
operation,
description,
metadata: {
...metadata,
service: 'memoro-service',
timestamp: new Date().toISOString(),
},
spaceId,
};
let response;
if (isServiceAuth) {
// Use service authentication endpoint
this.logger.log(`[consumeCreditsForOperation] Using service auth for user ${userId}`);
if (!this.manaServiceKey) {
throw new Error('MANA_SUPABASE_SECRET_KEY not configured');
}
// Use service endpoint with different body structure
const serviceBody = {
userId,
appId: this.appId,
amount,
operationType: operation,
description,
operationDetails: metadata,
spaceId,
};
response = await fetch(`${this.manaServiceUrl}/credits/service/consume`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${this.manaServiceKey}`,
'X-Service-Auth': 'memoro-service',
},
body: JSON.stringify(serviceBody),
});
} else {
// Use regular user token auth
this.logger.log(
`[consumeCreditsForOperation] Using user token: ${userToken.substring(0, 50)}...`
);
// Try to decode token payload for debugging (without verification)
try {
const parts = userToken.split('.');
if (parts.length === 3) {
const payload = parts[1];
const paddedPayload = payload + '='.repeat((4 - (payload.length % 4)) % 4);
const decodedPayload = Buffer.from(paddedPayload, 'base64').toString();
const tokenData = JSON.parse(decodedPayload);
this.logger.log(
`[consumeCreditsForOperation] Token payload:`,
JSON.stringify(tokenData, null, 2)
);
this.logger.log(
`[consumeCreditsForOperation] Token has app_id: ${tokenData.app_id}, sub: ${tokenData.sub}, aud: ${tokenData.aud}`
);
}
} catch (decodeError) {
this.logger.warn(
`[consumeCreditsForOperation] Could not decode token for debugging:`,
decodeError.message
);
}
response = await fetch(`${this.manaServiceUrl}/credits/consume`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${userToken}`,
'X-Service-Auth': 'memoro-service',
},
body: JSON.stringify(consumeBody),
});
}
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
const errorMessage = errorData.message || `HTTP ${response.status}: ${response.statusText}`;
this.logger.error(
`[consumeCreditsForOperation] Credit consumption failed: ${response.status} - ${errorMessage}`
);
if (response.status === 400 && errorMessage.toLowerCase().includes('insufficient')) {
// Try to extract available credits from error message if possible
const availableMatch = errorMessage.match(/Available:\s*(\d+)/);
const availableCredits = availableMatch ? parseInt(availableMatch[1]) : 0;
throw new InsufficientCreditsException({
requiredCredits: amount,
availableCredits,
creditType: spaceId ? 'space' : 'user',
operation,
spaceId,
});
}
throw new Error(`Credit consumption failed: ${errorMessage}`);
}
const result = await response.json();
this.logger.log(
`[consumeCreditsForOperation] Successfully consumed ${amount} credits for ${operation}`
);
// Note: Frontend will refresh credits periodically or after operations
return {
success: true,
creditsConsumed: amount,
creditType: result.creditType || (spaceId ? 'space' : 'user'),
remainingCredits: result.remainingCredits,
message: result.message || 'Credits consumed successfully',
};
} catch (error) {
this.logger.error(
`[consumeCreditsForOperation] Error consuming credits for ${operation}:`,
error
);
if (
error instanceof BadRequestException ||
error instanceof ForbiddenException ||
error instanceof InsufficientCreditsException
) {
throw error;
}
return {
success: false,
creditsConsumed: 0,
creditType: spaceId ? 'space' : 'user',
message: 'Credit consumption failed',
error: error.message,
};
}
}
/**
* Convenience methods for specific operations
*/
async consumeTranscriptionCredits(
userId: string,
durationMinutes: number,
actualCost: number,
memoId: string,
route: 'fast' | 'batch',
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
return this.consumeCreditsForOperation(
userId,
'transcription',
actualCost,
`Transcription completed via ${route} route for memo ${memoId}`,
{
memoId,
route,
durationMinutes,
actualCost,
},
spaceId,
userToken
);
}
async consumeQuestionCredits(
userId: string,
memoId: string,
questionText: string,
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
const questionCost = 5; // Standard question cost
return this.consumeCreditsForOperation(
userId,
'question',
questionCost,
`Question asked on memo ${memoId}`,
{
memoId,
questionLength: questionText.length,
questionPreview: questionText.substring(0, 100),
},
spaceId,
userToken
);
}
async consumeCombinationCredits(
userId: string,
memoIds: string[],
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
const combinationCost = memoIds.length * 5; // 5 credits per memo
return this.consumeCreditsForOperation(
userId,
'combination',
combinationCost,
`Combined ${memoIds.length} memos`,
{
memoCount: memoIds.length,
memoIds,
},
spaceId,
userToken
);
}
async consumeBlueprintCredits(
userId: string,
blueprintId: string,
memoId: string,
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
const blueprintCost = 5; // Standard blueprint cost
return this.consumeCreditsForOperation(
userId,
'blueprint',
blueprintCost,
`Blueprint ${blueprintId} applied to memo ${memoId}`,
{
blueprintId,
memoId,
},
spaceId,
userToken
);
}
async consumeHeadlineCredits(
userId: string,
memoId: string,
spaceId?: string,
userToken?: string
): Promise<CreditConsumptionResult> {
const headlineCost = 10; // Standard headline cost
return this.consumeCreditsForOperation(
userId,
'headline',
headlineCost,
`Headline generation for memo ${memoId}`,
{
memoId,
},
spaceId,
userToken
);
}
/**
* Validate credits before operation (pre-flight check)
*/
async validateCreditsForOperation(
userId: string,
operation: CreditOperation,
amount: number,
spaceId?: string
): Promise<{ hasEnoughCredits: boolean; availableCredits: number; requiredCredits: number }> {
try {
if (!this.manaServiceKey) {
throw new Error('MANA_SUPABASE_SECRET_KEY not configured');
}
const response = await fetch(`${this.manaServiceUrl}/credits/service/validate`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${this.manaServiceKey}`,
'X-Service-Auth': 'memoro-service',
},
body: JSON.stringify({
userId,
amount,
spaceId,
operation,
}),
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
this.logger.warn(`Credit validation failed: ${errorData.message}`);
return {
hasEnoughCredits: false,
availableCredits: 0,
requiredCredits: amount,
};
}
const result = await response.json();
return {
// mana-core returns { hasCredits, balance }
hasEnoughCredits: result.hasCredits || result.valid || false,
availableCredits: result.balance || result.availableCredits || 0,
requiredCredits: amount,
};
} catch (error) {
this.logger.error('Error validating credits:', error);
return {
hasEnoughCredits: false,
availableCredits: 0,
requiredCredits: amount,
};
}
}
/**
* Get current credit balance for user
*/
async getCurrentCredits(
userId: string,
spaceId?: string
): Promise<{ userCredits: number; spaceCredits?: number }> {
try {
if (!this.manaServiceKey) {
throw new Error('MANA_SUPABASE_SECRET_KEY not configured');
}
// Get user credits
const userResponse = await fetch(`${this.manaServiceUrl}/users/credits`, {
method: 'GET',
headers: {
Authorization: `Bearer ${this.manaServiceKey}`,
'X-Service-Auth': 'memoro-service',
'X-User-ID': userId, // Pass user ID in header for service role requests
},
});
let userCredits = 0;
if (userResponse.ok) {
const userData = await userResponse.json();
userCredits = userData.credits || 0;
}
let spaceCredits = undefined;
if (spaceId) {
const spaceResponse = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits`, {
method: 'GET',
headers: {
Authorization: `Bearer ${this.manaServiceKey}`,
'X-Service-Auth': 'memoro-service',
'X-User-ID': userId,
},
});
if (spaceResponse.ok) {
const spaceData = await spaceResponse.json();
spaceCredits = spaceData.creditSummary?.current_balance || 0;
}
}
return { userCredits, spaceCredits };
} catch (error) {
this.logger.error('Error getting current credits:', error);
return { userCredits: 0, spaceCredits: undefined };
}
}
}


@@ -0,0 +1,227 @@
import { Controller, Post, Body, UseGuards, BadRequestException, Get } from '@nestjs/common';
import { AuthGuard } from '../guards/auth.guard';
import { User } from '../decorators/user.decorator';
import { CreditClientService } from './credit-client.service';
import {
calculateTranscriptionCost,
calculateTranscriptionCostByLength,
OPERATION_COSTS,
} from './pricing.constants';
import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
// DTOs for credit operations
class CheckTranscriptionCreditsDto {
durationSeconds?: number;
transcriptLength?: number;
spaceId?: string;
}
class ConsumeTranscriptionCreditsDto {
durationSeconds?: number;
transcriptLength?: number;
spaceId?: string;
description?: string;
}
class ConsumeOperationCreditsDto {
operation:
| 'HEADLINE_GENERATION'
| 'MEMORY_CREATION'
| 'BLUEPRINT_PROCESSING'
| 'QUESTION_MEMO'
| 'NEW_MEMORY'
| 'MEMO_COMBINE';
spaceId?: string;
description?: string;
memoId?: string;
memoCount?: number; // For MEMO_COMBINE operation
}
@Controller('memoro/credits')
export class CreditController {
constructor(private readonly creditClientService: CreditClientService) {}
@Get('pricing')
async getPricing() {
return {
operationCosts: OPERATION_COSTS,
transcriptionPerHour: OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE * 60,
lastUpdated: new Date().toISOString(),
};
}
@Post('check-transcription')
@UseGuards(AuthGuard)
async checkTranscriptionCredits(@User() user: any, @Body() dto: CheckTranscriptionCreditsDto) {
if (!dto.durationSeconds && !dto.transcriptLength) {
throw new BadRequestException('Either durationSeconds or transcriptLength must be provided');
}
// Extract token from request
const token = user.token;
// Calculate required credits using new length-based or duration-based pricing
const requiredCredits = calculateTranscriptionCostByLength(
dto.transcriptLength,
dto.durationSeconds
);
try {
// If spaceId is provided, check space credits first
if (dto.spaceId) {
try {
const spaceCheck = await this.creditClientService.checkSpaceCredits(
dto.spaceId,
requiredCredits,
token
);
return {
hasEnoughCredits: spaceCheck.hasEnoughCredits,
requiredCredits,
currentCredits: spaceCheck.currentCredits,
creditType: 'space',
};
} catch (error) {
console.warn('Space credit check failed, falling back to user credits:', error.message);
}
}
// Check user credits
const userCheck = await this.creditClientService.checkUserCredits(
user.sub,
requiredCredits,
token
);
return {
hasEnoughCredits: userCheck.hasEnoughCredits,
requiredCredits,
currentCredits: userCheck.currentCredits,
creditType: 'user',
};
} catch (error) {
if (error instanceof InsufficientCreditsException) {
throw error; // Let the exception propagate with 402 status
}
throw new BadRequestException(`Failed to check credits: ${error.message}`);
}
}
@Post('consume-transcription')
@UseGuards(AuthGuard)
async consumeTranscriptionCredits(
@User() user: any,
@Body() dto: ConsumeTranscriptionCreditsDto
) {
if (!dto.durationSeconds && !dto.transcriptLength) {
throw new BadRequestException('Either durationSeconds or transcriptLength must be provided');
}
// Extract token from request
const token = user.token;
// Calculate required credits using new length-based or duration-based pricing
const requiredCredits = calculateTranscriptionCostByLength(
dto.transcriptLength,
dto.durationSeconds
);
const description =
dto.description ||
(dto.transcriptLength
? `Transcription (${dto.transcriptLength} chars)`
: `Transcription (${dto.durationSeconds}s)`);
try {
const result = await this.creditClientService.checkAndConsumeCredits(
user.sub,
requiredCredits,
token,
{
spaceId: dto.spaceId,
description,
operation: 'TRANSCRIPTION',
}
);
return {
success: true,
creditsConsumed: requiredCredits,
creditType: result.creditType,
message: result.message,
};
} catch (error) {
if (error instanceof InsufficientCreditsException) {
throw error; // Let the exception propagate with 402 status
}
throw new BadRequestException(`Failed to consume credits: ${error.message}`);
}
}
@Post('consume-operation')
@UseGuards(AuthGuard)
async consumeOperationCredits(@User() user: any, @Body() dto: ConsumeOperationCreditsDto) {
// Validate operation type
const validOperations = [
'HEADLINE_GENERATION',
'MEMORY_CREATION',
'BLUEPRINT_PROCESSING',
'QUESTION_MEMO',
'NEW_MEMORY',
'MEMO_COMBINE',
];
if (!validOperations.includes(dto.operation)) {
throw new BadRequestException(
`Invalid operation type. Must be one of: ${validOperations.join(', ')}`
);
}
// Extract token from request
const token = user.token;
// Define credit costs for different operations
const creditCosts = {
HEADLINE_GENERATION: 10,
MEMORY_CREATION: 10,
BLUEPRINT_PROCESSING: 5,
QUESTION_MEMO: 5,
NEW_MEMORY: 5,
MEMO_COMBINE: 5,
};
// Calculate required credits based on operation
let requiredCredits = creditCosts[dto.operation];
// For MEMO_COMBINE, multiply by the number of memos
if (dto.operation === 'MEMO_COMBINE' && dto.memoCount) {
requiredCredits = requiredCredits * dto.memoCount;
}
const description = dto.description || `${dto.operation} operation`;
try {
const result = await this.creditClientService.checkAndConsumeCredits(
user.sub,
requiredCredits,
token,
{
spaceId: dto.spaceId,
description,
operation: dto.operation,
}
);
return {
success: true,
creditsConsumed: requiredCredits,
creditType: result.creditType,
message: result.message,
};
} catch (error) {
if (error instanceof InsufficientCreditsException) {
throw error; // Let the exception propagate with 402 status
}
throw new BadRequestException(`Failed to consume credits: ${error.message}`);
}
}
}


@@ -0,0 +1,14 @@
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { AuthModule } from '../auth/auth.module';
import { CreditClientService } from './credit-client.service';
import { CreditController } from './credit.controller';
import { CreditConsumptionService } from './credit-consumption.service';
@Module({
imports: [ConfigModule, AuthModule],
controllers: [CreditController],
providers: [CreditClientService, CreditConsumptionService],
exports: [CreditClientService, CreditConsumptionService],
})
export class CreditsModule {}


@@ -0,0 +1,99 @@
/**
* Pricing constants for various operations in the memoro service
* These should match the costs defined in the app's appCosts.json
*/
export const OPERATION_COSTS = {
// Transcription costs
TRANSCRIPTION_PER_MINUTE: 2, // 2 credits per minute of audio
// Meeting recording costs
MEETING_RECORDING_PER_MINUTE: 2, // 2 credits per minute of recording (same as transcription)
// Memory/headline generation
HEADLINE_GENERATION: 10,
MEMORY_CREATION: 10,
// Blueprint operations
BLUEPRINT_PROCESSING: 5,
// Question/Memory processing
QUESTION_MEMO: 5, // 5 mana per question asked on a memo
NEW_MEMORY: 5, // 5 mana per new memory creation
MEMO_COMBINE: 5, // 5 mana per memo when combining
// Other operations
MEMO_SHARING: 1,
SPACE_OPERATION: 2,
} as const;
/**
* Calculate transcription cost based on audio duration
* @param durationSeconds - Duration of audio in seconds
* @returns Number of credits required (2 credits per minute, minimum 2 credits)
*/
export function calculateTranscriptionCost(durationSeconds: number): number {
// Log the input for debugging
console.log(
`[calculateTranscriptionCost] Input duration: ${durationSeconds} seconds (${(durationSeconds / 60).toFixed(2)} minutes)`
);
const minutes = durationSeconds / 60; // Convert seconds to minutes
const cost = Math.ceil(minutes * OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE);
// Apply minimum cost of 2 credits (1 minute worth) to prevent undercharging
const finalCost = Math.max(cost, 2);
console.log(
`[calculateTranscriptionCost] Calculated cost: ${cost}, Final cost (with minimum): ${finalCost} credits`
);
return finalCost;
}
/**
* Calculate memo combination cost based on number of memos
* @param memoCount - Number of memos being combined
* @returns Number of credits required
*/
export function calculateMemoCombineCost(memoCount: number): number {
return memoCount * OPERATION_COSTS.MEMO_COMBINE;
}
/**
* Calculate transcription cost with length-based pricing
* Uses existing per-minute pricing but ensures proper length-based calculation
* @param transcriptLength - Length of transcript in characters
* @param durationSeconds - Duration of audio in seconds (fallback if no transcript length)
* @returns Number of credits required
*/
export function calculateTranscriptionCostByLength(
transcriptLength?: number,
durationSeconds?: number
): number {
// If we have transcript length, use it to estimate duration
if (transcriptLength) {
// Estimate: ~150 words per minute, ~5 characters per word
const estimatedWords = transcriptLength / 5;
const estimatedMinutes = estimatedWords / 150;
const estimatedSeconds = estimatedMinutes * 60;
return calculateTranscriptionCost(estimatedSeconds);
}
// Fall back to duration-based calculation
if (durationSeconds) {
return calculateTranscriptionCost(durationSeconds);
}
// Throw error if no length or duration provided
throw new Error('Cannot calculate transcription cost: no transcript length or duration provided');
}
/**
* Get operation cost by operation type
* @param operation - The operation type
* @returns Number of credits required
*/
export function getOperationCost(operation: keyof typeof OPERATION_COSTS): number {
return OPERATION_COSTS[operation];
}
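As a sanity check on the heuristic above (~150 words per minute, ~5 characters per word, 2 credits per minute, 2-credit minimum), the length-based estimate can be recomputed standalone. This is a hypothetical reimplementation for illustration, not the exported `calculateTranscriptionCostByLength`:

```typescript
// Standalone sketch of the length-based cost estimate (assumed equivalent to
// the heuristic in pricing.constants.ts; not imported from it).
function estimateCostByLength(transcriptLength: number): number {
  const estimatedWords = transcriptLength / 5; // ~5 characters per word
  const estimatedMinutes = estimatedWords / 150; // ~150 words per minute
  const cost = Math.ceil(estimatedMinutes * 2); // 2 credits per minute
  return Math.max(cost, 2); // 2-credit minimum, as in calculateTranscriptionCost
}

console.log(estimateCostByLength(7500)); // 7500 chars ≈ 1500 words ≈ 10 min → 20
console.log(estimateCostByLength(10)); // far below a minute → minimum of 2
```

A 7,500-character transcript therefore estimates to roughly ten minutes of audio and 20 credits, matching the duration-based path.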


@@ -0,0 +1,51 @@
// Debug test file to verify logging in Cloud Run
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
async function debugTest() {
// Force all debug logs to use console.error for visibility
console.error('[DEBUG TEST 1] Starting debug test - console.error');
console.log('[DEBUG TEST 2] Starting debug test - console.log');
console.warn('[DEBUG TEST 3] Starting debug test - console.warn');
// Log process info
console.error('[DEBUG TEST] Process info:', {
nodeVersion: process.version,
platform: process.platform,
pid: process.pid,
cwd: process.cwd(),
execPath: process.execPath,
});
// Log all environment variables (be careful with sensitive data)
console.error('[DEBUG TEST] Environment variables count:', Object.keys(process.env).length);
console.error('[DEBUG TEST] NODE_ENV:', process.env.NODE_ENV);
console.error('[DEBUG TEST] PORT:', process.env.PORT);
console.error('[DEBUG TEST] AUDIO_MICROSERVICE_URL:', process.env.AUDIO_MICROSERVICE_URL);
// Check if dist files exist
const fs = require('fs');
const path = require('path');
const mainPath = path.join(__dirname, 'main.js');
console.error('[DEBUG TEST] Current file location:', __filename);
console.error('[DEBUG TEST] Main.js exists:', fs.existsSync(mainPath));
// Create the app to test NestJS logging
try {
const app = await NestFactory.create(AppModule, {
logger: ['error', 'warn', 'log', 'debug', 'verbose'],
});
console.error('[DEBUG TEST] NestJS app created successfully');
// Don't actually start the server, just test creation
await app.close();
console.error('[DEBUG TEST] Test completed successfully');
} catch (error) {
console.error('[DEBUG TEST] Error creating app:', error);
}
process.exit(0);
}
debugTest();


@@ -0,0 +1,12 @@
import { createParamDecorator, ExecutionContext } from '@nestjs/common';
import { JwtPayload } from '../types/jwt-payload.interface';
export const User = createParamDecorator(
(data: unknown, ctx: ExecutionContext): JwtPayload & { token: string } => {
const request = ctx.switchToHttp().getRequest();
return {
...request.user,
token: request.token,
};
}
);


@@ -0,0 +1,66 @@
# Standardized Error Handling
This directory contains standardized error handling utilities for the memoro-service.
## InsufficientCreditsException
A custom exception class for handling insufficient credit scenarios with consistent error responses.
### Features
- **HTTP Status Code**: 402 Payment Required
- **Standardized Error Format**: Includes required credits, available credits, credit type, and operation details
- **Type Safety**: Strongly typed error data structure
- **Consistent Responses**: All insufficient credit errors follow the same format
### Usage
```typescript
import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
// Throw when insufficient credits detected
throw new InsufficientCreditsException({
requiredCredits: 100,
availableCredits: 50,
creditType: 'user', // or 'space'
operation: 'transcription',
spaceId: 'space-uuid' // optional
});
```
### Error Response Format
```json
{
"statusCode": 402,
"error": "InsufficientCredits",
"message": "Insufficient user credits. Required: 100, Available: 50",
"details": {
"requiredCredits": 100,
"availableCredits": 50,
"creditType": "user",
"operation": "transcription",
"spaceId": null
}
}
```
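A client receiving this payload can branch on it before prompting the user to top up. The sketch below is a hypothetical consumer of the response format shown above; the interface mirrors the JSON shape and is not exported by the service:

```typescript
// Shape of the 402 body documented above (illustrative, client-side only).
interface InsufficientCreditsResponse {
  statusCode: number;
  error: string;
  message: string;
  details: {
    requiredCredits: number;
    availableCredits: number;
    creditType: 'user' | 'space';
    operation?: string;
    spaceId?: string | null;
  };
}

// How many credits the user would need to purchase to retry the operation.
function creditsShortfall(body: InsufficientCreditsResponse): number {
  if (body.statusCode !== 402 || body.error !== 'InsufficientCredits') return 0;
  return Math.max(0, body.details.requiredCredits - body.details.availableCredits);
}

const body: InsufficientCreditsResponse = {
  statusCode: 402,
  error: 'InsufficientCredits',
  message: 'Insufficient user credits. Required: 100, Available: 50',
  details: {
    requiredCredits: 100,
    availableCredits: 50,
    creditType: 'user',
    operation: 'transcription',
    spaceId: null,
  },
};
console.log(creditsShortfall(body)); // → 50
```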
### Helper Functions
- `createInsufficientCreditsError()`: Factory function to create the exception
- `isInsufficientCreditsError()`: Type guard to check if an error is an insufficient credits error
- `extractCreditInfoFromError()`: Extract credit information from various error types
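The message-parsing fallback that `extractCreditInfoFromError()` applies to plain errors can be sketched standalone. This is a simplified, hypothetical reimplementation for illustration, not the exported helper:

```typescript
// Parse credit info out of an error message of the form
// "Insufficient user credits. Required: 100, Available: 50".
function parseCreditInfo(
  message: string
): { requiredCredits: number; availableCredits: number; creditType: 'user' | 'space' } | null {
  const match = message.match(/Required:\s*(\d+),\s*Available:\s*(\d+)/);
  if (!match) return null;
  return {
    requiredCredits: parseInt(match[1], 10),
    availableCredits: parseInt(match[2], 10),
    creditType: message.includes('space') ? 'space' : 'user',
  };
}

const info = parseCreditInfo('Insufficient user credits. Required: 100, Available: 50');
console.log(info); // → { requiredCredits: 100, availableCredits: 50, creditType: 'user' }
```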
## Global Exception Filter
The `HttpExceptionFilter` in `/filters/http-exception.filter.ts` ensures that all HTTP exceptions are properly formatted and that `InsufficientCreditsException` responses are returned with the correct 402 status code.
## Migration Notes
All credit-consuming endpoints have been updated to use this standardized error handling:
- Transcription endpoints
- Question memo processing
- Memo combination
- All credit consumption operations
Legacy `ForbiddenException` and `BadRequestException` for insufficient credits have been replaced with `InsufficientCreditsException`.


@@ -0,0 +1,90 @@
import { HttpException, HttpStatus } from '@nestjs/common';
export interface InsufficientCreditsErrorData {
requiredCredits: number;
availableCredits: number;
creditType: 'user' | 'space';
operation?: string;
spaceId?: string;
}
/**
* Custom exception for insufficient credits scenarios
* Uses HTTP 402 Payment Required status code
*/
export class InsufficientCreditsException extends HttpException {
constructor(data: InsufficientCreditsErrorData) {
const message = `Insufficient ${data.creditType} credits. Required: ${data.requiredCredits}, Available: ${data.availableCredits}`;
const response = {
statusCode: HttpStatus.PAYMENT_REQUIRED,
error: 'InsufficientCredits',
message,
details: {
requiredCredits: data.requiredCredits,
availableCredits: data.availableCredits,
creditType: data.creditType,
operation: data.operation,
spaceId: data.spaceId,
},
};
super(response, HttpStatus.PAYMENT_REQUIRED);
}
}
/**
* Helper function to create standardized insufficient credits error
*/
export function createInsufficientCreditsError(
requiredCredits: number,
availableCredits: number,
creditType: 'user' | 'space' = 'user',
operation?: string,
spaceId?: string
): InsufficientCreditsException {
return new InsufficientCreditsException({
requiredCredits,
availableCredits,
creditType,
operation,
spaceId,
});
}
/**
* Type guard to check if an error is an insufficient credits error
*/
export function isInsufficientCreditsError(error: any): error is InsufficientCreditsException {
return (
error instanceof InsufficientCreditsException ||
(error instanceof HttpException && error.getStatus() === HttpStatus.PAYMENT_REQUIRED) ||
error?.message?.toLowerCase().includes('insufficient credits')
);
}
/**
* Extract credit information from various error types
*/
export function extractCreditInfoFromError(error: any): {
requiredCredits?: number;
availableCredits?: number;
creditType?: 'user' | 'space';
} | null {
if (error instanceof InsufficientCreditsException) {
const response = error.getResponse() as any;
return response.details || null;
}
// Try to parse from error message
const messageMatch = error?.message?.match(/Required:\s*(\d+),\s*Available:\s*(\d+)/);
if (messageMatch) {
return {
requiredCredits: parseInt(messageMatch[1], 10),
availableCredits: parseInt(messageMatch[2], 10),
creditType: error.message.includes('space') ? 'space' : 'user',
};
}
return null;
}

View file

@ -0,0 +1,37 @@
import {
ExceptionFilter,
Catch,
ArgumentsHost,
HttpException,
HttpStatus,
Logger,
} from '@nestjs/common';
import { Response } from 'express';
import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
/**
* Global exception filter to handle HTTP exceptions
* Ensures proper error responses, especially for InsufficientCreditsException
*/
@Catch(HttpException)
export class HttpExceptionFilter implements ExceptionFilter {
private readonly logger = new Logger(HttpExceptionFilter.name);
catch(exception: HttpException, host: ArgumentsHost) {
const ctx = host.switchToHttp();
const response = ctx.getResponse<Response>();
const status = exception.getStatus();
const exceptionResponse = exception.getResponse();
// Log the error for debugging
this.logger.error(`HTTP ${status} Error: ${exception.message}`, exception.stack);
// Ensure InsufficientCreditsException returns 402 status
if (exception instanceof InsufficientCreditsException) {
return response.status(HttpStatus.PAYMENT_REQUIRED).json(exceptionResponse);
}
// For other exceptions, return the standard response
response.status(status).json(exceptionResponse);
}
}

View file

@ -0,0 +1,230 @@
import { Test, TestingModule } from '@nestjs/testing';
import { ExecutionContext, UnauthorizedException } from '@nestjs/common';
import { AuthGuard } from './auth.guard';
import { AuthClientService } from '../auth/auth-client.service';
import { JwtPayload } from '../types/jwt-payload.interface';
describe('AuthGuard', () => {
let guard: AuthGuard;
let authClientService: jest.Mocked<AuthClientService>;
const mockJwtPayload: JwtPayload = {
sub: 'user-123',
email: 'test@example.com',
role: 'authenticated',
app_id: 'test-app',
aud: 'authenticated',
iat: Math.floor(Date.now() / 1000),
exp: Math.floor(Date.now() / 1000) + 3600,
};
const mockToken = 'mock-jwt-token';
const createMockExecutionContext = (headers: Record<string, string> = {}) => {
const request = {
headers,
user: undefined,
token: undefined,
};
return {
switchToHttp: () => ({
getRequest: () => request,
}),
getRequest: () => request, // Helper method to get request in tests
} as any;
};
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
AuthGuard,
{
provide: AuthClientService,
useValue: {
validateToken: jest.fn(),
},
},
],
}).compile();
guard = module.get<AuthGuard>(AuthGuard);
authClientService = module.get(AuthClientService);
});
it('should be defined', () => {
expect(guard).toBeDefined();
});
describe('canActivate', () => {
it('should return true and attach user/token to request when token is valid', async () => {
const mockContext = createMockExecutionContext({
authorization: `Bearer ${mockToken}`,
});
authClientService.validateToken.mockResolvedValue(mockJwtPayload);
const result = await guard.canActivate(mockContext);
const request = mockContext.getRequest();
expect(result).toBe(true);
expect(authClientService.validateToken).toHaveBeenCalledWith(mockToken);
expect(request.user).toEqual(mockJwtPayload);
expect(request.token).toBe(mockToken);
});
it('should throw UnauthorizedException when no authorization header is provided', async () => {
const mockContext = createMockExecutionContext({});
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow(
'No authorization header provided'
);
});
it('should throw UnauthorizedException when authorization header is empty', async () => {
const mockContext = createMockExecutionContext({
authorization: '',
});
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
});
it('should throw UnauthorizedException when token type is not Bearer', async () => {
const mockContext = createMockExecutionContext({
authorization: `Basic ${mockToken}`,
});
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow('Invalid token type');
});
it('should throw UnauthorizedException when no token is provided after Bearer', async () => {
const mockContext = createMockExecutionContext({
authorization: 'Bearer',
});
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow('No token provided');
});
it('should throw UnauthorizedException when token is only whitespace', async () => {
const mockContext = createMockExecutionContext({
authorization: 'Bearer ',
});
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow('No token provided');
});
it('should throw UnauthorizedException when token validation fails', async () => {
const mockContext = createMockExecutionContext({
authorization: `Bearer ${mockToken}`,
});
authClientService.validateToken.mockRejectedValue(new Error('Token validation failed'));
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow('Invalid token');
});
it('should handle various token validation errors', async () => {
const mockContext = createMockExecutionContext({
authorization: `Bearer ${mockToken}`,
});
const testCases = [
{ error: new Error('Token expired'), message: 'Invalid token' },
{ error: new Error('Invalid signature'), message: 'Invalid token' },
{ error: new UnauthorizedException('Custom auth error'), message: 'Invalid token' },
];
for (const testCase of testCases) {
authClientService.validateToken.mockRejectedValue(testCase.error);
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
await expect(guard.canActivate(mockContext)).rejects.toThrow(testCase.message);
}
});
it('should handle malformed authorization headers gracefully', async () => {
const testCases = [
'Bearer',
'Bearer ',
'Bearer ',
'BearerToken',
'Token ' + mockToken,
' Bearer ' + mockToken,
'Bearer ' + mockToken + ' extra',
];
for (const authHeader of testCases) {
const mockContext = createMockExecutionContext({
authorization: authHeader,
});
if (authHeader.startsWith('Bearer ') && authHeader.split(' ')[1]?.trim()) {
authClientService.validateToken.mockResolvedValue(mockJwtPayload);
const result = await guard.canActivate(mockContext);
expect(result).toBe(true);
} else {
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
}
}
});
it('should preserve original request properties when attaching user and token', async () => {
const originalRequest = {
headers: {
authorization: `Bearer ${mockToken}`,
'content-type': 'application/json',
},
body: { data: 'test' },
params: { id: '123' },
query: { filter: 'active' },
};
const mockContext = {
switchToHttp: () => ({
getRequest: () => originalRequest,
}),
} as ExecutionContext;
authClientService.validateToken.mockResolvedValue(mockJwtPayload);
await guard.canActivate(mockContext);
expect(originalRequest.headers).toEqual({
authorization: `Bearer ${mockToken}`,
'content-type': 'application/json',
});
expect(originalRequest.body).toEqual({ data: 'test' });
expect(originalRequest.params).toEqual({ id: '123' });
expect(originalRequest.query).toEqual({ filter: 'active' });
expect((originalRequest as any).user).toEqual(mockJwtPayload);
expect((originalRequest as any).token).toBe(mockToken);
});
it('should log error details when token validation fails', async () => {
const consoleSpy = jest.spyOn(console, 'error').mockImplementation();
const mockContext = createMockExecutionContext({
authorization: `Bearer ${mockToken}`,
});
const validationError = new Error('Token signature invalid');
authClientService.validateToken.mockRejectedValue(validationError);
await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException);
expect(consoleSpy).toHaveBeenCalledWith('Auth error:', 'Token signature invalid');
consoleSpy.mockRestore();
});
});
});

View file

@ -0,0 +1,47 @@
import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common';
import { Observable } from 'rxjs';
import { AuthClientService } from '../auth/auth-client.service';
import { JwtPayload } from '../types/jwt-payload.interface';
@Injectable()
export class AuthGuard implements CanActivate {
constructor(private authClientService: AuthClientService) {}
canActivate(context: ExecutionContext): boolean | Promise<boolean> | Observable<boolean> {
const request = context.switchToHttp().getRequest();
return this.validateRequest(request);
}
private async validateRequest(request: any): Promise<boolean> {
const authHeader = request.headers.authorization;
if (!authHeader) {
throw new UnauthorizedException('No authorization header provided');
}
const [type, token] = authHeader.split(' ');
if (type !== 'Bearer') {
throw new UnauthorizedException('Invalid token type');
}
if (!token) {
throw new UnauthorizedException('No token provided');
}
try {
// Validate the token with the Auth service
const payload = await this.authClientService.validateToken(token);
// Attach the user payload to the request for controllers to use
request.user = payload as JwtPayload;
// Also attach the token for potential forwarding to other services
request.token = token;
return true;
} catch (error) {
console.error('Auth error:', error.message);
throw new UnauthorizedException('Invalid token');
}
}
}
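The guard's authorization-header handling reduces to a single space-delimited split, which is why a leading space or a bare `Bearer` is rejected. A standalone sketch of that parsing, extracted here purely for illustration (`parseBearer` is not an exported helper of the guard):

```typescript
// Replicates the guard's header parsing: one split(' '), so only
// "Bearer <token>" with a single space and no leading whitespace
// yields a token. Anything after the token is silently ignored.
function parseBearer(authHeader: string | undefined): string {
  if (!authHeader) throw new Error('No authorization header provided');
  const [type, token] = authHeader.split(' ');
  if (type !== 'Bearer') throw new Error('Invalid token type');
  if (!token) throw new Error('No token provided');
  return token;
}

console.log(parseBearer('Bearer abc123')); // "abc123"
// parseBearer(' Bearer abc123') -> throws 'Invalid token type' (leading space)
// parseBearer('Bearer')         -> throws 'No token provided'
```

This matches the cases exercised in the malformed-header tests above: `BearerToken` and `Token …` fail the type check, while `Bearer` and `Bearer ` fail the token check.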

View file

@ -0,0 +1,35 @@
import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
/**
* Guard for internal service-to-service communication.
* Validates requests using the X-Internal-API-Key header.
* Used for scheduled jobs and internal microservice calls.
*/
@Injectable()
export class InternalServiceGuard implements CanActivate {
constructor(private configService: ConfigService) {}
async canActivate(context: ExecutionContext): Promise<boolean> {
const request = context.switchToHttp().getRequest();
const apiKey = request.headers['x-internal-api-key'];
if (!apiKey) {
throw new UnauthorizedException('Missing X-Internal-API-Key header');
}
const internalApiKey = this.configService.get<string>('INTERNAL_API_KEY');
if (!internalApiKey) {
throw new UnauthorizedException('Internal API key not configured');
}
if (apiKey !== internalApiKey) {
throw new UnauthorizedException('Invalid internal API key');
}
// Mark request as internal service call
request.isInternalService = true;
return true;
}
}
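The guard compares API keys with plain `!==`, which can leak the position of the first mismatching byte through timing. A hedged hardening sketch using Node's constant-time comparison follows; `keysMatch` is illustrative only and is not what the guard currently does:

```typescript
import { timingSafeEqual } from 'crypto';

// Constant-time comparison of two API keys. timingSafeEqual requires
// equal-length buffers, so check length first (this still leaks key
// length, which is usually acceptable for fixed-length keys).
function keysMatch(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}

console.log(keysMatch('secret-key', 'secret-key')); // true
console.log(keysMatch('secret-key', 'other')); // false
```

Swapping this in would be a one-line change inside `canActivate`, replacing the `apiKey !== internalApiKey` check.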

View file

@ -0,0 +1,225 @@
import { Test, TestingModule } from '@nestjs/testing';
import { ConfigService } from '@nestjs/config';
import { ExecutionContext, UnauthorizedException } from '@nestjs/common';
import { ServiceAuthGuard } from './service-auth.guard';
import { createClient } from '@supabase/supabase-js';
jest.mock('@supabase/supabase-js');
describe('ServiceAuthGuard', () => {
let guard: ServiceAuthGuard;
let configService: jest.Mocked<ConfigService>;
const mockConfigService = {
get: jest.fn(),
};
const mockSupabaseClient = {
from: jest.fn().mockReturnThis(),
select: jest.fn().mockReturnThis(),
limit: jest.fn().mockReturnThis(),
};
const createMockExecutionContext = (headers: any = {}): ExecutionContext =>
({
switchToHttp: () => ({
getRequest: () => ({
headers,
}),
}),
}) as ExecutionContext;
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
ServiceAuthGuard,
{
provide: ConfigService,
useValue: mockConfigService,
},
],
}).compile();
guard = module.get<ServiceAuthGuard>(ServiceAuthGuard);
configService = module.get(ConfigService);
(createClient as jest.Mock).mockReturnValue(mockSupabaseClient);
});
afterEach(() => {
jest.clearAllMocks();
});
describe('canActivate', () => {
it('should return true for valid MEMORO_SUPABASE_SERVICE_KEY', async () => {
const serviceKey = 'valid-memoro-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return serviceKey;
if (key === 'SUPABASE_SERVICE_KEY') return 'other-key';
if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co';
return null;
});
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`,
});
const request = context.switchToHttp().getRequest();
const result = await guard.canActivate(context);
expect(result).toBe(true);
expect(request.isServiceAuth).toBe(true);
expect(request.serviceKey).toBe(serviceKey);
});
it('should return true for valid SUPABASE_SERVICE_KEY', async () => {
const serviceKey = 'valid-supabase-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'other-key';
if (key === 'SUPABASE_SERVICE_KEY') return serviceKey;
if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co';
return null;
});
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`,
});
const request = context.switchToHttp().getRequest();
const result = await guard.canActivate(context);
expect(result).toBe(true);
expect(request.isServiceAuth).toBe(true);
expect(request.serviceKey).toBe(serviceKey);
});
it('should validate token with Supabase when not matching config keys', async () => {
const serviceKey = 'unknown-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key';
if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key';
if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co';
return null;
});
mockSupabaseClient.limit.mockResolvedValue({ error: null });
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`,
});
const request = context.switchToHttp().getRequest();
const result = await guard.canActivate(context);
expect(result).toBe(true);
expect(createClient).toHaveBeenCalledWith(
'https://example.supabase.co',
serviceKey,
expect.any(Object)
);
expect(mockSupabaseClient.from).toHaveBeenCalledWith('memos');
expect(mockSupabaseClient.select).toHaveBeenCalledWith('id');
expect(mockSupabaseClient.limit).toHaveBeenCalledWith(1);
expect(request.isServiceAuth).toBe(true);
expect(request.serviceKey).toBe(serviceKey);
});
it('should throw UnauthorizedException when no authorization header', async () => {
const context = createMockExecutionContext({});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('No authorization header provided')
);
});
it('should throw UnauthorizedException for invalid token type', async () => {
const context = createMockExecutionContext({
authorization: 'Basic invalidtoken',
});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('Invalid token type')
);
});
it('should throw UnauthorizedException when no token provided', async () => {
const context = createMockExecutionContext({
authorization: 'Bearer ',
});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('No token provided')
);
});
it('should throw UnauthorizedException when Supabase validation fails', async () => {
const serviceKey = 'invalid-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key';
if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key';
if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co';
return null;
});
mockSupabaseClient.limit.mockResolvedValue({
error: { message: 'Invalid service key', code: 'PGRST301' },
});
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`,
});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('Invalid service key')
);
});
it('should throw UnauthorizedException when Supabase client throws error', async () => {
const serviceKey = 'error-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key';
if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key';
if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co';
return null;
});
mockSupabaseClient.limit.mockRejectedValue(new Error('Network error'));
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`,
});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('Invalid service key')
);
});
it('should handle edge case with empty Bearer token', async () => {
const context = createMockExecutionContext({
authorization: 'Bearer',
});
await expect(guard.canActivate(context)).rejects.toThrow(
new UnauthorizedException('No token provided')
);
});
it('should accept a normally spaced Bearer authorization header', async () => {
const serviceKey = 'valid-memoro-service-key';
mockConfigService.get.mockImplementation((key: string) => {
if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return serviceKey;
return null;
});
const context = createMockExecutionContext({
authorization: `Bearer ${serviceKey}`, // Normal spacing
});
const request = context.switchToHttp().getRequest();
const result = await guard.canActivate(context);
expect(result).toBe(true);
expect(request.isServiceAuth).toBe(true);
});
});
});

View file

@ -0,0 +1,65 @@
import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { createClient } from '@supabase/supabase-js';
@Injectable()
export class ServiceAuthGuard implements CanActivate {
constructor(private configService: ConfigService) {}
async canActivate(context: ExecutionContext): Promise<boolean> {
const request = context.switchToHttp().getRequest();
const authHeader = request.headers.authorization;
if (!authHeader) {
throw new UnauthorizedException('No authorization header provided');
}
const [type, token] = authHeader.split(' ');
if (type !== 'Bearer') {
throw new UnauthorizedException('Invalid token type');
}
if (!token) {
throw new UnauthorizedException('No token provided');
}
// Check if the token is the service role key
// Accept both MEMORO_SUPABASE_SERVICE_KEY and SUPABASE_SERVICE_KEY for compatibility
const memoroServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY');
const supabaseServiceKey = this.configService.get<string>('SUPABASE_SERVICE_KEY');
if (token === memoroServiceKey || token === supabaseServiceKey) {
// This is a valid service-to-service request
// Attach a service identifier to the request
request.isServiceAuth = true;
request.serviceKey = token;
return true;
}
// Optionally, validate the token with Supabase to ensure it's a valid service key
try {
const supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL');
const supabase = createClient(supabaseUrl, token, {
auth: {
autoRefreshToken: false,
persistSession: false,
},
});
// Try to access a protected resource to validate the service key
const { error } = await supabase.from('memos').select('id').limit(1);
if (!error) {
// Valid service key
request.isServiceAuth = true;
request.serviceKey = token;
return true;
}
} catch (error) {
// Token validation failed
}
throw new UnauthorizedException('Invalid service key');
}
}

View file

@ -0,0 +1,36 @@
import { Controller, Get } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
@Controller('health')
export class HealthController {
constructor(private readonly configService: ConfigService) {}
@Get()
checkHealth() {
// Log debug info when health check is called
console.error('[HEALTH CHECK DEBUG] Environment check:');
console.error(
'[HEALTH CHECK DEBUG] AUDIO_MICROSERVICE_URL from env:',
process.env.AUDIO_MICROSERVICE_URL
);
console.error(
'[HEALTH CHECK DEBUG] AUDIO_MICROSERVICE_URL from ConfigService:',
this.configService.get<string>('AUDIO_MICROSERVICE_URL')
);
console.error('[HEALTH CHECK DEBUG] NODE_ENV:', process.env.NODE_ENV);
return {
status: 'ok',
timestamp: new Date().toISOString(),
service: 'memoro-service',
debug: {
nodeEnv: process.env.NODE_ENV,
audioServiceUrl: this.configService.get<string>('AUDIO_MICROSERVICE_URL'),
audioServiceUrlEnv: process.env.AUDIO_MICROSERVICE_URL,
port: process.env.PORT || 3001,
cwd: process.cwd(),
nodeVersion: process.version,
},
};
}
}

View file

@ -0,0 +1,7 @@
import { Module } from '@nestjs/common';
import { HealthController } from './health.controller';
@Module({
controllers: [HealthController],
})
export class HealthModule {}

View file

@ -0,0 +1,57 @@
export interface MemoroSpaceDto {
id: string;
name: string;
owner_id: string;
app_id: string;
roles: any;
credits: number;
created_at: string;
updated_at: string;
memo_count?: number;
isOwner?: boolean; // Added for frontend ownership indication
}
export interface LinkMemoSpaceDto {
memoId: string;
spaceId: string;
}
export interface UnlinkMemoSpaceDto {
memoId: string;
spaceId: string;
}
export interface SuccessResponseDto {
success: boolean;
message?: string;
}
// Video-related interfaces
export interface VideoMetadata {
width?: number;
height?: number;
fps?: number;
videoCodec?: string;
audioCodec?: string;
audioChannels?: number;
audioSampleRate?: number;
fileSize?: number;
bitrate?: number;
hasAudioTrack?: boolean;
}
export type MediaType = 'audio' | 'video';
export interface ProcessMediaDto {
filePath: string;
duration: number;
spaceId?: string;
blueprintId?: string | null;
recordingLanguages?: string[];
memoId?: string;
location?: any;
recordingStartedAt?: string;
enableDiarization?: boolean;
mediaType?: MediaType;
videoMetadata?: VideoMetadata;
}

View file

@ -0,0 +1,26 @@
export interface SpaceDto {
id: string;
name: string;
owner_id: string;
app_id: string;
roles: any;
credits: number;
created_at: string;
updated_at: string;
memo_count?: number; // Added for compatibility with MemoroSpaceDto
}
export interface SpaceInviteDto {
id: string;
space_id: string;
space?: SpaceDto;
user_email: string;
role: string;
status: string;
created_at: string;
updated_at: string;
}
export interface PendingInvitesResponseDto {
invites: SpaceInviteDto[];
}

View file

@ -0,0 +1,60 @@
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { HttpExceptionFilter } from './filters/http-exception.filter';
async function bootstrap() {
// Debug: Log environment variables at startup - using console.error for Cloud Run visibility
console.error('[STARTUP DEBUG] Environment variables check:');
console.error('[STARTUP DEBUG] AUDIO_MICROSERVICE_URL:', process.env.AUDIO_MICROSERVICE_URL);
console.error(
'[STARTUP DEBUG] All env vars with AUDIO:',
Object.keys(process.env).filter((key) => key.includes('AUDIO'))
);
console.error('[STARTUP DEBUG] NODE_ENV:', process.env.NODE_ENV);
console.error('[STARTUP DEBUG] Current working directory:', process.cwd());
console.error('[STARTUP DEBUG] __dirname:', __dirname);
const app = await NestFactory.create(AppModule);
app.enableCors();
// Apply global exception filter for standardized error responses
app.useGlobalFilters(new HttpExceptionFilter());
// Increase request body size limit to handle rich speaker diarization data
// NestJS default is 100KB, our speaker data can be ~150KB+
const bodyLimit = '10mb'; // More reasonable limit
app.use(
require('express').json({
limit: bodyLimit,
verify: (req, res, buf, encoding) => {
console.log(`[Body Parser] Received ${buf.length} bytes on ${req.url}`);
if (buf.length > 1024 * 1024) {
// Log if >1MB
console.log(
`[Body Parser] Large payload detected: ${(buf.length / 1024 / 1024).toFixed(2)}MB`
);
}
},
})
);
app.use(
require('express').urlencoded({
extended: true,
limit: bodyLimit,
verify: (req, res, buf, encoding) => {
console.log(`[Body Parser URL] Received ${buf.length} bytes on ${req.url}`);
},
})
);
console.log(`[NestJS] Body parser configured with limit: ${bodyLimit}`);
// Use PORT environment variable provided by Cloud Run, default to 3001
// Using 3001 instead of 3000 to avoid conflicts with the main middleware service in development
const port = process.env.PORT || 3001;
await app.listen(port);
console.log(`Memoro microservice listening on port ${port}`);
}
bootstrap();

View file

@ -0,0 +1,33 @@
/**
* DTO for creating a meeting bot
*/
export class CreateBotDto {
meeting_url: string;
space_id?: string;
}
/**
* DTO for stopping a meeting bot
*/
export class StopBotDto {
bot_id: string;
}
/**
* Query params for listing bots
*/
export class ListBotsQueryDto {
state?: string;
space_id?: string;
limit?: number;
offset?: number;
}
/**
* Query params for listing recordings
*/
export class ListRecordingsQueryDto {
space_id?: string;
limit?: number;
offset?: number;
}

Some files were not shown because too many files have changed in this diff.