diff --git a/.prettierignore b/.prettierignore index 3b5590518..c25a1c3c5 100644 --- a/.prettierignore +++ b/.prettierignore @@ -88,3 +88,9 @@ apps/picture/apps/landing/src/components/promptTemplates/CategoryGrid.astro **/*QUICK*.md **/*QUICKSTART*.md +# Legacy memoro apps (not yet migrated to monorepo standards, have their own tooling) +apps/memoro/apps/backend/** +apps/memoro/apps/audio-backend/** +apps/memoro/apps/mobile/** +apps/memoro/apps/landing/** + diff --git a/apps/memoro/.gitignore b/apps/memoro/.gitignore new file mode 100644 index 000000000..aec2b5650 --- /dev/null +++ b/apps/memoro/.gitignore @@ -0,0 +1 @@ +*.env.deploy diff --git a/apps/memoro/CLAUDE.md b/apps/memoro/CLAUDE.md new file mode 100644 index 000000000..77f3e757e --- /dev/null +++ b/apps/memoro/CLAUDE.md @@ -0,0 +1,459 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Repository Overview + +Memoro is a monorepo containing an AI-powered voice recording and memo management application with two apps: + +- **Mobile App** (`apps/mobile/`): React Native + Expo cross-platform app (iOS, Android, Web) +- **Web App** (`apps/web/`): SvelteKit companion web application + +Both apps share the same Supabase backend. 
+ +## Development Commands + +### Mobile App (`apps/mobile/`) + +```bash +# Development +npm start # Start Expo dev server +npm run start:dev # Start with dev environment +npm run start:prod # Start with prod environment +npm run ios # Run on iOS simulator +npm run android # Run on Android emulator +npm run web # Run web version +npm run web:dev # Run web with dev environment + +# Code Quality +npm run lint # Run ESLint and Prettier check +npm run lint:fix # Auto-fix linting issues +npm run lint:unused # Find unused imports/vars +npm run format # Format code with ESLint + Prettier + +# Build & Deploy +npm run prebuild # Generate native projects +npm run rebuild # Clean rebuild (removes node_modules, ios/, android/) +npm run web:build # Build for web deployment +eas build --profile development # Development build +eas build --profile preview # Preview build +eas build --profile production # Production build +``` + +### Web App (`apps/web/`) + +```bash +npm run dev # Start development server +npm run build # Build for production +npm run preview # Preview production build +npm run check # Run svelte-check +npm run check:watch # Watch mode for svelte-check +``` + +## Architecture + +### Mobile App Architecture + +**Framework Stack:** +- React Native 0.83.2 + Expo SDK 55 +- Expo Router (file-based routing) +- TypeScript +- NativeWind (Tailwind CSS for React Native) +- Zustand (state management) + +**Key Design Patterns:** + +1. **Feature-Based Architecture** (`features/`): + - Each feature is self-contained with its own services, hooks, components, and stores + - Features: auth, audioRecordingV2, memos, spaces, credits, subscription, i18n, theme, etc. + - 33 feature modules in total + +2. **Atomic Design System** (`components/`): + - `atoms/`: Basic UI components (Button, Input, Text, Icon, etc.) + - `molecules/`: Composite components (MemoPreview, RecordingBar, TagSelector, etc.) + - `organisms/`: Complex components (AudioRecorder, Memory, TranscriptDisplay, etc.) 
+ - `statistics/`: Specialized analytics components + +3. **Route Structure** (`app/`): + - `(public)/`: Unauthenticated routes (login, register) + - `(protected)/`: Authenticated routes with auth guard + - `(tabs)/`: Main tab navigation (home, memos, spaces) + - `(memo)/[id]`: Dynamic memo detail pages + - `(space)/[id]`: Dynamic space detail pages + - Uses Expo Router's file-based routing with typed routes enabled + +### Authentication System + +Uses a **middleware-based authentication bridge** between the app and Supabase: + +``` +Mobile App → Middleware Auth Service → Supabase +``` + +**Key Points:** +- Middleware issues three tokens: `manaToken`, `appToken` (Supabase-compatible JWT), `refreshToken` +- Tokens stored securely via platform-specific `safeStorage` utility +- Auth state managed via `AuthContext` provider +- Supabase client configured to use JWT from middleware +- Row Level Security (RLS) policies use JWT claims (`sub`, `role`, `app_id`) +- Supports email/password, Google Sign-In, and Apple Sign-In +- Automatic token refresh mechanism + +See `apps/mobile/features/auth/README.md` for detailed authentication flow. 
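The refresh mechanism above can be sketched in plain TypeScript. This is an illustrative sketch only, not the app's actual auth service: the `AuthTokens` shape mirrors the three tokens listed above, while `decodeJwtPayload` and `shouldRefresh` are hypothetical helper names.

```typescript
// Hypothetical sketch: the middleware returns three tokens; before each
// Supabase call the client checks whether the appToken (a Supabase-compatible
// JWT) is close to expiry and, if so, refreshes it via the refreshToken.

interface AuthTokens {
  manaToken: string;    // middleware session token
  appToken: string;     // Supabase-compatible JWT (claims: sub, role, app_id)
  refreshToken: string; // used to obtain fresh tokens
}

// Decode the payload of a JWT without verifying it (verification is server-side).
function decodeJwtPayload(jwt: string): Record<string, unknown> {
  const payload = jwt.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Refresh when fewer than `skewSeconds` of validity remain.
function shouldRefresh(appToken: string, nowSeconds: number, skewSeconds = 60): boolean {
  const exp = decodeJwtPayload(appToken).exp as number | undefined;
  if (exp === undefined) return true; // no exp claim: refresh to be safe
  return exp - nowSeconds < skewSeconds;
}
```

In the real app the refreshed tokens would then be written back through the platform-specific `safeStorage` utility before retrying the Supabase call.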
+ +### Audio Recording System + +**AudioRecordingV2** is the current audio recording implementation: + +- Uses `expo-audio` (migrated from deprecated `expo-av`) +- Platform-specific services: `IOSRecordingService`, `AndroidRecordingService` +- Zustand store for state management (`recordingStore`) +- Comprehensive error handling with retry strategies +- Android: Foreground service with wake locks +- iOS: Background audio capability with `mixWithOthers` mode +- Real-time status updates via polling +- Prevents zero-byte recordings with validation +- **Background recording works correctly** - continues when app is backgrounded or locked + +**iOS Background Recording:** +- Uses `interruptionMode: 'mixWithOthers'` for background recording support +- Recording continues when pressing home button, switching apps, or locking device +- Audio session automatically restored when returning to foreground +- JavaScript timers suspended in background, but native recording continues +- Handles real interruptions (phone calls, Siri) automatically + +**Recording Options:** +- High quality: M4A format with AAC encoding (MONO for compatibility) +- Presets: HIGH_QUALITY, MEDIUM_QUALITY, LOW_QUALITY, VOICE_MEMO +- Max duration and size limits +- Pause/resume support +- Audio level metering for waveform visualization +- Optimized for voice (MONO, 96 quality) to prevent FFmpeg 'chnl' box errors + +**Key Technical Details:** +- MONO recording prevents iOS spatial audio metadata issues +- Audio session verification on cold start prevents first-recording failures +- Status polling restarts when app returns from background +- Full duration captured (foreground + background time) + +See `apps/mobile/features/audioRecordingV2/README.md` for full details. +See `apps/mobile/features/audioRecordingV2/TROUBLESHOOTING.md` for bug fixes and solutions. 
+ +### AI Processing System + +**Blueprints:** +- Reusable AI analysis patterns for different use cases +- Examples: Text Analysis, Creative Writing, Meeting Notes +- Each blueprint has localized advice tips (32 languages) +- Stored in Supabase with public/private visibility + +**Prompts:** +- Specific AI tasks for content transformation +- Examples: Summary, To-Do extraction, Translation, Q&A +- Associated with blueprints via `blueprint_prompts` join table +- Multi-language support (German/English minimum) + +**Content Organization:** +- 8 categories: Coaching, Crafts, Healthcare, Journal, Journalism, Office, Sales, University +- Categories provide contextual grouping for blueprints/prompts + +See `apps/mobile/docs/blueprints_and_prompts.md` for full documentation. + +### Theme System + +**Multi-Theme Support:** +- 4 theme variants: Lume (gold), Nature (green), Stone (slate), Ocean (blue) +- Each theme has light and dark mode variants +- 13 semantic color tokens per theme (primary, secondary, borders, backgrounds, text) +- Theme state managed via `ThemeProvider` context +- Dark mode detection + manual override +- All colors defined in `tailwind.config.js` + +**Markdown Rendering:** +- Full Markdown support in memo display +- Theme-aware styles adapt to light/dark mode +- Centralized styles in `features/theme/markdownStyles.ts` +- Hybrid rendering with auto-detection + +### Spaces (Collaboration) + +**Team Workspaces:** +- Create unlimited collaborative spaces +- Role-based permissions (owner, member) +- Memo sharing within spaces +- Email-based invitation system +- Credit pools shared among team members +- Real-time sync via Supabase Realtime + +**Backend Integration:** +- RESTful API for space management +- RLS policies for access control +- Space-specific memo filtering + +See `apps/mobile/docs/SPACES.md` for implementation details. 
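The owner/member model above can be mirrored client-side to drive UI state. A hedged sketch — the authoritative enforcement is the Supabase RLS policies, and the exact per-role capabilities here are illustrative assumptions, not the documented permission matrix:

```typescript
// Hypothetical client-side mirror of the space roles described above.
// Real access control happens in RLS; this only decides what UI to show.

type SpaceRole = "owner" | "member";

interface SpacePermissions {
  deleteSpace: boolean;
  inviteMembers: boolean;
  shareMemo: boolean;
}

// Assumed capability split: owners manage the space, members contribute memos.
const rolePermissions: Record<SpaceRole, SpacePermissions> = {
  owner: { deleteSpace: true, inviteMembers: true, shareMemo: true },
  member: { deleteSpace: false, inviteMembers: false, shareMemo: true },
};

function permissionsFor(role: SpaceRole): SpacePermissions {
  return rolePermissions[role];
}
```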
+ +### Subscription & Credits + +**Mana Credit System:** +- Backend-driven transparent pricing +- Real-time credit validation before operations +- Usage tracking and analytics +- Credit sharing in team spaces +- Free tier: 150 Mana + 5 daily Mana + +**RevenueCat Integration:** +- Cross-platform (iOS, Android, Web) +- Subscription lifecycle management +- User identification tied to auth +- Purchase restoration across devices +- 4 individual plans: Stream (€5.99), River (€14.99), Lake (€29.99), Ocean (€49.99) +- Team and Enterprise plans available + +### Internationalization + +**32 Languages Supported:** +- Arabic, Bengali, Bulgarian, Chinese, Czech, Danish, Dutch, English, Estonian, Finnish, French, Gaelic, German, Greek, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Lithuanian, Latvian, Maltese, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Turkish, Ukrainian, Urdu, Vietnamese + +**Implementation:** +- `react-i18next` for translations +- Automatic device language detection +- Persistent user preference storage +- RTL support for Arabic/Hebrew +- Translation files in `features/i18n/translations/` + +### Real-Time Features + +**Supabase Realtime:** +- Live memo updates (INSERT, UPDATE, DELETE) +- Real-time collaboration in spaces +- `MemoRealtimeProvider` context for subscriptions +- Automatic reconnection handling +- RLS-aware subscriptions + +### Platform-Specific Notes + +**Web Platform:** +- Uses `.web.ts` file extensions for web-specific implementations +- `safeStorage.web.ts` uses localStorage (vs AsyncStorage on native) +- Web Audio API for recording (vs expo-audio) +- Some features unavailable: push notifications, haptics, native gestures + +**iOS:** +- Background audio capability required +- Audio session management +- Apple Sign-In integration +- RevenueCat StoreKit 2 + +**Android:** +- Foreground service for recording +- Wake lock to prevent sleep +- Android 16+ requires 
foreground to start recording +- Google Sign-In integration + +## Environment Configuration + +The mobile app uses environment-specific `.env` files: + +- `.env.dev`: Development environment (copy from `.env.dev.example`) +- `.env.prod`: Production environment (copy from `.env.prod.example`) +- `.env.local`: Active environment (auto-generated by npm scripts) + +**Key Environment Variables:** +- `EXPO_PUBLIC_SUPABASE_URL`: Supabase project URL +- `EXPO_PUBLIC_SUPABASE_ANON_KEY`: Supabase anon key +- `EXPO_PUBLIC_MIDDLEWARE_API_URL`: Middleware auth service URL +- `EXPO_PUBLIC_APPID`: Application ID for middleware +- RevenueCat keys for iOS/Android + +## Code Quality + +**Linting:** +- ESLint with TypeScript plugin +- React/React Native rules +- Unused imports auto-removal +- Configuration in `eslint.config.js` + +**Formatting:** +- Prettier with Tailwind plugin +- Auto-format on save recommended + +**TypeScript:** +- Strict mode enabled +- Typed routes from Expo Router +- Type definitions in `types/` and feature-specific types + +## Migration Notes + +**Expo SDK 55 (Current):** +- React Native 0.83.2, React 19.2 +- Native `allowsBackgroundRecording` support in expo-audio (no more workarounds needed) +- All Expo packages use `^55.x.x` version scheme +- New Architecture is the default (Legacy Architecture dropped) +- Android compileSdkVersion/targetSdkVersion 36 + +**Expo SDK 54 Migration (Historical):** +- Migrated from `expo-av` to `expo-audio` +- New audio recording API (`AudioModule.AudioRecorder`) +- Status polling instead of callbacks +- See `EXPO_54_AUDIO_RECORDING_MIGRATION.md` + +**SvelteKit Web App:** +- Separate web app being built as companion +- Shares Supabase backend with mobile app +- See `SVELTEKIT_MIGRATION_ANALYSIS.md` for migration plan + +## Testing Strategy + +**Manual Testing:** +- Test on both iOS and Android before commits +- Verify web platform compatibility +- Check dark mode and all theme variants +- Test with different languages + 
+**Platform Matrix:** +- iOS (simulator + device) +- Android (emulator + device) +- Web (Chrome, Safari, Firefox) + +## Common Patterns + +### Creating a New Feature + +1. Create feature directory in `features/` +2. Add subdirectories: `components/`, `hooks/`, `services/`, `store/`, `types/` +3. Export public API via `index.ts` +4. Add feature-specific README if complex +5. Update this CLAUDE.md if architectural + +### Adding a New Route + +1. Add file in `app/` directory following Expo Router conventions +2. Use `(protected)/` group if authentication required +3. Use `[id]` for dynamic routes +4. Enable typed routes in `app.json` (already enabled) +5. Import route types from `expo-router` + +### Working with Zustand Stores + +```typescript +// Create store +export const useMyStore = create((set, get) => ({ + // state + data: null, + + // actions + setData: (data) => set({ data }), + + // computed/derived + getData: () => get().data, +})); +``` + +Stores are located in: +- Global: `store/store.ts` +- Feature-specific: `features/[feature]/store/` + +### Platform-Specific Code + +Use file extensions for platform-specific implementations: +- `file.ts`: Default (mobile) +- `file.web.ts`: Web platform +- `file.ios.ts`: iOS only +- `file.android.ts`: Android only + +Metro bundler automatically resolves based on platform. + +### Error Handling + +1. Use feature-specific error types +2. Provide user-friendly messages +3. Include retry mechanisms where appropriate +4. Log errors to console for debugging +5. Consider Sentry integration for production + +## Build and Deployment + +**EAS Build Profiles:** +- `development`: Dev client with debugging +- `preview`: Internal distribution (TestFlight/Google Play Internal) +- `simulator`: iOS simulator build +- `production`: Auto-increment version, store-ready + +**Environment Selection:** +EAS profiles automatically load correct environment via `EXPO_PUBLIC_USE_ENV_FILE` in `eas.json`. 
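As a sketch of how that profile-to-environment mapping might look: the `build`, `developmentClient`, `env`, and `autoIncrement` fields follow the standard `eas.json` schema, but the exact values below are assumptions, not the project's actual configuration.

```json
{
  "build": {
    "development": {
      "developmentClient": true,
      "env": { "EXPO_PUBLIC_USE_ENV_FILE": ".env.dev" }
    },
    "production": {
      "autoIncrement": true,
      "env": { "EXPO_PUBLIC_USE_ENV_FILE": ".env.prod" }
    }
  }
}
```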
+ +**Version Management:** +- iOS: `buildNumber` in `app.json` +- Android: `versionCode` in `app.json` +- Production profile auto-increments both + +## Important Files + +- `app.json`: Expo configuration, plugins, permissions +- `eas.json`: EAS Build configuration +- `package.json`: Dependencies and scripts +- `tailwind.config.js`: Theme colors and styling +- `eslint.config.js`: Linting rules +- `babel.config.js`: Babel configuration +- `metro.config.js`: Metro bundler configuration (if present) +- `types/supabase.ts`: Auto-generated Supabase types + +## Database Schema + +The app uses Supabase with the following key tables: +- `memos`: Audio recordings and transcriptions +- `memories`: AI-generated insights from memos +- `blueprints`: AI analysis templates +- `prompts`: AI task templates +- `blueprint_prompts`: Many-to-many join table +- `categories`: Organization categories +- `tags`: User-defined tags +- `memo_tags`: Many-to-many join table +- `spaces`: Collaborative workspaces +- `space_members`: User-space relationships +- `profiles`: User profiles and settings + +All tables use RLS policies based on JWT claims. + +## Auto-Delete Audio Files (30-Day Retention) + +When users enable `autoDeleteAudiosAfter30Days` in their settings, audio files older than 30 days are automatically deleted while preserving memo records (transcripts, metadata). + +**Setting Location:** `app_settings.memoro.autoDeleteAudiosAfter30Days` (default: `false`) + +**Two Cleanup Mechanisms:** + +1. **Cloud Storage Cleanup** (memoro-service): + - Daily cron job at 3 AM UTC via Google Cloud Scheduler + - Queries `storage.objects` table for files older than 30 days + - Deletes from Supabase Storage bucket `user-uploads` + - Updates memo `source` field: `{ audio_path: null, audio_deleted: true, audio_deleted_at: timestamp }` + +2. 
**Local Device Cleanup** (mobile app): + - Runs on app launch after successful authentication + - Throttled to once per 24 hours + - Uses `fileStorageService.cleanupOldFiles()` with 30-day retention + - Implementation: `features/storage/services/localAudioCleanup.ts` + +**Key Files:** +- `memoro-service/src/cleanup/` - Cloud cleanup service +- `mana-core-middleware/src/modules/users/services/user-settings.service.ts` - User settings query +- `apps/mobile/features/storage/services/localAudioCleanup.ts` - Local device cleanup +- `apps/mobile/features/auth/contexts/AuthContext.tsx` - Cleanup trigger after auth + +## Known Issues + +1. **Android 16+ Recording**: Must be in foreground to start recording +2. **Zero-byte Recordings**: Occasional issue on some Android devices (retry mechanism in place) +3. **Token Refresh**: Email may not be in refreshed token (stored separately as workaround) +4. **Web Platform**: Limited functionality vs native (no push notifications, haptics, etc.) + +## Additional Documentation + +- `apps/mobile/README.md`: Full mobile app documentation +- `apps/web/README.md`: Web app documentation +- `features/auth/README.md`: Authentication system details +- `features/audioRecordingV2/README.md`: Audio recording implementation +- `docs/blueprints_and_prompts.md`: AI processing system +- `docs/SPACES.md`: Collaboration features +- `SVELTEKIT_MIGRATION_ANALYSIS.md`: Web app migration plan diff --git a/apps/memoro/README.md b/apps/memoro/README.md new file mode 100644 index 000000000..2f8158c44 --- /dev/null +++ b/apps/memoro/README.md @@ -0,0 +1,373 @@ +# Memoro + +**AI-powered voice recording and memo management platform** that transforms audio recordings into structured, searchable content using artificial intelligence. 
+ +![Platform](https://img.shields.io/badge/platform-iOS%20%7C%20Android%20%7C%20Web-blue) +![React Native](https://img.shields.io/badge/React%20Native-0.81.4-61dafb) +![Expo](https://img.shields.io/badge/Expo-54.0.0-000020) +![SvelteKit](https://img.shields.io/badge/SvelteKit-2.x-ff3e00) +![TypeScript](https://img.shields.io/badge/TypeScript-5.x-3178c6) + +## 📱 What is Memoro? + +Memoro is a cross-platform application that combines voice recording, AI processing, and collaborative features to help individuals and teams capture, organize, and analyze spoken content. Record meetings, interviews, lectures, or personal notes, and let AI transform them into structured, actionable insights. + +### Key Features + +✨ **High-Quality Audio Recording** - Background recording with pause/resume support +🤖 **AI-Powered Analysis** - Transform recordings using customizable Blueprints and Prompts +👥 **Collaborative Spaces** - Share and organize memos within team workspaces +🌍 **32 Languages** - Full internationalization with automatic language detection +🎨 **4 Theme Variants** - Light/dark mode with Nature, Ocean, Stone, and Lume themes +💰 **Credit System** - Transparent Mana-based pricing for AI operations +🔒 **Enterprise Security** - Row-level security with JWT authentication +📊 **Rich Analytics** - Track usage, productivity, and team insights + +## 🏗 Monorepo Structure + +``` +memoro_app/ +├── apps/ +│ ├── mobile/ # React Native + Expo app (iOS & Android native) +│ └── web/ # SvelteKit web application +├── CLAUDE.md # Development guidance for Claude Code +└── README.md # This file +``` + +Both applications share the same Supabase backend for seamless data synchronization. 
+ +## 🚀 Quick Start + +### Prerequisites + +- **Node.js** 18 or higher +- **npm** or **pnpm** +- **Expo CLI** (for mobile development) +- **iOS Simulator** (macOS only) or **Android Emulator** +- **Supabase Account** (for backend services) + +### Installation + +```bash +# Clone the repository +git clone +cd memoro_app + +# Install mobile app dependencies +cd apps/mobile +npm install + +# Install web app dependencies +cd ../web +npm install +``` + +### Environment Setup + +Both apps require environment variables. Copy the example files and fill in your credentials: + +```bash +# Mobile app +cd apps/mobile +cp .env.dev.example .env.dev +cp .env.prod.example .env.prod +# Edit .env.dev and .env.prod with your Supabase and API credentials + +# Web app +cd apps/web +cp .env.example .env +# Edit .env with your Supabase credentials +``` + +**Required Environment Variables:** +- `EXPO_PUBLIC_SUPABASE_URL` - Your Supabase project URL +- `EXPO_PUBLIC_SUPABASE_ANON_KEY` - Your Supabase anonymous key +- `EXPO_PUBLIC_MIDDLEWARE_API_URL` - Middleware authentication service URL +- `EXPO_PUBLIC_APPID` - Application ID for middleware + +### Running the Apps + +**Mobile App (iOS & Android):** +```bash +cd apps/mobile + +# Start development server +npm start + +# Run on iOS +npm run ios + +# Run on Android +npm run android + +# Run with specific environment +npm run start:dev # Development environment +npm run start:prod # Production environment +``` + +**Web App:** +```bash +cd apps/web + +# Start development server +npm run dev + +# Build for production +npm run build +npm run preview +``` + +## 📖 Documentation + +### Comprehensive Guides + +- **[CLAUDE.md](./CLAUDE.md)** - Complete architectural overview and development guidelines +- **[Mobile App README](./apps/mobile/README.md)** - Detailed mobile app documentation +- **[Web App README](./apps/web/README.md)** - SvelteKit web app guide + +### Feature Documentation + +- **[Authentication 
System](./apps/mobile/features/auth/README.md)** - Middleware-based auth with JWT +- **[Audio Recording](./apps/mobile/features/audioRecordingV2/README.md)** - AudioRecordingV2 implementation +- **[Blueprints & Prompts](./apps/mobile/docs/blueprints_and_prompts.md)** - AI processing system +- **[Spaces](./apps/mobile/docs/SPACES.md)** - Collaborative workspaces +- **[SvelteKit Migration](./SVELTEKIT_MIGRATION_ANALYSIS.md)** - Web app migration analysis + +## 🛠 Technology Stack + +### Mobile App (`apps/mobile/`) + +| Category | Technologies | +|----------|-------------| +| **Framework** | React Native 0.81.4, Expo SDK 54 | +| **Language** | TypeScript 5.x | +| **Navigation** | Expo Router (file-based) | +| **Styling** | NativeWind (Tailwind CSS) | +| **State** | Zustand, React Context | +| **Backend** | Supabase (PostgreSQL, Storage, Realtime) | +| **Audio** | expo-audio, Azure Speech Services | +| **Payments** | RevenueCat (iOS, Android) | +| **Analytics** | PostHog, Sentry | +| **i18n** | react-i18next (32 languages) | + +### Web App (`apps/web/`) + +| Category | Technologies | +|----------|-------------| +| **Framework** | SvelteKit 2.x | +| **Language** | TypeScript 5.x | +| **Styling** | TailwindCSS 3.x | +| **State** | Svelte Stores | +| **Backend** | Supabase (shared with mobile) | +| **i18n** | svelte-i18n | + +## 🏛 Architecture Highlights + +### Feature-Based Structure +The mobile app uses a feature-based architecture with **33 self-contained modules** (auth, audioRecordingV2, memos, spaces, credits, subscription, i18n, theme, etc.), each with its own services, hooks, components, and stores. 
+ +### Atomic Design System +Components are organized using atomic design principles: +- **Atoms**: Button, Input, Text, Icon (16 components) +- **Molecules**: MemoPreview, RecordingBar, TagSelector (21 components) +- **Organisms**: AudioRecorder, Memory, TranscriptDisplay (9 components) +- **Statistics**: Analytics components (14 components) + +### Middleware Authentication +Uses a custom middleware service as a bridge between the app and Supabase: +``` +Mobile/Web App → Middleware Auth → Supabase (with JWT + RLS) +``` +- Three token types: `manaToken`, `appToken`, `refreshToken` +- Platform-specific secure storage +- Automatic token refresh +- Supports email/password, Google, and Apple Sign-In + +### AI Processing Pipeline +- **Blueprints**: Reusable analysis patterns (Text Analysis, Creative Writing, Meeting Notes) +- **Prompts**: Specific AI tasks (Summary, To-Do, Translation, Q&A) +- **Categories**: 8 organizational categories (Office, Healthcare, University, etc.) +- Multi-language support with localized advice + +## 🎯 Key Features Deep Dive + +### Audio Recording System (V2) +- High-quality M4A/AAC recording +- Background recording with foreground service (Android) +- Pause/resume support +- Real-time audio metering +- Platform-specific optimizations (iOS/Android) +- Crash recovery with automatic segmentation +- Zero-byte recording prevention + +### Collaborative Spaces +- Create unlimited team workspaces +- Role-based permissions (owner, member) +- Email-based invitation system +- Shared credit pools +- Real-time collaboration via Supabase Realtime + +### Theme System +4 complete theme variants with light/dark modes: +- **Lume**: Modern gold & dark +- **Nature**: Soothing green +- **Stone**: Elegant slate +- **Ocean**: Tranquil blue + +Each theme includes 13 semantic color tokens for consistent UI. 
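The token structure can be sketched as a small TypeScript model. The theme and mode names come from the docs; the token subset and all hex values below are made up for illustration — the real definitions live in `tailwind.config.js`.

```typescript
// Illustrative sketch: every theme variant resolves the same semantic token
// names to different colors, with a light and a dark palette each.

type ThemeName = "lume" | "nature" | "stone" | "ocean";
type Mode = "light" | "dark";

// A small subset of the 13 semantic tokens, for illustration only.
interface ThemeTokens {
  primary: string;
  background: string;
  text: string;
}

const themes: Record<ThemeName, Record<Mode, ThemeTokens>> = {
  lume: {
    light: { primary: "#c9a227", background: "#ffffff", text: "#1a1a1a" },
    dark: { primary: "#e6c35c", background: "#121212", text: "#f5f5f5" },
  },
  nature: {
    light: { primary: "#2f7d4f", background: "#ffffff", text: "#1a1a1a" },
    dark: { primary: "#58b37e", background: "#0f1512", text: "#eef5f0" },
  },
  stone: {
    light: { primary: "#475569", background: "#ffffff", text: "#1a1a1a" },
    dark: { primary: "#94a3b8", background: "#0f172a", text: "#e2e8f0" },
  },
  ocean: {
    light: { primary: "#0369a1", background: "#ffffff", text: "#1a1a1a" },
    dark: { primary: "#38bdf8", background: "#082f49", text: "#e0f2fe" },
  },
};

function resolveTokens(theme: ThemeName, mode: Mode): ThemeTokens {
  return themes[theme][mode];
}
```

Keying components to semantic tokens rather than raw colors is what lets a single switch of `ThemeName` or `Mode` restyle the whole UI.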
+ +### Internationalization +**32 supported languages** with: +- Automatic device language detection +- Persistent user preferences +- RTL support (Arabic, Hebrew) +- Complete UI translations + +Languages: Arabic, Bengali, Bulgarian, Chinese, Czech, Danish, Dutch, English, Estonian, Finnish, French, Gaelic, German, Greek, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Lithuanian, Latvian, Maltese, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Turkish, Ukrainian, Urdu, Vietnamese. + +## 💻 Development + +### Code Quality Tools + +```bash +# Mobile app linting +cd apps/mobile +npm run lint # Check code quality +npm run lint:fix # Auto-fix issues +npm run lint:unused # Find unused imports/vars +npm run format # Format with Prettier + ESLint + +# Web app checking +cd apps/web +npm run check # Type check +npm run check:watch # Watch mode +``` + +### Building for Production + +**Mobile App (EAS Build):** +```bash +cd apps/mobile + +# Development build (with dev client) +eas build --profile development + +# Preview build (internal testing) +eas build --profile preview + +# Production build (store submission) +eas build --profile production +``` + +**Web App:** +```bash +cd apps/web + +# Build static site +npm run build + +# Preview production build +npm run preview +``` + +## 📊 Project Statistics + +- **~10,890** TypeScript/JavaScript files in mobile app +- **33** feature modules +- **60+** reusable components +- **32** language translations +- **4** theme variants (8 including dark modes) +- **2** platforms (mobile + web) +- **1** shared Supabase backend + +## 🔒 Security + +- **Row Level Security (RLS)** on all Supabase tables +- **JWT-based authentication** with middleware +- **Secure token storage** (platform-specific) +- **Automatic token rotation** +- **Environment variable protection** +- **Sensitive file exclusion** (.gitignore) + +## 🤝 Contributing + +1. 
Read the [CLAUDE.md](./CLAUDE.md) for architectural guidance +2. Follow the atomic design system for components +3. Use feature-based organization for new features +4. Test on both iOS and Android before committing +5. Run linting and formatting before pushing +6. Update documentation for significant changes + +## 📝 Common Development Tasks + +### Adding a New Feature +```bash +# 1. Create feature directory in mobile app +mkdir -p apps/mobile/features/my-feature/{components,hooks,services,store,types} + +# 2. Create index.ts for public API +touch apps/mobile/features/my-feature/index.ts + +# 3. Add feature-specific README if complex +touch apps/mobile/features/my-feature/README.md + +# 4. Update CLAUDE.md if architecturally significant +``` + +### Adding a New Route (Mobile) +```bash +# File-based routing with Expo Router +# Protected route: +touch apps/mobile/app/\(protected\)/my-route.tsx + +# Public route: +touch apps/mobile/app/\(public\)/my-route.tsx +``` + +### Platform-Specific Code (Mobile App Only) +```bash +# Create platform variants for iOS/Android differences +touch apps/mobile/features/my-feature/myService.ts # Default/shared +touch apps/mobile/features/my-feature/myService.ios.ts # iOS-specific +touch apps/mobile/features/my-feature/myService.android.ts # Android-specific + +# Metro bundler automatically resolves the correct file based on platform +# Note: .web.ts variants are no longer used - use apps/web/ for web features +``` + +### Adding a New Route (Web App) +```bash +# SvelteKit file-based routing +# Protected route: +mkdir -p apps/web/src/routes/\(protected\)/my-route +touch apps/web/src/routes/\(protected\)/my-route/+page.svelte + +# Public route: +mkdir -p apps/web/src/routes/my-route +touch apps/web/src/routes/my-route/+page.svelte +``` + +## 🐛 Known Issues + +1. **Android 16+**: Must be in foreground to start recording (platform restriction) +2. 
**Zero-byte recordings**: Occasional issue on some Android devices (retry mechanism implemented) +3. **Token refresh**: Email may not be in refreshed JWT (stored separately as workaround) + +## 📄 License + +Proprietary - All rights reserved + +--- + +## 🔗 Quick Links + +- **Documentation**: [CLAUDE.md](./CLAUDE.md) +- **Mobile App**: [apps/mobile/README.md](./apps/mobile/README.md) +- **Web App**: [apps/web/README.md](./apps/web/README.md) +- **Architecture**: See CLAUDE.md for detailed architecture +- **Issue Tracking**: (Add your issue tracker link) +- **Support**: (Add your support contact) + +--- + +**Built with ❤️ using React Native, Expo, SvelteKit, and Supabase** diff --git a/apps/memoro/apps/audio-backend/.env.example b/apps/memoro/apps/audio-backend/.env.example new file mode 100644 index 000000000..a7cfbb5cc --- /dev/null +++ b/apps/memoro/apps/audio-backend/.env.example @@ -0,0 +1,15 @@ +# Server Configuration +PORT=1337 + +# Azure Speech Service +AZURE_SPEECH_KEY=your-azure-speech-key +AZURE_SPEECH_REGION=swedencentral + +# Azure Storage Account +AZURE_STORAGE_ACCOUNT_NAME=your-storage-account +AZURE_STORAGE_ACCOUNT_KEY=your-storage-key + +# Supabase Configuration +SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co +SUPABASE_SERVICE_KEY=your-service-key +SUPABASE_ANON_KEY=your-anon-key diff --git a/apps/memoro/apps/audio-backend/.gcloudignore b/apps/memoro/apps/audio-backend/.gcloudignore new file mode 100644 index 000000000..2264e4c18 --- /dev/null +++ b/apps/memoro/apps/audio-backend/.gcloudignore @@ -0,0 +1,13 @@ +.gcloudignore +.git +.gitignore +node_modules/ +npm-debug.log +.env +.env.local +.env.*.local +uploads/ +dist/ +*.log +README.md +.dockerignore \ No newline at end of file diff --git a/apps/memoro/apps/audio-backend/.gitignore b/apps/memoro/apps/audio-backend/.gitignore new file mode 100644 index 000000000..4d92c3db4 --- /dev/null +++ b/apps/memoro/apps/audio-backend/.gitignore @@ -0,0 +1,6 @@ +/node_modules +/dist 
+pubsub-service-account-key.json +# Deployment secrets +.env.deploy +DEPLOY.md diff --git a/apps/memoro/apps/audio-backend/CHANGELOG.md b/apps/memoro/apps/audio-backend/CHANGELOG.md new file mode 100644 index 000000000..8328e19c4 --- /dev/null +++ b/apps/memoro/apps/audio-backend/CHANGELOG.md @@ -0,0 +1,26 @@ +# Audio Microservice Changelog + +## [Unreleased] + +### Added +- Service-to-service authentication using Supabase service role keys +- Support for `MEMORO_SUPABASE_SERVICE_KEY` environment variable +- UserId parameter in batch metadata updates for ownership validation + +### Changed +- All memoro service callbacks now use dedicated `/service/` endpoints +- Authentication uses service role key instead of user JWT tokens +- Updated callback methods: + - `notifyTranscriptionComplete`: Now calls `/memoro/service/transcription-completed` + - `notifyAppendTranscriptionComplete`: Now calls `/memoro/service/append-transcription-completed` + - `storeBatchJobMetadata`: Now calls `/memoro/service/update-batch-metadata` + +### Fixed +- 401 authentication errors when calling memoro service +- Callbacks no longer fail due to expired user tokens +- Service-to-service communication is now independent of user sessions + +### Security +- Service role keys are never exposed to clients +- All service-to-service communication uses HTTPS +- Environment variables store sensitive credentials \ No newline at end of file diff --git a/apps/memoro/apps/audio-backend/Dockerfile b/apps/memoro/apps/audio-backend/Dockerfile new file mode 100644 index 000000000..0af302a93 --- /dev/null +++ b/apps/memoro/apps/audio-backend/Dockerfile @@ -0,0 +1,42 @@ +FROM node:20-alpine + +# Install FFmpeg 8.x from Alpine edge repository +# - Native support for iOS spatial audio 'chnl' v1 metadata box +# - Fixes: "Unsupported 'chnl' box with version 1" error +# - Install mpg123-libs from edge to avoid symbol conflicts +RUN apk add --no-cache \ + --repository=https://dl-cdn.alpinelinux.org/alpine/edge/main \ 
+    --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community \
+    ffmpeg \
+    mpg123-libs
+
+WORKDIR /app
+
+# Copy package files
+COPY package*.json ./
+
+# Install all dependencies (including dev dependencies for build)
+RUN npm ci
+
+# Copy source code
+COPY . .
+
+# Build the application
+RUN npm run build
+
+# Remove dev dependencies to reduce image size
+RUN npm prune --omit=dev
+
+# Cloud Run uses PORT environment variable
+EXPOSE ${PORT:-1337}
+
+# Use non-root user for security
+RUN addgroup -g 1001 -S nodejs
+RUN adduser -S nestjs -u 1001
+
+# Create uploads directory owned by the runtime user so it is writable at runtime
+RUN mkdir -p uploads && chown nestjs:nodejs uploads
+USER nestjs
+
+# Start the application
+CMD ["npm", "run", "start:prod"]
\ No newline at end of file
diff --git a/apps/memoro/apps/audio-backend/README.md b/apps/memoro/apps/audio-backend/README.md
new file mode 100644
index 000000000..3d88ea145
--- /dev/null
+++ b/apps/memoro/apps/audio-backend/README.md
@@ -0,0 +1,265 @@
+# Enhanced Audio & Video Transcription Microservice
+
+NestJS microservice for advanced audio and video processing with transcription. Features dual routing: fast real-time processing and enhanced Azure Batch transcription for long files.
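The dual-routing decision keys off audio duration. A minimal sketch of the rule (threshold taken from this README; the helper name is hypothetical, not part of the service):

```typescript
// Hypothetical sketch of the dual-routing rule: recordings shorter than
// 115 minutes take the fast real-time route, longer ones go to Azure Batch.
const BATCH_THRESHOLD_MINUTES = 115; // 1h55m

function chooseRoute(durationMinutes: number): 'fast' | 'batch' {
  return durationMinutes < BATCH_THRESHOLD_MINUTES ? 'fast' : 'batch';
}

console.log(chooseRoute(42));  // short memo
console.log(chooseRoute(180)); // long recording
```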
+
+## 🎯 What It Does
+
+### Audio Processing
+- **Receives audio file** uploads (MP3, WAV, M4A, AAC, OGG, WebM, FLAC)
+- **Validates format** and file size (50MB max)
+- **Converts to Azure-compatible WAV format** using FFmpeg
+- **Enhanced diarization** with detection of up to 10 speakers
+- **Multi-language support** with automatic language identification and smart fallback
+- **Uploads to Azure Blob Storage** with SAS tokens
+- **Starts Azure Batch transcription** with advanced speaker processing
+- **Recovery tracking** via memo metadata storage
+- **Returns job ID** for tracking and recovery
+
+### Video Processing (NEW)
+- **Extracts audio from video files** (MP4, MOV, AVI, MKV, WEBM, FLV, WMV)
+- **Automatic video-to-audio conversion** using FFmpeg
+- **High-quality audio extraction** optimized for speech recognition
+- **Supports common video formats** with audio tracks
+- **Smart routing** (fast <115min, batch ≥115min) based on extracted audio duration
+- **Full transcription pipeline** with speaker diarization
+- **Progress tracking** and error handling
+
+## 🚀 Quick Start
+
+```bash
+# Install dependencies
+npm install
+
+# Configure environment
+cp .env.example .env
+# Edit .env with your Azure credentials
+
+# Start development server
+npm run start:dev
+# Service runs on port 1337
+```
+
+## 📡 API Endpoints
+
+### Process Video File (NEW)
+```bash
+POST /audio/process-video
+Content-Type: application/json
+Authorization: Bearer <jwt-token>
+
+curl -X POST http://localhost:1337/audio/process-video \
+  -H "Authorization: Bearer your-jwt-token" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "videoPath": "user123/memo456/video.mp4",
+    "memoId": "memo456",
+    "userId": "user123",
+    "spaceId": "space789",
+    "recordingLanguages": ["en-US", "de-DE"],
+    "enableDiarization": true
+  }'
+```
+
+**Supported formats:** MP4, MOV, AVI, MKV, WEBM, FLV, WMV, MPEG
+**Required Authentication:** Bearer JWT token
+**Fields:**
+- `videoPath` (required) - Supabase storage path to video file
+- `memoId` (required) - Memo identifier
+- `userId` (required) - User identifier
+- `spaceId` (optional) - Space identifier
+- `recordingLanguages` (optional) - Array of language codes
+- `enableDiarization` (optional) - Enable speaker detection (default: true)
+
+**Response:**
+```json
+{
+  "success": true,
+  "route": "fast",
+  "source": "video",
+  "memoId": "memo456",
+  "message": "Video processed and transcribed successfully via fast route"
+}
+```
+
+### Upload Audio for Batch Transcription
+```bash
+POST /audio/transcribe
+Content-Type: multipart/form-data
+
+curl -X POST http://localhost:1337/audio/transcribe \
+  -F "audio=@your-audio-file.m4a" \
+  -F "userId=user123" \
+  -F "spaceId=space456"
+```
+
+**Supported formats:** MP3, WAV, M4A, AAC, OGG, WebM, FLAC
+**Max file size:** 50MB
+**Fields:**
+- `audio` (required) - Audio file
+- `userId` (optional) - User identifier
+- `spaceId` (optional) - Space identifier
+
+### Convert and Transcribe (with Supabase Integration)
+```bash
+POST /audio/convert-and-transcribe
+Content-Type: multipart/form-data
+Authorization: Bearer <jwt-token>
+
+curl -X POST http://localhost:1337/audio/convert-and-transcribe \
+  -H "Authorization: Bearer your-jwt-token" \
+  -F "audio=@your-audio-file.m4a" \
+  -F "audioPath=user123/memo456/audio.m4a" \
+  -F "memoId=memo456" \
+  -F "recordingLanguages=en-US,es-ES"
+```
+
+**Required Authentication:** Bearer JWT token
+**Fields:**
+- `audio` (required) - Audio file
+- `audioPath` (required) - Supabase storage path
+- `memoId` (required) - Memo identifier
+- `recordingLanguages` (optional) - Comma-separated language codes (if not provided, auto-detects from 10 common languages)
+
+## 📊 Response Examples
+
+### Success Response
+```json
+{
+  "status": "processing",
+  "type": "batch",
+  "jobId": "azure-batch-job-123",
+  "userId": "user123",
+  "spaceId": "space456",
+  "duration": 3600.5,
+  "message": "Batch transcription started. Webhook will notify when complete."
+}
+```
+
+### Error Response
+```json
+{
+  "status": "failed",
+  "message": "Azure Storage credentials not configured",
+  "type": "batch",
+  "jobId": null,
+  "userId": "user123",
+  "spaceId": "space456"
+}
+```
+
+## ⚙️ Configuration
+
+Required environment variables:
+
+```env
+# Azure Configuration
+AZURE_SPEECH_KEY=your-azure-speech-key
+AZURE_SPEECH_REGION=swedencentral
+AZURE_STORAGE_ACCOUNT_NAME=your-storage-account
+AZURE_STORAGE_ACCOUNT_KEY=your-storage-key
+
+# Supabase Configuration
+SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co
+SUPABASE_SERVICE_KEY=your-service-key
+SUPABASE_ANON_KEY=your-anon-key
+
+# Memoro Service Integration
+MEMORO_SERVICE_URL=https://memoro-service-111768794939.europe-west3.run.app
+
+# Server Configuration
+PORT=1337
+```
+
+## 🐳 Docker
+
+```bash
+# Build image
+docker build -t audio-microservice .
+
+# Run container
+docker run -p 1337:1337 --env-file .env audio-microservice
+```
+
+## 🔄 How It Works
+
+### Enhanced Batch Transcription Route (`/audio/transcribe-from-storage`)
+1. **Storage Download** → Download audio file from Supabase Storage
+2. **Duration Analysis** → Calculate audio length using FFmpeg
+3. **Convert** → FFmpeg converts to Azure-compatible WAV (PCM 16-bit LE, 16kHz mono)
+4. **Upload** → Store in Azure Blob Storage with 6-hour SAS token
+5. **Enhanced Batch Job** → Create Azure Speech batch transcription job with:
+   - **Advanced diarization** (up to 10 speakers)
+   - **Smart language identification** with fallback to 10 common languages when auto mode is used
+   - **Word-level timestamps**
+   - **Webhook callback configuration**
+6. **Metadata Storage** → Store jobId in memo metadata for recovery tracking
+7. **Response** → Return job ID and processing status
+
+### Fast Transcription Route (`/audio/convert-and-transcribe-from-storage`)
+1. **Authentication** → Validate Bearer JWT token
+2. **Storage Download** → Download audio from Supabase Storage
+3. **Duration Analysis** → Calculate audio length using FFmpeg
+4. **Convert** → Convert to WAV format if needed
+5. **Supabase Upload** → Store converted audio in Supabase Storage (overwrite original)
+6. **Edge Function** → Call Supabase transcribe function for real-time processing
+7. **Response** → Return transcription results or processing status
+
+### Recovery System
+- **Metadata Tracking** → Each batch job stores jobId in memo metadata using direct memo ID lookup (improved 2025-06-08)
+- **Authentication Fixed** → Proper JWT token handling for metadata storage (fixed 2025-06-08)
+- **Webhook Failure Recovery** → Planned cron job system for stuck transcriptions
+- **Status Monitoring** → Integration with memoro-service for batch job tracking
+
+## 🌍 Language Detection
+
+The service supports intelligent language detection with two modes:
+
+### Specific Language Mode
+When `recordingLanguages` is provided, Azure will attempt to identify the language from the specified list:
+```bash
+# Example: Detect Spanish or English
+-F "recordingLanguages=es-ES,en-US"
+```
+
+### Auto Mode (Smart Fallback)
+When no `recordingLanguages` are provided, the service automatically uses a curated list of 10 common languages:
+- `de-DE` (German)
+- `en-GB` (English - UK)
+- `fr-FR` (French)
+- `it-IT` (Italian)
+- `es-ES` (Spanish)
+- `sv-SE` (Swedish)
+- `ru-RU` (Russian)
+- `nl-NL` (Dutch)
+- `tr-TR` (Turkish)
+- `pt-PT` (Portuguese)
+
+This ensures reliable language detection even when the frontend is in auto mode, improving transcription accuracy across different languages.
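The two detection modes reduce to choosing a candidate-locale list for Azure. A small sketch (the helper name is hypothetical; the fallback list matches the one documented above):

```typescript
// Hypothetical helper mirroring the language selection described above:
// use caller-supplied locales when present, otherwise fall back to the
// curated 10-language auto-mode list.
const AUTO_MODE_LOCALES = [
  'de-DE', 'en-GB', 'fr-FR', 'it-IT', 'es-ES',
  'sv-SE', 'ru-RU', 'nl-NL', 'tr-TR', 'pt-PT',
];

function resolveCandidateLocales(recordingLanguages?: string[]): string[] {
  return recordingLanguages && recordingLanguages.length > 0
    ? recordingLanguages
    : AUTO_MODE_LOCALES;
}

console.log(resolveCandidateLocales(['es-ES', 'en-US'])); // specific mode
console.log(resolveCandidateLocales().length);            // auto mode
```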
+
+## 🔧 Integration Example
+
+```javascript
+// Call from another microservice
+const formData = new FormData();
+formData.append('audio', audioFileBuffer);
+formData.append('userId', 'user123');
+formData.append('spaceId', 'space456');
+
+const response = await fetch('http://localhost:1337/audio/transcribe', {
+  method: 'POST',
+  body: formData
+});
+
+const result = await response.json();
+console.log('Job ID:', result.jobId);
+```
+
+Optimized for long audio files with Azure Batch transcription! 🎵
+
+Example response:
+
+```json
+{"status":"processing","type":"batch","jobId":"287e93a0-3065-487d-9a22-36c3cfb5e1dc","userId":"test-user","duration":2407.119819,"message":"Batch transcription started. Webhook will notify when complete."}
+```
+
+Service URL: https://audio-microservice-111768794939.europe-west3.run.app
\ No newline at end of file
diff --git a/apps/memoro/apps/audio-backend/deploy.sh b/apps/memoro/apps/audio-backend/deploy.sh
new file mode 100755
index 000000000..e65ceca09
--- /dev/null
+++ b/apps/memoro/apps/audio-backend/deploy.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+# Load environment variables from .env.deploy and deploy to Google Cloud Run
+
+# Extract environment variables from .env.deploy (ignoring quotes and comments)
+ENV_VARS=""
+while IFS= read -r line || [[ -n "$line" ]]; do
+  # Skip empty lines and comments
+  if [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]]; then
+    continue
+  fi
+
+  # Extract key:value pairs, removing quotes and extra spaces
+  if [[ "$line" =~ ^[[:space:]]*([^:]+):[[:space:]]*\"?([^\"]*)\"?[[:space:]]*$ ]]; then
+    key="${BASH_REMATCH[1]// /}"
+    value="${BASH_REMATCH[2]}"
+
+    # Add to ENV_VARS string
+    if [[ -n "$ENV_VARS" ]]; then
+      ENV_VARS="$ENV_VARS,$key=$value"
+    else
+      ENV_VARS="$key=$value"
+    fi
+  fi
+done < .env.deploy
+
+# Note: Cloud Run injects PORT itself (it is a reserved variable), so nothing is
+# added here; the container port is configured below via --port.
+
+echo "Deploying with environment variables..."
+# Avoid printing secret values into the build logs; list only the variable names
+echo "ENV_VARS keys: $(echo "$ENV_VARS" | tr ',' '\n' | cut -d= -f1 | tr '\n' ' ')"
+
+# Deploy to Google Cloud Run
+gcloud run deploy audio-microservice \
+  --source . \
+  --platform managed \
+  --region europe-west3 \
+  --allow-unauthenticated \
+  --port 1337 \
+  --memory 2Gi \
+  --cpu 2 \
+  --timeout 900 \
+  --max-instances 10 \
+  --set-env-vars "$ENV_VARS"
\ No newline at end of file
diff --git a/apps/memoro/apps/audio-backend/docs/to dos/api-exposition-roadmap.md b/apps/memoro/apps/audio-backend/docs/to dos/api-exposition-roadmap.md
new file mode 100644
index 000000000..e4b3493c1
--- /dev/null
+++ b/apps/memoro/apps/audio-backend/docs/to dos/api-exposition-roadmap.md
@@ -0,0 +1,459 @@
+# API Exposition Roadmap
+
+## Overview
+
+This plan describes all the steps required to offer the audio middleware service as a professional, public API.
+
+**Status**: The service already IS a REST API, but it still needs production-ready features before public use.
+
+---
+
+## What is still missing for a professional API exposition?
+
+### 🔐 1. Authentication & Authorization
+
+**Current**: Only a simple Bearer token check, without validation
+```typescript
+// Current implementation in audio.controller.ts:44-46
+if (!authHeader || !authHeader.startsWith('Bearer ')) {
+  throw new BadRequestException('Authorization token is required');
+}
+```
+
+**What is missing:**
+- API key management system (generate, rotate, and revoke API keys)
+- JWT token validation (currently the token is only passed through, not validated)
+- OAuth 2.0 / OpenID Connect integration
+- Distinct permission levels (read/write/admin)
+- Service-to-service authentication
+
+**Technologies:**
+- `@nestjs/passport`
+- `@nestjs/jwt`
+- `passport-jwt`
+
+---
+
+### 📚 2.
API Documentation (OpenAPI/Swagger)
+
+**What is missing:**
+- `@nestjs/swagger` integration
+- Automatic API docs at `/api-docs`
+- DTOs with decorators for automatic validation
+- Request/response examples
+- An interactive API playground
+
+**Example implementation:**
+```typescript
+// Currently missing:
+@ApiTags('audio')
+@ApiBearerAuth()
+export class AudioController {
+
+  @ApiOperation({ summary: 'Process video file and transcribe' })
+  @ApiResponse({ status: 200, description: 'Success', type: ProcessVideoResponse })
+  @Post('process-video')
+  async processVideo(@Body() body: ProcessVideoDto) { ... }
+}
+```
+
+**Technologies:**
+- `@nestjs/swagger`
+- `swagger-ui-express`
+
+---
+
+### 🛡️ 3. Rate Limiting & Throttling
+
+**What is missing:**
+- Request limits per API key (e.g. 100 requests/minute)
+- Throttling for resource-intensive endpoints
+- The `@nestjs/throttler` package
+- Different limits for different tier levels (Free/Pro/Enterprise)
+
+**Example:**
+```typescript
+// @nestjs/throttler v5 syntax: 10 requests per 60 seconds
+@Throttle({ default: { limit: 10, ttl: 60000 } })
+@Post('process-video')
+async processVideo() { ... }
+```
+
+**Technologies:**
+- `@nestjs/throttler`
+- Redis for distributed rate limiting
+
+---
+
+### ✅ 4. Input Validation with DTOs
+
+**Current**: Manual validation
+```typescript
+if (!body.audioPath) {
+  throw new BadRequestException('Audio path is required');
+}
+```
+
+**Better: class-validator DTOs:**
+```typescript
+class ProcessVideoDto {
+  @IsString()
+  @IsNotEmpty()
+  videoPath: string;
+
+  @IsString()
+  @IsNotEmpty()
+  memoId: string;
+
+  @IsArray()
+  @IsOptional()
+  recordingLanguages?: string[];
+
+  @IsString()
+  @IsOptional()
+  callbackUrl?: string;
+}
+```
+
+**Technologies:**
+- `class-validator`
+- `class-transformer`
+
+---
+
+### 📊 5. Monitoring, Logging & Analytics
+
+**What is missing:**
+- Structured request/response logging
+- API usage statistics per API key
+- Performance metrics (latency, success rate)
+- Error tracking (e.g.
Sentry integration)
+- A dashboard for API health monitoring
+
+**Features:**
+- Structured JSON logging
+- Request ID tracking across all services
+- Performance metrics (P50, P95, P99 latency)
+- Error rate monitoring
+- API usage analytics
+
+**Technologies:**
+- `winston` or `pino` for logging
+- Sentry for error tracking
+- Prometheus + Grafana for metrics
+- Google Cloud Monitoring
+
+---
+
+### 🔢 6. API Versioning
+
+**What is missing:**
+```typescript
+// Example:
+@Controller('v1/audio') // Version 1
+@Controller('v2/audio') // Version 2 with breaking changes
+```
+
+**Best practices:**
+- URL-based versioning (`/v1/audio`, `/v2/audio`)
+- Sunset headers for deprecated endpoints
+- A migration guide between versions
+
+---
+
+### 💰 7. Quotas & Billing
+
+**What is missing:**
+- Usage limits (minutes of transcription per month)
+- Cost calculation based on usage
+- Billing integration (Stripe, etc.)
+- Quota monitoring and warnings
+- Usage-based pricing models
+
+**Features:**
+- Free tier: 100 minutes/month
+- Pro tier: 1000 minutes/month
+- Enterprise: custom limits
+- Real-time usage monitoring
+
+**Technologies:**
+- Stripe for billing
+- Redis/PostgreSQL for quota tracking
+
+---
+
+### 🔄 8. Webhook Management
+
+**Current**: Webhooks are sent, but:
+- There is no interface for registering/managing webhooks
+- No webhook retry logic with exponential backoff
+- No webhook event log
+- No webhook signature validation
+
+**What is missing:**
+- A webhook registration API
+- A retry mechanism (3 retries with backoff)
+- Webhook event history
+- HMAC signatures for security
+- Webhook testing tools
+
+---
+
+### 📦 9.
SDKs & Client Libraries
+
+**What is missing:**
+- JavaScript/TypeScript SDK
+- Python SDK
+- Java SDK
+- Go SDK
+- Code examples for multiple languages
+
+**Example TypeScript SDK:**
+```typescript
+import { AudioAPI } from '@memo/audio-api';
+
+const client = new AudioAPI({ apiKey: 'your-api-key' });
+
+const result = await client.processVideo({
+  videoPath: 'gs://bucket/video.mp4',
+  memoId: 'memo-123',
+  recordingLanguages: ['de-DE']
+});
+```
+
+---
+
+### 🌐 10. Developer Portal
+
+**What is missing:**
+- Self-service API key generation
+- Interactive API documentation
+- Code examples and tutorials
+- A usage statistics dashboard
+- A support/ticketing system
+- Changelog and release notes
+
+**Features:**
+- User registration and login
+- API key management (create, rotate, delete)
+- A live API testing playground
+- A usage dashboard with charts
+- A billing overview
+
+---
+
+### 🔒 11. Security Headers & CORS
+
+**Current**: `app.enableCors()` (too permissive)
+
+**Better:**
+```typescript
+app.enableCors({
+  origin: process.env.ALLOWED_ORIGINS?.split(','),
+  methods: ['POST', 'GET'],
+  credentials: true,
+  maxAge: 3600
+});
+
+// Helmet.js for security headers
+app.use(helmet({
+  contentSecurityPolicy: true,
+  hsts: true,
+  noSniff: true
+}));
+```
+
+**Additional security:**
+- HTTPS only
+- API key encryption in storage
+- Request signing for sensitive operations
+- IP whitelisting (optional)
+
+**Technologies:**
+- `helmet`
+- `@nestjs/cors`
+
+---
+
+## 📋 Prioritized Implementation Roadmap
+
+### Phase 1: Basic Hardening
+
+**Goal**: A minimally production-ready API
+
+**Tasks:**
+1. ✅ Implement DTOs with class-validator
+   - ProcessVideoDto
+   - TranscribeDto
+   - ConvertAndTranscribeDto
+   - Response DTOs
+
+2. ✅ API key authentication
+   - API key generation
+   - API key validation
+   - Database schema for keys
+
+3. ✅ Rate limiting
+   - @nestjs/throttler setup
+   - Per-endpoint limits
+   - Redis integration for distributed limiting
+
+4.
✅ Swagger documentation
+   - @nestjs/swagger setup
+   - Controller decorators
+   - DTO documentation
+   - API docs at /api-docs
+
+**Estimated effort**: 1-2 weeks
+
+---
+
+### Phase 2: Professional Features
+
+**Goal**: Production-grade monitoring & security
+
+**Tasks:**
+5. ✅ API versioning
+   - v1/audio endpoints
+   - Document the versioning strategy
+
+6. ✅ Structured logging
+   - Winston/Pino integration
+   - Request ID tracking
+   - Structured log formats
+
+7. ✅ Error tracking
+   - Sentry integration
+   - Error categorization
+   - Alert configuration
+
+8. ✅ CORS configuration
+   - Environment-based origin list
+   - Helmet.js integration
+
+9. ✅ Webhook retry logic
+   - Exponential backoff
+   - Retry limits
+   - Event logging
+
+**Estimated effort**: 2-3 weeks
+
+---
+
+### Phase 3: Enterprise Features
+
+**Goal**: A complete API product
+
+**Tasks:**
+10. ✅ Developer portal
+    - Frontend development
+    - User management
+    - API key management UI
+    - Usage dashboard
+
+11. ✅ SDKs
+    - TypeScript SDK
+    - Python SDK
+    - Code generators
+
+12. ✅ Quotas & billing
+    - Quota system
+    - Stripe integration
+    - Usage metering
+
+13. ✅ Webhook management API
+    - Registration
+    - Testing tools
+    - Event history
+
+14. ✅ Performance monitoring
+    - Prometheus metrics
+    - Grafana dashboards
+    - Alerting
+
+**Estimated effort**: 4+ weeks
+
+---
+
+## 🎯 Quick Wins (immediately actionable)
+
+These features can be implemented quickly and deliver immediate value:
+
+1. **Swagger documentation** (1-2 days)
+   - A quick overview for developers
+   - Interactive testing
+
+2. **DTOs with validation** (2-3 days)
+   - Better error messages
+   - Automatic validation
+
+3. **Rate limiting** (1 day)
+   - Protection against abuse
+   - Simple to implement
+
+4.
**Structured logging** (1-2 days)
+   - Better debugging
+   - Production monitoring
+
+---
+
+## 📚 Additional Recommendations
+
+### Performance Optimizations
+- Response caching for frequent requests
+- Database connection pooling
+- A background job queue for long-running processes
+
+### Testing
+- Unit tests for all services
+- Integration tests for API endpoints
+- Load testing for performance validation
+
+### Documentation
+- API reference documentation
+- A getting-started guide
+- Code examples for all endpoints
+- A troubleshooting guide
+
+### Compliance
+- GDPR compliance (audio data)
+- Data deletion policies
+- Audit logs for compliance
+
+---
+
+## 🔧 Required Dependencies (Phase 1)
+
+```json
+{
+  "dependencies": {
+    "@nestjs/swagger": "^7.1.0",
+    "@nestjs/throttler": "^5.0.0",
+    "@nestjs/passport": "^10.0.0",
+    "@nestjs/jwt": "^10.1.0",
+    "class-validator": "^0.14.0",
+    "class-transformer": "^0.5.1",
+    "helmet": "^7.0.0",
+    "passport-jwt": "^4.0.1",
+    "bcrypt": "^5.1.1"
+  },
+  "devDependencies": {
+    "@types/passport-jwt": "^3.0.9",
+    "@types/bcrypt": "^5.0.0"
+  }
+}
+```
+
+---
+
+## 💡 Next Steps
+
+Which aspect should be implemented first?
+
+**Recommendation**: Start with Phase 1, tasks 1-4 (basic hardening)
+
+1. DTOs & validation
+2. Swagger documentation
+3. Rate limiting
+4. API key authentication
+
+This creates a solid foundation for all further features.
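The HMAC webhook signature called for in section 8 can be sketched with Node's built-in `crypto` module (helper names and the header convention are assumptions, not existing code in this service):

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Sender side: sign the raw JSON body with a shared secret and send the hex
// digest in a header such as `X-Webhook-Signature` (header name assumed).
function signWebhookPayload(rawBody: string, secret: string): string {
  return createHmac('sha256', secret).update(rawBody).digest('hex');
}

// Receiver side: recompute the digest and compare in constant time so that
// timing differences do not leak information about the expected signature.
function verifyWebhookSignature(rawBody: string, secret: string, signature: string): boolean {
  const expected = Buffer.from(signWebhookPayload(rawBody, secret), 'hex');
  const received = Buffer.from(signature, 'hex');
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```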
diff --git a/apps/memoro/apps/audio-backend/nest-cli.json b/apps/memoro/apps/audio-backend/nest-cli.json new file mode 100644 index 000000000..68d1974c4 --- /dev/null +++ b/apps/memoro/apps/audio-backend/nest-cli.json @@ -0,0 +1,5 @@ +{ + "$schema": "https://json.schemastore.org/nest-cli", + "collection": "@nestjs/schematics", + "sourceRoot": "src" +} diff --git a/apps/memoro/apps/audio-backend/package.json b/apps/memoro/apps/audio-backend/package.json new file mode 100644 index 000000000..508f89eee --- /dev/null +++ b/apps/memoro/apps/audio-backend/package.json @@ -0,0 +1,37 @@ +{ + "name": "@memoro/audio-backend", + "version": "1.0.0", + "description": "Simple microservice for audio transcription with batch routing", + "main": "dist/main.js", + "scripts": { + "build": "nest build", + "start": "nest start", + "start:dev": "nest start --watch", + "start:prod": "node dist/main" + }, + "dependencies": { + "@azure/storage-blob": "^12.17.0", + "@nestjs/common": "^10.0.0", + "@nestjs/config": "^3.0.0", + "@nestjs/core": "^10.0.0", + "@nestjs/platform-express": "^10.0.0", + "@nestjs/swagger": "^7.4.2", + "@nestjs/throttler": "^5.2.0", + "@supabase/supabase-js": "^2.41.0", + "class-transformer": "^0.5.1", + "class-validator": "^0.14.3", + "fluent-ffmpeg": "^2.1.2", + "helmet": "^8.1.0", + "multer": "^1.4.5-lts.1", + "reflect-metadata": "^0.1.13", + "rxjs": "^7.8.1", + "swagger-ui-express": "^5.0.1" + }, + "devDependencies": { + "@nestjs/cli": "^10.0.0", + "@types/fluent-ffmpeg": "^2.1.21", + "@types/multer": "^1.4.7", + "@types/node": "^20.3.1", + "typescript": "^5.1.3" + } +} diff --git a/apps/memoro/apps/audio-backend/src/app.module.ts b/apps/memoro/apps/audio-backend/src/app.module.ts new file mode 100644 index 000000000..ee48734d4 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/app.module.ts @@ -0,0 +1,43 @@ +import { Module } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { MulterModule } from '@nestjs/platform-express'; +import 
{ ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler'; +import { APP_GUARD } from '@nestjs/core'; +import { AudioController } from './audio.controller'; +import { AudioService } from './audio.service'; + +@Module({ + imports: [ + ConfigModule.forRoot({ isGlobal: true }), + MulterModule.register({ + dest: './uploads', + limits: { fileSize: 500 * 1024 * 1024 }, // 500MB + }), + ThrottlerModule.forRoot([ + { + name: 'short', + ttl: 1000, // 1 second + limit: 3, // 3 requests per second + }, + { + name: 'medium', + ttl: 60000, // 1 minute + limit: 20, // 20 requests per minute + }, + { + name: 'long', + ttl: 3600000, // 1 hour + limit: 100, // 100 requests per hour + }, + ]), + ], + controllers: [AudioController], + providers: [ + AudioService, + { + provide: APP_GUARD, + useClass: ThrottlerGuard, + }, + ], +}) +export class AppModule {} diff --git a/apps/memoro/apps/audio-backend/src/audio.controller.ts b/apps/memoro/apps/audio-backend/src/audio.controller.ts new file mode 100644 index 000000000..72afcaa44 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/audio.controller.ts @@ -0,0 +1,205 @@ +import { + Controller, + Post, + Get, + Body, + Param, + BadRequestException, + Logger, + Headers, +} from '@nestjs/common'; +import { ApiTags, ApiOperation, ApiResponse, ApiBearerAuth, ApiParam } from '@nestjs/swagger'; +import { AudioService } from './audio.service'; +import { + TranscribeRealtimeDto, + TranscribeFromStorageDto, + ProcessVideoDto, + TranscriptionResponseDto, + BatchStatusResponseDto, +} from './dto'; + +@ApiTags('Audio Transcription') +@ApiBearerAuth() +@Controller('audio') +export class AudioController { + private readonly logger = new Logger(AudioController.name); + constructor(private readonly audioService: AudioService) {} + + @Post('transcribe-realtime') + @ApiOperation({ + summary: 'Transcribe audio file in real-time', + description: + 'Process and transcribe audio files using real-time transcription with automatic fallback to batch 
processing for longer files (>115 minutes). Supports speaker diarization and multi-language detection.', + }) + @ApiResponse({ + status: 200, + description: 'Transcription completed successfully', + type: TranscriptionResponseDto, + }) + @ApiResponse({ + status: 400, + description: 'Bad request - invalid input parameters', + }) + async transcribeRealtime( + @Body() body: TranscribeRealtimeDto, + @Headers('authorization') authHeader?: string + ) { + if (!authHeader || !authHeader.startsWith('Bearer ')) { + throw new BadRequestException('Authorization token is required'); + } + + const token = authHeader.replace('Bearer ', ''); + + this.logger.log(`Starting fast transcription: ${body.audioPath} for memo ${body.memoId}`); + + try { + const result = await this.audioService.transcribeRealtimeWithFallback( + body.audioPath, + body.memoId, + body.userId, + body.spaceId, + body.recordingLanguages || [], + token, + body.enableDiarization, + body.isAppend, + body.recordingIndex + ); + + return result; + } catch (error) { + this.logger.error('Error in transcribe-realtime with fallback:', error); + throw new BadRequestException( + `Transcription failed after all fallback attempts: ${error.message}` + ); + } + } + + @Post('transcribe-from-storage') + @ApiOperation({ + summary: 'Transcribe audio from cloud storage', + description: + 'Process audio files directly from cloud storage paths. 
Supports both Google Cloud Storage (gs://) and Supabase storage paths.', + }) + @ApiResponse({ + status: 200, + description: 'Transcription completed successfully', + type: TranscriptionResponseDto, + }) + @ApiResponse({ + status: 400, + description: 'Bad request - invalid input parameters', + }) + async transcribeFromStorage( + @Body() body: TranscribeFromStorageDto, + @Headers('authorization') authHeader?: string + ) { + if (!authHeader || !authHeader.startsWith('Bearer ')) { + throw new BadRequestException('Authorization token is required'); + } + + const token = authHeader.replace('Bearer ', ''); + + this.logger.log(`Processing audio from storage: ${body.audioPath}`); + + try { + // Process audio using storage path + const result = await this.audioService.processAudioFromStorage( + body.audioPath, + body.userId, + body.spaceId, + body.recordingLanguages, + token, + body.memoId, + body.enableDiarization + ); + + return result; + } catch (error) { + this.logger.error('Error in transcribe-from-storage:', error); + throw new BadRequestException(`Transcription failed: ${error.message}`); + } + } + + @Get('batch-status/:jobId') + @ApiOperation({ + summary: 'Check batch transcription job status', + description: + 'Check the status and retrieve results of a batch transcription job. 
Used for long audio files that are processed asynchronously.', + }) + @ApiParam({ + name: 'jobId', + description: 'Batch transcription job ID', + example: 'batch-job-12345', + }) + @ApiResponse({ + status: 200, + description: 'Job status retrieved successfully', + type: BatchStatusResponseDto, + }) + @ApiResponse({ + status: 400, + description: 'Bad request - invalid job ID', + }) + async checkBatchStatus( + @Param('jobId') jobId: string, + @Headers('authorization') authHeader?: string + ) { + if (!jobId) { + throw new BadRequestException('Job ID is required'); + } + + this.logger.log(`Checking batch transcription status for job: ${jobId}`); + + try { + const result = await this.audioService.checkBatchTranscriptionStatus(jobId); + return result; + } catch (error) { + this.logger.error('Error checking batch status:', error); + throw new BadRequestException(`Status check failed: ${error.message}`); + } + } + + @Post('process-video') + @ApiOperation({ + summary: 'Process video file and transcribe audio', + description: + 'Extract audio from video files and transcribe automatically. 
Supports multiple video formats (MP4, MOV, AVI, MKV, WEBM, FLV, WMV) with automatic format detection and conversion.', + }) + @ApiResponse({ + status: 200, + description: 'Video processing and transcription completed successfully', + type: TranscriptionResponseDto, + }) + @ApiResponse({ + status: 400, + description: 'Bad request - invalid input parameters', + }) + async processVideo(@Body() body: ProcessVideoDto, @Headers('authorization') authHeader?: string) { + if (!authHeader || !authHeader.startsWith('Bearer ')) { + throw new BadRequestException('Authorization token is required'); + } + + const token = authHeader.replace('Bearer ', ''); + + this.logger.log(`Processing video file: ${body.videoPath} for memo ${body.memoId}`); + + try { + const result = await this.audioService.processVideoFile( + body.videoPath, + body.memoId, + body.userId, + body.spaceId, + body.recordingLanguages || [], + token, + body.enableDiarization, + body.isAppend, + body.recordingIndex + ); + + return result; + } catch (error) { + this.logger.error('Error processing video:', error); + throw new BadRequestException(`Video processing failed: ${error.message}`); + } + } +} diff --git a/apps/memoro/apps/audio-backend/src/audio.service.ts b/apps/memoro/apps/audio-backend/src/audio.service.ts new file mode 100644 index 000000000..6e4c8d06e --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/audio.service.ts @@ -0,0 +1,2491 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import * as ffmpeg from 'fluent-ffmpeg'; +import * as fs from 'fs'; +import * as path from 'path'; +import * as os from 'os'; + +@Injectable() +export class AudioService { + private readonly logger = new Logger(AudioService.name); + private readonly batchThresholdMinutes = 115; // 1h55m + + constructor(private readonly configService: ConfigService) {} + + /** + * Fast transcription with automatic fallback handling and timeout management + */ + async 
transcribeRealtimeWithFallback(
+    audioPath: string,
+    memoId: string,
+    userId: string,
+    spaceId?: string,
+    recordingLanguages?: string[],
+    token?: string,
+    enableDiarization?: boolean,
+    isAppend?: boolean,
+    recordingIndex?: number
+  ) {
+    // Timeouts for the fallback chain (20 minutes each; not yet environment-configurable)
+    const TOTAL_TIMEOUT = 20 * 60 * 1000;
+    const FAST_TIMEOUT = 20 * 60 * 1000;
+    const startTime = Date.now();
+
+    const checkTimeout = (stage: string) => {
+      const elapsed = Date.now() - startTime;
+      if (elapsed > TOTAL_TIMEOUT) {
+        throw new Error(`Fallback chain timeout exceeded after ${elapsed}ms in stage: ${stage}`);
+      }
+      return TOTAL_TIMEOUT - elapsed; // Return remaining time
+    };
+
+    try {
+      this.logger.log(
+        `[transcribeRealtimeWithFallback] Starting transcription with fallback for ${audioPath}`
+      );
+
+      // Attempt 1: Try fast transcription with timeout
+      try {
+        checkTimeout('initial-fast');
+        return await Promise.race([
+          this.transcribeRealtime(
+            audioPath,
+            memoId,
+            userId,
+            spaceId,
+            recordingLanguages,
+            token,
+            enableDiarization,
+            isAppend,
+            recordingIndex
+          ),
+          new Promise((_, reject) =>
+            setTimeout(() => reject(new Error('Fast route timeout')), FAST_TIMEOUT)
+          ),
+        ]);
+      } catch (fastError) {
+        this.logger.warn(
+          `[transcribeRealtimeWithFallback] Fast route failed: ${fastError.message}`
+        );
+
+        // Check if this is a rate limit error (429) that should retry with different service
+        if (this.shouldRetryWithDifferentService(fastError)) {
+          const remainingTime = checkTimeout('service-retry');
+          this.logger.log(
+            `[transcribeRealtimeWithFallback] Attempting service retry for rate limit error (${remainingTime}ms remaining)`
+          );
+
+          // Attempt 2: Try with different Azure service
+          try {
+            const serviceRetryTimeout = Math.min(FAST_TIMEOUT, remainingTime - 5000); // Leave 5s buffer
+            return await Promise.race([
+              this.transcribeRealtimeWithServiceRetry(
+                audioPath,
+                memoId,
+                userId,
+                spaceId,
+
recordingLanguages, + token, + enableDiarization, + isAppend, + recordingIndex + ), + new Promise((_, reject) => + setTimeout(() => reject(new Error('Service retry timeout')), serviceRetryTimeout) + ), + ]); + } catch (serviceRetryError) { + this.logger.warn( + `[transcribeRealtimeWithFallback] Service retry failed: ${serviceRetryError.message}` + ); + // Continue to next fallback option + } + } + + // Check if this is a 422 error or format-related issue that could be resolved by conversion + if (this.shouldRetryWithConversion(fastError)) { + const remainingTime = checkTimeout('conversion-retry'); + this.logger.log( + `[transcribeRealtimeWithFallback] Attempting conversion retry for format-related error (${remainingTime}ms remaining)` + ); + + // Attempt 3: Try with additional audio conversion/preprocessing + try { + const conversionTimeout = Math.min(FAST_TIMEOUT, remainingTime - 5000); // Leave 5s buffer + return await Promise.race([ + this.transcribeRealtimeWithConversion( + audioPath, + memoId, + userId, + spaceId, + recordingLanguages, + token, + enableDiarization, + isAppend, + recordingIndex + ), + new Promise((_, reject) => + setTimeout(() => reject(new Error('Conversion retry timeout')), conversionTimeout) + ), + ]); + } catch (conversionError) { + this.logger.warn( + `[transcribeRealtimeWithFallback] Conversion retry failed: ${conversionError.message}` + ); + + // Attempt 4: Fallback to batch processing + checkTimeout('batch-fallback'); + this.logger.log(`[transcribeRealtimeWithFallback] Falling back to batch processing`); + return await this.fallbackToBatchProcessing( + audioPath, + memoId, + userId, + spaceId, + recordingLanguages, + token, + enableDiarization, + isAppend, + recordingIndex + ); + } + } else { + // For non-format, non-rate-limit errors, go directly to batch fallback + checkTimeout('direct-batch-fallback'); + this.logger.log( + `[transcribeRealtimeWithFallback] Non-format error, falling back to batch processing` + ); + return await 
this.fallbackToBatchProcessing( + audioPath, + memoId, + userId, + spaceId, + recordingLanguages, + token, + enableDiarization, + isAppend, + recordingIndex + ); + } + } + } catch (error) { + this.logger.error( + `[transcribeRealtimeWithFallback] All fallback attempts failed after ${Date.now() - startTime}ms:`, + error + ); + + // Determine which stage failed for better error reporting. + // Match the messages actually thrown above, and check specific stages + // before the generic timeout case so they are not shadowed by it. + let fallbackStage = 'unknown'; + const failureMessage = error.message || ''; + if (failureMessage.includes('Service retry')) { + fallbackStage = 'service-retry'; + } else if (failureMessage.includes('Conversion')) { + fallbackStage = 'conversion-retry'; + } else if (failureMessage.toLowerCase().includes('batch')) { + fallbackStage = 'batch-fallback'; + } else if (failureMessage.toLowerCase().includes('timeout')) { + fallbackStage = 'timeout'; + } else { + fallbackStage = 'initial-fast'; + } + + // Notify memoro service of final failure with enhanced context + try { + await this.notifyTranscriptionErrorWithContext( + memoId, + userId, + error.message, + 'fast', + fallbackStage, + token + ); + } catch (notifyError) { + this.logger.error(`Failed to notify transcription error:`, notifyError); + } + + throw error; + } + } + + /** + * Fast transcription using Azure Speech API for files <115 minutes and <300MB + */ + async transcribeRealtime( + audioPath: string, + memoId: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + enableDiarization?: boolean, + isAppend?: boolean, + recordingIndex?: number + ) { + try { + this.logger.log(`[transcribeRealtime] Starting fast transcription for ${audioPath}`); + + // Download audio from storage + const audioBuffer = await this.downloadFromStorage(audioPath, token); + this.logger.log(`Downloaded audio: ${audioBuffer.length} bytes`); + + // Convert to Azure-compatible format + const convertedAudio = await this.convertAudioForAzure(audioBuffer, audioPath); + this.logger.log(`Converted audio: ${convertedAudio.length} bytes`); + + // Perform real-time transcription using Azure 
Speech API + const transcriptionResult = await this.performRealtimeTranscription( + convertedAudio, + recordingLanguages, + enableDiarization + ); + + // Send appropriate callback based on operation type + if (isAppend) { + // Send append-specific callback + await this.notifyAppendTranscriptionComplete( + memoId, + userId, + transcriptionResult, + 'fast', + token, + recordingIndex + ); + this.logger.log(`[transcribeRealtime] Sent append callback for memo ${memoId}`); + } else { + // Send regular transcription callback + await this.notifyTranscriptionComplete(memoId, userId, transcriptionResult, 'fast', token); + } + + return { + success: true, + route: 'fast', + memoId, + message: 'Fast transcription completed successfully', + }; + } catch (error) { + this.logger.error(`[transcribeRealtime] Error:`, error); + throw error; // Don't notify here - let the fallback handler manage notifications + } + } + + /** + * Get random Azure Speech Service configuration for load balancing + */ + private getRandomSpeechService() { + const speechServices = [ + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE2'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe2', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE3'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe3', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE4'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 
'prod-memoro-transcribe-swe4', + }, + ]; + + // Filter out services without keys and fallback to original config + const validServices = speechServices.filter((service) => service.key); + + if (validServices.length === 0) { + // Fallback to original single service configuration + const azureKey = this.configService.get('AZURE_SPEECH_KEY'); + const azureRegion = this.configService.get('AZURE_SPEECH_REGION'); + + if (!azureKey || !azureRegion) { + throw new Error('No Azure Speech credentials configured'); + } + + return { + key: azureKey, + endpoint: `https://${azureRegion}.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe`, + region: azureRegion, + name: 'fallback-service', + }; + } + + // Random selection for load balancing + const randomIndex = Math.floor(Math.random() * validServices.length); + const selectedService = validServices[randomIndex]; + + this.logger.log( + `[getRandomSpeechService] Selected service: ${selectedService.name} (${randomIndex + 1}/${validServices.length})` + ); + + return selectedService; + } + + /** + * Performs real-time transcription using Azure Speech API with load balancing + */ + private async performRealtimeTranscription( + audioBuffer: Buffer, + recordingLanguages?: string[], + enableDiarization?: boolean + ) { + const speechService = this.getRandomSpeechService(); + + // FIXED: Correct Azure Fast Transcription API diarization configuration + const definition: any = { + wordLevelTimestampsEnabled: true, + punctuationMode: 'Automatic', + profanityFilterMode: 'None', + }; + + // Conditionally add diarization based on user preference (default: enabled) + if (enableDiarization !== false) { + definition.diarization = { + enabled: true, + maxSpeakers: 10, // Correct format: maxSpeakers instead of speakers.maxCount + }; + } + + // Language identification setup + const CANDIDATE_LOCALES = [ + 'de-DE', + 'en-GB', + 'fr-FR', + 'it-IT', + 'es-ES', + 'sv-SE', + 'ru-RU', + 'nl-NL', + 'tr-TR', + 'pt-PT', + ]; + + if 
(recordingLanguages && recordingLanguages.length > 0) { + this.logger.log(`Using provided languages: ${recordingLanguages.join(', ')}`); + definition['languageIdentification'] = { + candidateLocales: recordingLanguages, + }; + } else { + this.logger.log(`Using default candidate locales: ${CANDIDATE_LOCALES.join(', ')}`); + definition['languageIdentification'] = { + candidateLocales: CANDIDATE_LOCALES, + }; + } + + // Prepare form data + const formData = new FormData(); + + // DEBUG: Log the exact definition being sent to Azure + this.logger.log( + `[Azure Request] DEBUG - Definition being sent: ${JSON.stringify(definition, null, 2)}` + ); + + formData.append('definition', JSON.stringify(definition)); + + // Create blob from buffer + const audioBlob = new Blob([audioBuffer], { type: 'audio/wav' }); + formData.append('audio', audioBlob, 'audio.wav'); + + this.logger.log(`Sending to Azure Speech API (${speechService.name})...`); + + const response = await fetch(`${speechService.endpoint}?api-version=2024-11-15`, { + method: 'POST', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + Accept: 'application/json', + }, + body: formData, + }); + + if (!response.ok) { + const errorText = await response.text(); + + // Log comprehensive error details for 429 analysis with special tags + if (response.status === 429) { + // Special tagged log for easy filtering: [AZURE_429_ERROR] + this.logger.error( + `[AZURE_429_ERROR] Azure Speech API Rate Limited - Service: ${speechService.name}` + ); + this.logger.error(`[AZURE_429_ERROR] Status: ${response.status}`); + this.logger.error(`[AZURE_429_ERROR] Response body: ${errorText}`); + this.logger.error( + `[AZURE_429_ERROR] Retry-After: ${response.headers.get('retry-after') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_ERROR] x-ms-service-quota-reason: ${response.headers.get('x-ms-service-quota-reason') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_ERROR] x-ms-request-id: 
${response.headers.get('x-ms-request-id') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_ERROR] x-ms-retry-after-ms: ${response.headers.get('x-ms-retry-after-ms') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_ERROR] x-ms-error-code: ${response.headers.get('x-ms-error-code') || 'not provided'}` + ); + + // Log all headers for comprehensive analysis + const allHeaders = {}; + response.headers.forEach((value, key) => { + allHeaders[key] = value; + }); + this.logger.error( + `[AZURE_429_ERROR] All response headers: ${JSON.stringify(allHeaders, null, 2)}` + ); + } else if (response.status === 422) { + // Special tagged log for format errors: [AZURE_422_ERROR] + this.logger.error( + `[AZURE_422_ERROR] Azure Speech API Format Error - Service: ${speechService.name}` + ); + this.logger.error(`[AZURE_422_ERROR] Status: ${response.status}`); + this.logger.error(`[AZURE_422_ERROR] Response body: ${errorText}`); + this.logger.error( + `[AZURE_422_ERROR] x-ms-request-id: ${response.headers.get('x-ms-request-id') || 'not provided'}` + ); + } else { + this.logger.error(`Azure API error: ${response.status} - ${errorText}`); + } + + throw new Error(`Azure Speech API error: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + this.logger.log(`Azure transcription result received`); + + // DEBUG: Log what Azure is actually returning for diarization analysis + this.logger.log(`[Azure Response] DEBUG - Full result keys: ${Object.keys(result).join(', ')}`); + this.logger.log( + `[Azure Response] DEBUG - Has phrases: ${!!result.phrases} (count: ${result.phrases?.length || 0})` + ); + + if (result.phrases && result.phrases.length > 0) { + const firstPhrase = result.phrases[0]; + this.logger.log( + `[Azure Response] DEBUG - First phrase keys: ${Object.keys(firstPhrase).join(', ')}` + ); + this.logger.log( + `[Azure Response] DEBUG - First phrase has speaker: ${firstPhrase.speaker !== undefined} (value: ${firstPhrase.speaker})` + ); + 
this.logger.log( + `[Azure Response] DEBUG - First phrase sample: ${JSON.stringify(firstPhrase, null, 2)}` + ); + + // Count how many phrases have speaker info + const phrasesWithSpeakers = result.phrases.filter((p) => p.speaker !== undefined); + this.logger.log( + `[Azure Response] DEBUG - Phrases with speaker data: ${phrasesWithSpeakers.length}/${result.phrases.length}` + ); + } else { + this.logger.warn(`[Azure Response] DEBUG - No phrases found in result!`); + } + + // Process the result to match existing data structure + return this.processTranscriptionResult(result); + } + + /** + * Performs real-time transcription with retry logic (selects different service) + */ + private async performRealtimeTranscriptionWithRetry( + audioBuffer: Buffer, + recordingLanguages?: string[], + enableDiarization?: boolean, + excludeServices: string[] = [] + ) { + // Get all available services and exclude the ones that already failed + const allServices = [ + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE2'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe2', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE3'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe3', + }, + { + key: this.configService.get('PROD_MEMORO_TRANSCRIBE_SWE4'), + endpoint: + 'https://swedencentral.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe', + region: 'swedencentral', + name: 'prod-memoro-transcribe-swe4', + }, + ]; + + // Filter out excluded services and services without keys + 
const availableServices = allServices.filter( + (service) => service.key && !excludeServices.includes(service.name) + ); + + if (availableServices.length === 0) { + throw new Error('No available Azure Speech services for retry'); + } + + // Pick a random service from available ones + const randomIndex = Math.floor(Math.random() * availableServices.length); + const speechService = availableServices[randomIndex]; + + this.logger.log( + `[performRealtimeTranscriptionWithRetry] Selected service: ${speechService.name} (${randomIndex + 1}/${availableServices.length} available)` + ); + + // Enhanced configuration with speaker diarization (up to 10 speakers) + const definition: any = { + wordLevelTimestampsEnabled: true, + punctuationMode: 'Automatic', + profanityFilterMode: 'None', + }; + + // Conditionally add diarization based on user preference (default: enabled) + if (enableDiarization !== false) { + definition.diarization = { + enabled: true, + maxSpeakers: 10, + }; + } + + // Language identification setup + const CANDIDATE_LOCALES = [ + 'de-DE', + 'en-GB', + 'fr-FR', + 'it-IT', + 'es-ES', + 'sv-SE', + 'ru-RU', + 'nl-NL', + 'tr-TR', + 'pt-PT', + ]; + + if (recordingLanguages && recordingLanguages.length > 0) { + this.logger.log(`Using provided languages: ${recordingLanguages.join(', ')}`); + definition['languageIdentification'] = { + candidateLocales: recordingLanguages, + }; + } else { + this.logger.log(`Using default candidate locales: ${CANDIDATE_LOCALES.join(', ')}`); + definition['languageIdentification'] = { + candidateLocales: CANDIDATE_LOCALES, + }; + } + + // Prepare form data + const formData = new FormData(); + formData.append('definition', JSON.stringify(definition)); + + // Create blob from buffer + const audioBlob = new Blob([audioBuffer], { type: 'audio/wav' }); + formData.append('audio', audioBlob, 'audio.wav'); + + this.logger.log(`Sending to Azure Speech API retry (${speechService.name})...`); + + const response = await 
fetch(`${speechService.endpoint}?api-version=2024-11-15`, { + method: 'POST', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + Accept: 'application/json', + }, + body: formData, + }); + + if (!response.ok) { + const errorText = await response.text(); + + // Log comprehensive error details for 429 analysis with special tags + if (response.status === 429) { + // Special tagged log for easy filtering: [AZURE_429_RETRY_ERROR] + this.logger.error( + `[AZURE_429_RETRY_ERROR] Azure Speech API Rate Limited on Retry - Service: ${speechService.name}` + ); + this.logger.error(`[AZURE_429_RETRY_ERROR] Status: ${response.status}`); + this.logger.error(`[AZURE_429_RETRY_ERROR] Response body: ${errorText}`); + this.logger.error( + `[AZURE_429_RETRY_ERROR] Retry-After: ${response.headers.get('retry-after') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_RETRY_ERROR] x-ms-service-quota-reason: ${response.headers.get('x-ms-service-quota-reason') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_RETRY_ERROR] x-ms-request-id: ${response.headers.get('x-ms-request-id') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_RETRY_ERROR] x-ms-retry-after-ms: ${response.headers.get('x-ms-retry-after-ms') || 'not provided'}` + ); + this.logger.error( + `[AZURE_429_RETRY_ERROR] x-ms-error-code: ${response.headers.get('x-ms-error-code') || 'not provided'}` + ); + + // Log all headers for comprehensive analysis + const allHeaders = {}; + response.headers.forEach((value, key) => { + allHeaders[key] = value; + }); + this.logger.error( + `[AZURE_429_RETRY_ERROR] All response headers: ${JSON.stringify(allHeaders, null, 2)}` + ); + } else if (response.status === 422) { + // Special tagged log for format errors: [AZURE_422_RETRY_ERROR] + this.logger.error( + `[AZURE_422_RETRY_ERROR] Azure Speech API Format Error on Retry - Service: ${speechService.name}` + ); + this.logger.error(`[AZURE_422_RETRY_ERROR] Status: ${response.status}`); + 
this.logger.error(`[AZURE_422_RETRY_ERROR] Response body: ${errorText}`); + this.logger.error( + `[AZURE_422_RETRY_ERROR] x-ms-request-id: ${response.headers.get('x-ms-request-id') || 'not provided'}` + ); + } else { + this.logger.error(`Azure API retry error: ${response.status} - ${errorText}`); + } + + throw new Error(`Azure Speech API retry error: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + this.logger.log(`Azure transcription retry result received`); + + // Process the result to match existing data structure + return this.processTranscriptionResult(result); + } + + /** + * Determines if error should trigger conversion retry + */ + private shouldRetryWithConversion(error: any): boolean { + // Direct status code check + const statusCode = + error.status || error.response?.status || this.extractStatusFromMessage(error.message); + const is422Error = statusCode === 422; + + // More specific format error patterns + const formatErrorPatterns = [ + /unsupported.*format/i, + /invalid.*audio/i, + /codec.*not.*supported/i, + /content.*type.*unsupported/i, + /bitrate.*not.*supported/i, + /sample.*rate.*invalid/i, + /audio.*format.*error/i, + /media.*type.*not.*supported/i, + ]; + + const errorText = error.message || error.toString() || ''; + const isFormatError = formatErrorPatterns.some((pattern) => pattern.test(errorText)); + + const shouldRetry = is422Error || isFormatError; + this.logger.log( + `[shouldRetryWithConversion] Error analysis: status=${statusCode}, 422=${is422Error}, format=${isFormatError}, shouldRetry=${shouldRetry}` + ); + + return shouldRetry; + } + + /** + * Determines if error should trigger service retry (429, 503, etc.) 
+ */ + private shouldRetryWithDifferentService(error: any): boolean { + const statusCode = + error.status || error.response?.status || this.extractStatusFromMessage(error.message); + const retryableStatuses = [429, 503, 502, 500]; // Rate limit, service unavailable, bad gateway, internal error + + const shouldRetry = retryableStatuses.includes(statusCode); + this.logger.log( + `[shouldRetryWithDifferentService] Error analysis: status=${statusCode}, shouldRetry=${shouldRetry}` + ); + + return shouldRetry; + } + + /** + * Extract status code from error message like "Azure Speech API error: 429 - ..." + */ + private extractStatusFromMessage(message: string): number | undefined { + if (!message) return undefined; + + const statusMatch = message.match(/error:\s*(\d{3})/i); + return statusMatch ? parseInt(statusMatch[1], 10) : undefined; + } + + /** + * Attempts transcription with service retry (different Azure endpoint) + */ + private async transcribeRealtimeWithServiceRetry( + audioPath: string, + memoId: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + enableDiarization?: boolean, + isAppend?: boolean, + recordingIndex?: number + ) { + this.logger.log( + `[transcribeRealtimeWithServiceRetry] Attempting service retry for ${audioPath}` + ); + + // Download audio from storage + const audioBuffer = await this.downloadFromStorage(audioPath, token); + this.logger.log(`Downloaded audio for service retry: ${audioBuffer.length} bytes`); + + // Convert to Azure-compatible format + const convertedAudio = await this.convertAudioForAzure(audioBuffer, audioPath); + this.logger.log(`Converted audio for service retry: ${convertedAudio.length} bytes`); + + // Perform real-time transcription using a different Azure Speech API service + const transcriptionResult = await this.performRealtimeTranscriptionWithRetry( + convertedAudio, + recordingLanguages, + enableDiarization + ); + + // Send appropriate callback based on operation type + if 
(isAppend) { + await this.notifyAppendTranscriptionComplete( + memoId, + userId, + transcriptionResult, + 'fast', + token, + recordingIndex + ); + } else { + await this.notifyTranscriptionComplete(memoId, userId, transcriptionResult, 'fast', token); + } + + return { + success: true, + route: 'fast-service-retry', + memoId, + message: 'Fast transcription completed after service retry', + }; + } + + /** + * Attempts transcription with enhanced conversion preprocessing + */ + private async transcribeRealtimeWithConversion( + audioPath: string, + memoId: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + enableDiarization?: boolean, + isAppend?: boolean, + recordingIndex?: number + ) { + this.logger.log( + `[transcribeRealtimeWithConversion] Attempting enhanced conversion for ${audioPath}` + ); + + // Download audio from storage + const audioBuffer = await this.downloadFromStorage(audioPath, token); + this.logger.log(`Downloaded audio for conversion retry: ${audioBuffer.length} bytes`); + + // Apply enhanced conversion with multiple format attempts + const convertedAudio = await this.enhancedAudioConversion(audioBuffer, audioPath); + this.logger.log(`Enhanced conversion completed: ${convertedAudio.length} bytes`); + + // Perform real-time transcription using Azure Speech API + const transcriptionResult = await this.performRealtimeTranscription( + convertedAudio, + recordingLanguages, + enableDiarization + ); + + // Send appropriate callback based on operation type + if (isAppend) { + await this.notifyAppendTranscriptionComplete( + memoId, + userId, + transcriptionResult, + 'fast', + token, + recordingIndex + ); + } else { + await this.notifyTranscriptionComplete(memoId, userId, transcriptionResult, 'fast', token); + } + + return { + success: true, + route: 'fast-conversion-retry', + memoId, + message: 'Fast transcription completed after conversion retry', + }; + } + + /** + * Enhanced audio conversion with proper resource 
management and timeout + */ + private async enhancedAudioConversion(audioBuffer: Buffer, audioPath?: string): Promise<Buffer> { + this.logger.log('[enhancedAudioConversion] Attempting enhanced conversion'); + + const tempDir = os.tmpdir(); + + // Extract the actual file extension from audioPath + const fileExt = audioPath ? path.extname(audioPath) : '.m4a'; // fallback to .m4a + const inputFile = path.join(tempDir, `input_enhanced_${Date.now()}${fileExt}`); + const outputFile = path.join(tempDir, `output_enhanced_${Date.now()}.wav`); + + // Map common extensions to ffmpeg format names + const formatMap: Record<string, string> = { + '.m4a': 'mp4', + '.mp4': 'mp4', + '.mp3': 'mp3', + '.wav': 'wav', + '.aac': 'aac', + '.ogg': 'ogg', + '.webm': 'webm', + '.flac': 'flac', + }; + + let inputFormat = formatMap[fileExt.toLowerCase()]; + + const cleanup = async () => { + try { + await Promise.all([ + fs.promises.unlink(inputFile).catch(() => {}), + fs.promises.unlink(outputFile).catch(() => {}), + ]); + } catch (error) { + this.logger.warn('Cleanup warning:', error); + } + }; + + try { + // Use async file operations + await fs.promises.writeFile(inputFile, audioBuffer); + + // Probe the file to detect actual format (fixes extension/content mismatch issues) + const probeResult = await this.probeAudioFile(inputFile); + if (probeResult.valid && probeResult.format) { + const probedFormat = probeResult.format.split(',')[0].trim(); + const probeFormatMap: Record<string, string> = { + mp3: 'mp3', + mov: 'mp4', + mp4: 'mp4', + m4a: 'mp4', + wav: 'wav', + aac: 'aac', + ogg: 'ogg', + webm: 'webm', + flac: 'flac', + matroska: 'matroska', + }; + + if (probeFormatMap[probedFormat]) { + const detectedFormat = probeFormatMap[probedFormat]; + if (detectedFormat !== inputFormat) { + this.logger.warn( + `[enhancedAudioConversion] Format mismatch: extension suggests ${inputFormat}, content is ${detectedFormat}. 
Using detected format.` + ); + inputFormat = detectedFormat; + } + } + this.logger.log( + `[enhancedAudioConversion] Probed format: ${probeResult.format}, codec: ${probeResult.codec}` + ); + } + + return new Promise((resolve, reject) => { + const command = ffmpeg(inputFile) + .audioCodec('pcm_s16le') // PCM 16-bit little-endian + .audioFrequency(16000) // 16kHz sample rate (Azure's preferred) + .audioChannels(1) // Mono + .format('wav') // WAV format + .inputOptions([ + '-err_detect', + 'ignore_err', // Ignore unsupported metadata boxes (e.g., chnl v1) + '-fflags', + '+genpts', // Generate presentation timestamps + ]) + .audioFilters([ + 'highpass=f=80', // Remove very low frequencies + 'lowpass=f=8000', // Remove frequencies above 8kHz + 'volume=1.5', // Slight volume boost + ]) + .outputOptions(['-y']); // Force overwrite existing files + + // Use the actual detected format instead of file extension + if (inputFormat) { + command.inputFormat(inputFormat); + this.logger.log( + `[enhancedAudioConversion] Using input format: ${inputFormat} for file: ${fileExt}` + ); + } else { + this.logger.warn( + `[enhancedAudioConversion] Unknown format ${fileExt}, letting ffmpeg auto-detect` + ); + } + + command + .on('end', async () => { + try { + const converted = await fs.promises.readFile(outputFile); + await cleanup(); + this.logger.log(`✅ Enhanced audio conversion completed from ${fileExt} to WAV`); + resolve(converted); + } catch (error) { + await cleanup(); + reject(error); + } + }) + .on('error', async (err) => { + await cleanup(); + this.logger.error(`❌ Enhanced conversion error for ${fileExt}:`, err); + reject(err); + }) + .save(outputFile); + }); + } catch (error) { + await cleanup(); + throw error; + } + } + + /** + * Fallback to batch processing when fast routes fail + */ + private async fallbackToBatchProcessing( + audioPath: string, + memoId: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + enableDiarization?: 
boolean, + isAppend?: boolean, + recordingIndex?: number + ) { + this.logger.log(`[fallbackToBatchProcessing] Starting batch fallback for ${audioPath}`); + + try { + // Use existing batch processing logic + const result = await this.processAudioFromStorage( + audioPath, + userId, + spaceId, + recordingLanguages, + token, + memoId, + enableDiarization + ); + + // Notify memoro service that we've fallen back to batch + // Note: The batch webhook will handle the final callback when processing completes + this.logger.log( + `[fallbackToBatchProcessing] Successfully initiated batch processing: ${result.jobId}` + ); + + return { + success: true, + route: 'batch-fallback', + memoId, + jobId: result.jobId, + message: 'Fell back to batch processing after fast route failures', + }; + } catch (batchError) { + this.logger.error(`[fallbackToBatchProcessing] Batch fallback also failed:`, batchError); + throw new Error(`All transcription methods failed. Last error: ${batchError.message}`); + } + } + + /** + * Process Azure transcription result to match existing data structure + */ + private processTranscriptionResult(azureResult: any) { + let text = ''; + let primaryAudioLanguage = null; + let allDetectedPhraseLanguages = ['de-DE']; // Fallback + + // Extract language information - ALWAYS use phrase-level analysis for accuracy + // Azure's top-level locale can be incorrect, so we count phrases by language + if (azureResult.phrases && Array.isArray(azureResult.phrases)) { + const languageCounts = {}; + const languageTextCounts = {}; // Count characters per language for more accuracy + + for (const phrase of azureResult.phrases) { + if (phrase.locale && typeof phrase.locale === 'string') { + // Count phrases + languageCounts[phrase.locale] = (languageCounts[phrase.locale] || 0) + 1; + + // Count characters for weighted analysis + const textLength = phrase.text ? 
phrase.text.length : 0; + languageTextCounts[phrase.locale] = (languageTextCounts[phrase.locale] || 0) + textLength; + } + } + + const uniqueLanguages = Object.keys(languageCounts); + if (uniqueLanguages.length > 0) { + // Find most frequent language by character count (more accurate than phrase count) + let mostFrequent = uniqueLanguages[0]; + let maxCharCount = languageTextCounts[mostFrequent] || 0; + + for (const locale of uniqueLanguages) { + const charCount = languageTextCounts[locale] || 0; + if (charCount > maxCharCount) { + mostFrequent = locale; + maxCharCount = charCount; + } + } + + primaryAudioLanguage = mostFrequent; + allDetectedPhraseLanguages = uniqueLanguages; + + // Debug logging for language detection + this.logger.log(`[Language Detection] Phrase counts: ${JSON.stringify(languageCounts)}`); + this.logger.log( + `[Language Detection] Character counts: ${JSON.stringify(languageTextCounts)}` + ); + this.logger.log( + `[Language Detection] Primary language: ${primaryAudioLanguage} (${maxCharCount} chars)` + ); + } + } else if (azureResult.locale && typeof azureResult.locale === 'string') { + // Fallback to top-level locale only if no phrases available + primaryAudioLanguage = azureResult.locale; + allDetectedPhraseLanguages = [azureResult.locale]; + this.logger.log( + `[Language Detection] Using top-level locale fallback: ${primaryAudioLanguage}` + ); + } + + // Extract transcript text + if (azureResult.combinedPhrases && Array.isArray(azureResult.combinedPhrases)) { + text = azureResult.combinedPhrases[0]?.text || ''; + } else if (azureResult.phrases && Array.isArray(azureResult.phrases)) { + text = azureResult.phrases.map((phrase: { text?: string }) => phrase.text || '').join(' '); + } + + // Process speaker information (enhanced diarization) + const utterances = []; + const speakerMap = {}; + const speakers = {}; + + if (azureResult.phrases) { + azureResult.phrases.forEach( + (segment: { + speaker?: number; + text?: string; + offsetMilliseconds?: 
number; + durationMilliseconds?: number; + }) => { + if (segment.speaker !== undefined && segment.text) { + const speakerId = `speaker${segment.speaker}`; + + utterances.push({ + speakerId, + text: segment.text, + offset: segment.offsetMilliseconds, + duration: segment.durationMilliseconds, + }); + + if (!speakerMap[speakerId]) speakerMap[speakerId] = []; + speakerMap[speakerId].push({ + text: segment.text, + offset: segment.offsetMilliseconds, + duration: segment.durationMilliseconds, + }); + } + } + ); + } + + // Sort utterances by time + utterances.sort((a, b) => a.offset - b.offset); + + // Create speaker labels + new Set(utterances.map((u) => u.speakerId)).forEach((id) => { + speakers[id] = `Speaker ${id.replace('speaker', '')}`; + }); + + const speakerCount = Object.keys(speakers).length; + + // Enhanced diarization logging for debugging + this.logger.log( + `[processTranscriptionResult] Transcription processed: ${text.length} chars, ${speakerCount} speakers, language: ${primaryAudioLanguage}` + ); + this.logger.log(`[processTranscriptionResult] Utterances count: ${utterances.length}`); + this.logger.log( + `[processTranscriptionResult] Has speaker data: ${Object.keys(speakers).length > 0}` + ); + this.logger.log( + `[processTranscriptionResult] Has speakerMap data: ${Object.keys(speakerMap).length > 0}` + ); + + if (utterances.length > 0) { + this.logger.log( + `[processTranscriptionResult] First utterance sample: ${JSON.stringify(utterances[0])}` + ); + } + + if (Object.keys(speakers).length > 0) { + this.logger.log(`[processTranscriptionResult] Speaker labels: ${JSON.stringify(speakers)}`); + } + + return { + text, + primary_language: primaryAudioLanguage, + languages: allDetectedPhraseLanguages, + utterances: utterances.length > 0 ? utterances : null, + speakers: Object.keys(speakers).length > 0 ? speakers : null, + speakerMap: Object.keys(speakerMap).length > 0 ? 
speakerMap : null, + }; + } + + /** + * Notify memoro service of successful append transcription + */ + private async notifyAppendTranscriptionComplete( + memoId: string, + userId: string, + transcriptionResult: any, + route: 'fast' | 'batch', + token?: string, + recordingIndex?: number + ) { + const memoroServiceUrl = this.configService.get('MEMORO_SERVICE_URL'); + + if (!memoroServiceUrl) { + this.logger.error('CRITICAL: MEMORO_SERVICE_URL is not configured'); + throw new Error('Missing required configuration: MEMORO_SERVICE_URL'); + } + + try { + this.logger.log( + `[notifyAppendTranscriptionComplete] Sending append callback for memo ${memoId}, recordingIndex: ${recordingIndex}` + ); + + // Use service role key for service-to-service authentication + const serviceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (!serviceKey) { + this.logger.error( + 'CRITICAL: MEMORO_SUPABASE_SERVICE_KEY is not configured for service-to-service communication' + ); + throw new Error('Missing required configuration: MEMORO_SUPABASE_SERVICE_KEY'); + } + + const response = await fetch( + `${memoroServiceUrl}/memoro/service/append-transcription-completed`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${serviceKey}`, + }, + body: JSON.stringify({ + memoId, + userId, + transcriptionResult, + route, + success: true, + recordingIndex, + }), + } + ); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Memoro service error: ${response.status} - ${errorText}`); + } + + this.logger.log( + `Successfully notified memoro service of append completion for memo ${memoId}` + ); + } catch (error) { + this.logger.error('Error notifying memoro service of append transcription:', error); + throw error; + } + } + + /** + * Notify memoro service of successful transcription + */ + private async notifyTranscriptionComplete( + memoId: string, + userId: string, + transcriptionResult: any, + route: 'fast' 
| 'batch', + token?: string + ) { + const memoroServiceUrl = this.configService.get('MEMORO_SERVICE_URL'); + + if (!memoroServiceUrl) { + this.logger.error('CRITICAL: MEMORO_SERVICE_URL is not configured'); + throw new Error('Missing required configuration: MEMORO_SERVICE_URL'); + } + + try { + // DEBUG: Log what we're sending to memoro service + this.logger.log(`[notifyTranscriptionComplete] Sending callback for memo ${memoId}`); + this.logger.log( + `[notifyTranscriptionComplete] transcriptionResult keys: ${Object.keys(transcriptionResult || {}).join(', ')}` + ); + this.logger.log( + `[notifyTranscriptionComplete] Has text: ${!!transcriptionResult?.text} (length: ${transcriptionResult?.text?.length || 0})` + ); + this.logger.log( + `[notifyTranscriptionComplete] Has utterances: ${!!transcriptionResult?.utterances} (count: ${transcriptionResult?.utterances?.length || 0})` + ); + this.logger.log( + `[notifyTranscriptionComplete] Has speakers: ${!!transcriptionResult?.speakers}` + ); + this.logger.log( + `[notifyTranscriptionComplete] Has speakerMap: ${!!transcriptionResult?.speakerMap}` + ); + + // Use service role key for service-to-service authentication + const serviceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (!serviceKey) { + this.logger.error( + 'CRITICAL: MEMORO_SUPABASE_SERVICE_KEY is not configured for service-to-service communication' + ); + throw new Error('Missing required configuration: MEMORO_SUPABASE_SERVICE_KEY'); + } + + const response = await fetch(`${memoroServiceUrl}/memoro/service/transcription-completed`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${serviceKey}`, + }, + body: JSON.stringify({ + memoId, + userId, + transcriptionResult, + route, + success: true, + }), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Memoro service error: ${response.status} - ${errorText}`); + } + + this.logger.log(`Successfully notified memoro 
service of completion for memo ${memoId}`); + } catch (error) { + this.logger.error('Error notifying memoro service:', error); + throw error; + } + } + + /** + * Notify memoro service of transcription error with enhanced context + */ + private async notifyTranscriptionErrorWithContext( + memoId: string, + userId: string, + errorMessage: string, + route: 'fast' | 'batch', + fallbackStage: string, + token?: string + ) { + const memoroServiceUrl = this.configService.get('MEMORO_SERVICE_URL'); + + if (!memoroServiceUrl) { + this.logger.error('CRITICAL: MEMORO_SERVICE_URL is not configured'); + throw new Error('Missing required configuration: MEMORO_SERVICE_URL'); + } + + try { + const errorContext = { + memoId, + userId, + route, + fallbackStage, + error: errorMessage, + timestamp: new Date().toISOString(), + success: false, + }; + + // Use service role key for service-to-service authentication + const serviceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (!serviceKey) { + this.logger.error( + 'CRITICAL: MEMORO_SUPABASE_SERVICE_KEY is not configured for service-to-service communication' + ); + throw new Error('Missing required configuration: MEMORO_SUPABASE_SERVICE_KEY'); + } + + const response = await fetch(`${memoroServiceUrl}/memoro/service/transcription-completed`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${serviceKey}`, + }, + body: JSON.stringify(errorContext), + }); + + if (!response.ok) { + const errorText = await response.text(); + this.logger.error( + `Failed to notify error to memoro service: ${response.status} - ${errorText}` + ); + } + } catch (error) { + this.logger.error('Error notifying memoro service of error:', error); + } + } + + /** + * Notify memoro service of transcription error (legacy method for compatibility) + */ + // Method removed: notifyTranscriptionError - was unused (TSLint 6133) + + private async processAudio( + audioBuffer: Buffer, + userId: string, + spaceId?: 
string, + recordingLanguages?: string[], + enableDiarization?: boolean, + audioPath?: string + ) { + try { + // 1. Get audio duration + const duration = await this.getAudioDuration(audioBuffer); + const durationMinutes = duration / 60; + const shouldUseBatch = durationMinutes > this.batchThresholdMinutes; + + this.logger.log(`Audio: ${durationMinutes.toFixed(2)} minutes, batch: ${shouldUseBatch}`); + + const processedAudio = await this.convertAudioForAzure(audioBuffer, audioPath); + this.logger.log(`Converted audio: ${processedAudio.length} bytes`); + + // Upload to Azure Blob Storage + const blobUrl = await this.uploadToAzureBlob(processedAudio, userId); + this.logger.log(`Uploaded to Azure Blob: ${blobUrl}`); + + // Create Azure Batch Job + const jobId = await this.createBatchJob( + blobUrl, + userId, + recordingLanguages, + enableDiarization + ); + this.logger.log(`Created batch job: ${jobId}`); + + // Return immediate response + return { + status: 'processing', + type: 'batch', + jobId, + userId, + spaceId, + duration, + message: 'Batch transcription started. 
Webhook will notify when complete.', + }; + } catch (error) { + return { + status: 'failed', + message: error.message, + type: 'batch', + jobId: null, + userId, + spaceId, + }; + } + } + + private async getAudioDuration(audioBuffer: Buffer): Promise<number> { + return new Promise<number>((resolve, reject) => { + if (!audioBuffer || !(audioBuffer instanceof Buffer)) { + this.logger.error('Invalid audio buffer provided'); + return reject(new Error('Invalid audio buffer provided')); + } + + const tempFile = path.join(os.tmpdir(), `audio_${Date.now()}.tmp`); + + try { + fs.writeFileSync(tempFile, audioBuffer); + + ffmpeg.ffprobe(tempFile, (err, metadata) => { + // Cleanup + try { + fs.unlinkSync(tempFile); + } catch {} + + if (err) { + reject(err); + return; + } + + const duration = metadata?.format?.duration; + if (typeof duration === 'number') { + resolve(duration); + } else { + reject(new Error('Could not determine duration')); + } + }); + } catch (error) { + reject(error); + } + }); + } + + private async uploadToAzureBlob(audioBuffer: Buffer, userId: string): Promise<string> { + const { + BlobServiceClient, + StorageSharedKeyCredential, + generateBlobSASQueryParameters, + BlobSASPermissions, + } = await import('@azure/storage-blob'); + + const accountName = this.configService.get('AZURE_STORAGE_ACCOUNT_NAME'); + const accountKey = this.configService.get('AZURE_STORAGE_ACCOUNT_KEY'); + + if (!accountName || !accountKey) { + throw new Error('Azure Storage credentials not configured'); + } + + const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey); + const blobServiceClient = new BlobServiceClient( + `https://${accountName}.blob.core.windows.net`, + sharedKeyCredential + ); + + const containerName = 'batch-transcription'; + const blobName = `${userId}/${Date.now()}_audio.wav`; + + try { + const containerClient = blobServiceClient.getContainerClient(containerName); + + // Ensure container exists + await containerClient.createIfNotExists(); + + const
blockBlobClient = containerClient.getBlockBlobClient(blobName); + + await blockBlobClient.upload(audioBuffer, audioBuffer.length, { + blobHTTPHeaders: { blobContentType: 'audio/wav' }, + }); + + // Generate a read-only SAS token that expires in 6 hours + const sasOptions = { + containerName, + blobName, + permissions: BlobSASPermissions.parse('r'), // Read-only permission + startsOn: new Date(new Date().valueOf() - 5 * 60 * 1000), // Start 5 minutes ago to avoid clock skew issues + expiresOn: new Date(new Date().valueOf() + 6 * 60 * 60 * 1000), // Expires in 6 hours + }; + + // Generate the SAS token + const sasToken = generateBlobSASQueryParameters(sasOptions, sharedKeyCredential).toString(); + + // Construct the full URL with SAS token + const blobUrlWithSas = `${blockBlobClient.url}?${sasToken}`; + + this.logger.log( + `✅ Uploaded to Azure Blob with SAS token: ${blobUrlWithSas.substring(0, 100)}...` + ); + + return blobUrlWithSas; + } catch (error) { + this.logger.error('Azure Blob upload failed:', error); + throw error; + } + } + + private async createBatchJob( + blobUrl: string, + userId: string, + recordingLanguages?: string[], + enableDiarization?: boolean + ): Promise<string> { + const speechService = this.getRandomSpeechService(); + const accountName = this.configService.get('AZURE_STORAGE_ACCOUNT_NAME'); + const accountKey = this.configService.get('AZURE_STORAGE_ACCOUNT_KEY'); + + if (!accountName || !accountKey) { + throw new Error('Azure Storage credentials not configured'); + } + + // Create a SAS token for the results container + const { + StorageSharedKeyCredential, + generateBlobSASQueryParameters, + ContainerSASPermissions, + BlobServiceClient, + } = await import('@azure/storage-blob'); + const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey); + const resultsContainerName = 'results'; + + // Ensure the results container exists + const blobServiceClient = new BlobServiceClient( + `https://${accountName}.blob.core.windows.net`, +
sharedKeyCredential + ); + const containerClient = blobServiceClient.getContainerClient(resultsContainerName); + await containerClient.createIfNotExists(); + + // Generate SAS token for the results container + const sasToken = generateBlobSASQueryParameters( + { + containerName: resultsContainerName, + permissions: ContainerSASPermissions.parse('rcw'), // Read + Create + Write + startsOn: new Date(Date.now() - 5 * 60 * 1000), // Start 5 minutes ago to avoid clock skew issues + expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000), // Valid for 24 hours + }, + sharedKeyCredential + ).toString(); + + // Create the destination URL with SAS token + const destinationUrl = `https://${accountName}.blob.core.windows.net/${resultsContainerName}?${sasToken}`; + + this.logger.log(`Created destination container URL for results with SAS token`); + + // Define constants for speaker detection + const MAX_SPEAKERS = 10; + const DEFAULT_CANDIDATE_LOCALES = [ + 'en-US', + 'de-DE', + 'en-GB', + 'fr-FR', + 'it-IT', + 'es-ES', + 'sv-SE', + 'ru-RU', + 'nl-NL', + 'tr-TR', + 'pt-PT', + ]; + + // Build candidate locales list - ensure main locale is included and no duplicates + const mainLocale = recordingLanguages?.[0] || 'en-US'; + let candidateLocales = + recordingLanguages && recordingLanguages.length > 0 + ? 
Array.from(new Set([mainLocale, ...recordingLanguages, ...DEFAULT_CANDIDATE_LOCALES])) + : DEFAULT_CANDIDATE_LOCALES; + + // Azure requires: minimum 2, maximum 10 candidate locales + candidateLocales = candidateLocales.slice(0, 10); + if (candidateLocales.length < 2) { + // Ensure we have at least 2 locales by adding en-US/de-DE as fallbacks + candidateLocales = Array.from(new Set([...candidateLocales, 'en-US', 'de-DE'])).slice(0, 10); + } + + // Build the transcription config with optional diarization + const properties: Record<string, unknown> = { + wordLevelTimestampsEnabled: true, + punctuationMode: 'DictatedAndAutomatic', + profanityFilterMode: 'Masked', + destinationContainerUrl: destinationUrl, // This is REQUIRED for Azure to store results + // Add language identification - dynamically built candidate list + languageIdentification: { + candidateLocales: candidateLocales, + }, + }; + + // Conditionally add diarization based on user preference (default: enabled) + if (enableDiarization !== false) { + properties.diarizationEnabled = true; + properties.diarization = { + speakers: { + minCount: 1, + maxCount: MAX_SPEAKERS, + }, + }; + } + + const config: Record<string, unknown> = { + contentUrls: [blobUrl], + properties, + locale: mainLocale, + displayName: `Batch transcription for ${userId}`, + }; + + this.logger.log( + `Enhanced batch transcription config (${speechService.name}): languages=${recordingLanguages?.join(', ') || 'default'}, maxSpeakers=${MAX_SPEAKERS}` + ); + this.logger.log(`Starting batch transcription with config: ${JSON.stringify(config)}`); + try { + const batchEndpoint = speechService.endpoint.replace( + '/transcriptions:transcribe', + '/v3.1/transcriptions' + ); + const response = await fetch(batchEndpoint, { + method: 'POST', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + 'Content-Type': 'application/json', + }, + body: JSON.stringify(config), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Azure Batch API error:
${response.status} - ${errorText}`); + } + + const result = await response.json(); + const jobId = result.self.split('/').pop(); + + this.logger.log(`✅ Created batch job: ${jobId}`); + return jobId; + } catch (error) { + this.logger.error('Batch job creation failed:', error); + throw error; + } + } + + private async convertAudioForAzure(audioBuffer: Buffer, audioPath?: string): Promise<Buffer> { + const tempDir = os.tmpdir(); + + // Extract the actual file extension from audioPath + const fileExt = audioPath ? path.extname(audioPath) : '.m4a'; // fallback to .m4a + const inputFile = path.join(tempDir, `input_${Date.now()}${fileExt}`); + const outputFile = path.join(tempDir, `output_${Date.now()}.wav`); + + // Map common extensions to ffmpeg format names + const formatMap: Record<string, string> = { + '.m4a': 'mp4', + '.mp4': 'mp4', + '.mp3': 'mp3', + '.wav': 'wav', + '.aac': 'aac', + '.ogg': 'ogg', + '.webm': 'webm', + '.flac': 'flac', + }; + + const inputFormat = formatMap[fileExt.toLowerCase()]; + + try { + // Write buffer to file + fs.writeFileSync(inputFile, audioBuffer); + + // Verify the written file size matches the buffer + const stats = fs.statSync(inputFile); + if (stats.size !== audioBuffer.length) { + this.logger.error(`Buffer size: ${audioBuffer.length}, Written file size: ${stats.size}`); + throw new Error( + `File write verification failed: expected ${audioBuffer.length} bytes, got ${stats.size} bytes` + ); + } + this.logger.log(`File written and verified: ${stats.size} bytes at ${inputFile}`); + + // Use ffprobe to validate the file before conversion + const probeResult = await this.probeAudioFile(inputFile); + if (!probeResult.valid) { + // Check if it's the known 'chnl' box issue + const isChnlIssue = + probeResult.error?.includes('Unsupported') && probeResult.error?.includes('chnl'); + + if (isChnlIssue) { + this.logger.warn(`FFprobe warning: ${probeResult.error}`); + this.logger.warn( + 'Detected unsupported chnl box (iOS spatial audio metadata) - will attempt
conversion with error tolerance' + ); + // Don't throw - ffmpeg with -err_detect ignore_err can handle this + } else { + this.logger.error(`FFprobe error details: ${probeResult.error}`); + this.logger.error(`File path: ${inputFile}`); + this.logger.error(`File exists: ${fs.existsSync(inputFile)}`); + this.logger.error(`File size: ${fs.statSync(inputFile).size}`); + + // Log first 100 bytes as hex for debugging + const fileBuffer = fs.readFileSync(inputFile); + this.logger.error(`First 100 bytes (hex): ${fileBuffer.slice(0, 100).toString('hex')}`); + + throw new Error(`Audio file validation failed: ${probeResult.error}`); + } + } else { + this.logger.log( + `Audio file validated: format=${probeResult.format}, duration=${probeResult.duration}s, codec=${probeResult.codec}` + ); + } + + // IMPORTANT: Use the actual detected format from ffprobe, not the file extension + // This fixes issues where file extension doesn't match actual content (e.g., MP3 saved as .m4a) + let actualInputFormat = inputFormat; + if (probeResult.valid && probeResult.format) { + // ffprobe returns format names like "mp3", "mov,mp4,m4a,3gp,3g2,mj2", etc. + // Extract the primary format and map it to ffmpeg input format + const probedFormat = probeResult.format.split(',')[0].trim(); + const probeFormatMap: Record<string, string> = { + mp3: 'mp3', + mov: 'mp4', + mp4: 'mp4', + m4a: 'mp4', + wav: 'wav', + aac: 'aac', + ogg: 'ogg', + webm: 'webm', + flac: 'flac', + matroska: 'matroska', + }; + + if (probeFormatMap[probedFormat]) { + actualInputFormat = probeFormatMap[probedFormat]; + if (actualInputFormat !== inputFormat) { + this.logger.warn( + `Format mismatch detected: extension suggests ${inputFormat}, but content is ${actualInputFormat}.
Using detected format.` + ); + } + } + } + + // Wrap ffmpeg conversion in a Promise + return new Promise<Buffer>((resolve, reject) => { + const command = ffmpeg(inputFile) + .audioCodec('pcm_s16le') // PCM 16-bit little-endian + .audioFrequency(16000) // 16kHz sample rate + .audioChannels(1) // Mono + .format('wav') // WAV format + .inputOptions([ + '-err_detect', + 'ignore_err', // Ignore unsupported metadata boxes (e.g., chnl v1) + '-fflags', + '+genpts', // Generate presentation timestamps + ]) + .outputOptions(['-y']); // Force overwrite existing files + + // Use the actual detected format (from ffprobe) instead of file extension + if (actualInputFormat) { + command.inputFormat(actualInputFormat); + this.logger.log( + `Using input format: ${actualInputFormat} for file: ${fileExt} (detected: ${probeResult.format})` + ); + } else { + this.logger.warn(`Unknown format ${fileExt}, letting ffmpeg auto-detect`); + } + + command + .on('end', () => { + try { + const converted = fs.readFileSync(outputFile); + fs.unlinkSync(inputFile); + fs.unlinkSync(outputFile); + this.logger.log(`✅ Audio converted from ${fileExt} to WAV for Azure compatibility`); + resolve(converted); + } catch (error) { + reject(error); + } + }) + .on('error', (err) => { + this.logger.error(`❌ FFmpeg conversion error for ${fileExt}: ${err.message}`); + try { + if (fs.existsSync(inputFile)) fs.unlinkSync(inputFile); + if (fs.existsSync(outputFile)) fs.unlinkSync(outputFile); + } catch {} + reject(err); + }) + .save(outputFile); + }); + } catch (error) { + // Clean up on any error before ffmpeg + try { + if (fs.existsSync(inputFile)) fs.unlinkSync(inputFile); + if (fs.existsSync(outputFile)) fs.unlinkSync(outputFile); + } catch {} + throw error; + } + } + + async checkBatchTranscriptionStatus(jobId: string) { + const speechService = this.getRandomSpeechService(); + + try { + // Get transcription status + const batchEndpoint = speechService.endpoint.replace( + '/transcriptions:transcribe', +
'/v3.1/transcriptions' + ); + const response = await fetch(`${batchEndpoint}/${jobId}`, { + method: 'GET', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + 'Content-Type': 'application/json', + }, + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Azure Batch API error: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + this.logger.log('Batch transcription full details:', JSON.stringify(result, null, 2)); + + // Get detailed error information if the job failed + let errorDetails = null; + if (result.status === 'Failed') { + errorDetails = await this.getTranscriptionError(jobId, speechService); + } + + // Format the response + return { + jobId, + status: result.status, + createdDateTime: result.createdDateTime, + lastActionDateTime: result.lastActionDateTime, + statusMessage: result.statusMessage || 'No status message provided', + percentCompleted: result.percentCompleted, + properties: result.properties, + errorDetails, + results: null, + rawResponse: result, + }; + } catch (error) { + this.logger.error('Error checking batch status:', error); + throw error; + } + } + + private async getTranscriptionError(jobId: string, speechService: any) { + try { + // Get transcription details + const batchEndpoint = speechService.endpoint.replace( + '/transcriptions:transcribe', + '/v3.1/transcriptions' + ); + const response = await fetch(`${batchEndpoint}/${jobId}`, { + method: 'GET', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + 'Content-Type': 'application/json', + }, + }); + + if (!response.ok) { + return 'Could not retrieve error details'; + } + + const result = await response.json(); + + // Check for error information in different places + if (result.properties?.error) { + return result.properties.error; + } + + if (result.properties?.message) { + return result.properties.message; + } + + if (result.statusMessage) { + return result.statusMessage; + } + + // Try to 
get error from the transcription files + if (result.links?.files) { + const filesResponse = await fetch(result.links.files, { + method: 'GET', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + }, + }); + + if (filesResponse.ok) { + const filesResult = await filesResponse.json(); + + // Look for error files + const errorFile = filesResult.values.find( + (file: any) => file.kind === 'TranscriptionError' + ); + + if (errorFile && errorFile.links?.contentUrl) { + const errorContentResponse = await fetch(errorFile.links.contentUrl, { + method: 'GET', + headers: { + 'Ocp-Apim-Subscription-Key': speechService.key, + }, + }); + + if (errorContentResponse.ok) { + return await errorContentResponse.json(); + } + } + } + } + + return 'No specific error details available'; + } catch (error) { + this.logger.error('Error getting transcription error details:', error); + return 'Error retrieving error details'; + } + } + + async processAudioFromStorage( + audioPath: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + memoId?: string, + enableDiarization?: boolean + ) { + try { + this.logger.log(`Downloading audio from storage for batch processing: ${audioPath}`); + + // Download file from Supabase Storage (using service key for batch operations) + const audioBuffer = await this.downloadFromStorage(audioPath); + + this.logger.log(`Downloaded audio: ${audioBuffer.length} bytes`); + + // Use existing processAudio method + const result = await this.processAudio( + audioBuffer, + userId, + spaceId, + recordingLanguages, + enableDiarization, + audioPath + ); + + // Store jobId in memo metadata for recovery tracking + if (result.jobId && result.status === 'processing' && token && memoId) { + try { + await this.storeBatchJobMetadata(memoId, result.jobId, token, userId); + this.logger.log(`Stored batch job metadata for memo ${memoId}, jobId: ${result.jobId}`); + } catch (metadataError) { + this.logger.warn('Failed to store batch 
job metadata (non-critical):', metadataError); + // Don't fail the entire process if metadata storage fails + } + } + + // Enhanced: Return jobId and other metadata for tracking + if (result.jobId) { + (result as any).memoId = memoId; + } + + return result; + } catch (error) { + this.logger.error('Error in processAudioFromStorage:', error); + throw new Error(`Storage processing failed: ${error.message}`); + } + } + + /** + * Store batch job metadata in memo for recovery tracking + */ + private async storeBatchJobMetadata( + memoId: string, + jobId: string, + token: string, + userId?: string + ): Promise<void> { + const memoroServiceUrl = this.configService.get('MEMORO_SERVICE_URL'); + + if (!memoroServiceUrl) { + this.logger.error('CRITICAL: MEMORO_SERVICE_URL is not configured'); + throw new Error('Missing required configuration: MEMORO_SERVICE_URL'); + } + + try { + // Use service role key for service-to-service authentication + const serviceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (!serviceKey) { + this.logger.error( + 'CRITICAL: MEMORO_SUPABASE_SERVICE_KEY is not configured for service-to-service communication' + ); + throw new Error('Missing required configuration: MEMORO_SUPABASE_SERVICE_KEY'); + } + + const response = await fetch(`${memoroServiceUrl}/memoro/service/update-batch-metadata`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${serviceKey}`, + }, + body: JSON.stringify({ + memoId, + jobId, + batchTranscription: true, + userId, // Pass userId for ownership validation + }), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Memoro service error: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + this.logger.log(`Successfully stored batch metadata: ${JSON.stringify(result)}`); + } catch (error) { + this.logger.error('Error storing batch job metadata:', error); + throw error; + } + } + + private async
downloadFromStorage(audioPath: string, token?: string): Promise<Buffer> { + try { + const { createClient } = await import('@supabase/supabase-js'); + const supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL'); + const supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + const supabaseAnonKey = this.configService.get('MEMORO_SUPABASE_ANON_KEY'); + + if (!supabaseUrl) { + this.logger.error('CRITICAL: MEMORO_SUPABASE_URL is not configured'); + throw new Error('Missing required configuration: MEMORO_SUPABASE_URL'); + } + + this.logger.log(`Supabase URL: ${supabaseUrl}`); + this.logger.log(`Has service key: ${!!supabaseServiceKey}`); + this.logger.log(`Has anon key: ${!!supabaseAnonKey}`); + this.logger.log(`Has user token: ${!!token}`); + + if (!supabaseAnonKey && !supabaseServiceKey) { + this.logger.error( + 'CRITICAL: Neither MEMORO_SUPABASE_ANON_KEY nor MEMORO_SUPABASE_SERVICE_KEY is configured' + ); + throw new Error( + 'Missing required configuration: MEMORO_SUPABASE_ANON_KEY or MEMORO_SUPABASE_SERVICE_KEY' + ); + } + + // Try to use service key first, otherwise use user token + const supabase = supabaseServiceKey + ? createClient(supabaseUrl, supabaseServiceKey) + : createClient(supabaseUrl, supabaseAnonKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + this.logger.log( + `Using ${supabaseServiceKey ?
'service key' : 'user token'} for storage download` + ); + + // First, try to list the bucket to see if it exists and has files + try { + this.logger.log('Testing bucket access...'); + const { data: bucketList, error: bucketListError } = await supabase.storage.listBuckets(); + this.logger.log( + 'Available buckets:', + JSON.stringify(bucketList?.map((b) => b.name) || [], null, 2) + ); + + if (bucketListError) { + this.logger.error('Bucket list error:', JSON.stringify(bucketListError, null, 2)); + } + + // Try to list files in the user directory + const userDir = audioPath.split('/')[0]; + this.logger.log(`Attempting to list files in user directory: ${userDir}`); + const { data: fileList, error: fileListError } = await supabase.storage + .from('user-uploads') + .list(userDir, { limit: 10 }); + + if (fileListError) { + this.logger.error('File list error:', JSON.stringify(fileListError, null, 2)); + } else { + this.logger.log( + `Files in ${userDir}:`, + JSON.stringify(fileList?.map((f) => f.name) || [], null, 2) + ); + } + } catch (debugError) { + this.logger.error('Debug error:', debugError); + } + + const { data: fileData, error: downloadError } = await supabase.storage + .from('user-uploads') + .download(audioPath); + + if (downloadError) { + this.logger.error( + 'Supabase storage download error:', + JSON.stringify(downloadError, null, 2) + ); + this.logger.error('Attempting to download audioPath:', audioPath); + throw new Error( + `Failed to download file from storage: ${downloadError.message || JSON.stringify(downloadError)}` + ); + } + + if (!fileData) { + throw new Error('No file data returned from Supabase storage'); + } + + // Convert blob to buffer + const arrayBuffer = await fileData.arrayBuffer(); + const buffer = Buffer.from(arrayBuffer); + + // Validate buffer size + if (buffer.length < 1000) { + throw new Error(`Downloaded file is too small: ${buffer.length} bytes`); + } + + // Validate audio file header + const isValidAudio = 
this.validateAudioHeader(buffer); + if (!isValidAudio) { + this.logger.error('Invalid audio file header detected'); + this.logger.error(`First 20 bytes: ${buffer.slice(0, 20).toString('hex')}`); + throw new Error('Downloaded file does not appear to be a valid audio file'); + } + + this.logger.log(`Successfully downloaded and validated file: ${buffer.length} bytes`); + return buffer; + } catch (error) { + this.logger.error('Error downloading from storage:', error); + throw error; + } + } + + /** + * Probe audio file using ffprobe to get detailed metadata and validate + */ + private async probeAudioFile(filePath: string): Promise<{ + valid: boolean; + format?: string; + duration?: number; + codec?: string; + error?: string; + }> { + return new Promise((resolve) => { + ffmpeg.ffprobe(filePath, (err, metadata) => { + if (err) { + this.logger.error(`FFprobe validation failed for ${filePath}: ${err.message}`); + resolve({ + valid: false, + error: err.message, + }); + } else { + const format = metadata.format?.format_name || 'unknown'; + const duration = metadata.format?.duration || 0; + const codec = metadata.streams?.[0]?.codec_name || 'unknown'; + + resolve({ + valid: true, + format, + duration, + codec, + }); + } + }); + }); + } + + /** + * Validate audio file header to ensure it's a valid audio file + * Supports: M4A, MP4, MP3, WAV, OGG, WEBM, FLAC, AAC + */ + private validateAudioHeader(buffer: Buffer): boolean { + if (buffer.length < 12) return false; + + // M4A/MP4: Check for 'ftyp' box at offset 4 + const ftypCheck = buffer.slice(4, 8).toString('utf-8'); + if (ftypCheck === 'ftyp') return true; + + // M4A/MP4: Sometimes 'mdat' appears first + const mdatCheck = buffer.slice(0, 4).toString('utf-8'); + if (mdatCheck === 'mdat') return true; + + // M4A/MP4: Check for 'wide' atom + if (ftypCheck === 'wide') return true; + + // MP3: Check for ID3 tag or MPEG frame sync + const id3Check = buffer.slice(0, 3).toString('utf-8'); + if (id3Check === 'ID3') return true; + + 
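+ // Background note (added for clarity): an MPEG audio frame begins with an + // 11-bit sync word of all 1s, so the first byte is 0xFF and the top three + // bits of the second byte are set (a typical MPEG-1 Layer III frame starts + // 0xFF 0xFB). The (buffer[1] & 0xe0) === 0xe0 mask below tests exactly that.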
// MP3: Check for MPEG frame sync (0xFFE or 0xFFF at start) + if (buffer[0] === 0xff && (buffer[1] & 0xe0) === 0xe0) return true; + + // WAV: Check for RIFF header + const riffCheck = buffer.slice(0, 4).toString('utf-8'); + const waveCheck = buffer.slice(8, 12).toString('utf-8'); + if (riffCheck === 'RIFF' && waveCheck === 'WAVE') return true; + + // OGG: Check for OggS header + const oggCheck = buffer.slice(0, 4).toString('utf-8'); + if (oggCheck === 'OggS') return true; + + // WEBM: Check for EBML header (0x1A 0x45 0xDF 0xA3) + if (buffer[0] === 0x1a && buffer[1] === 0x45 && buffer[2] === 0xdf && buffer[3] === 0xa3) + return true; + + // FLAC: Check for fLaC header + const flacCheck = buffer.slice(0, 4).toString('utf-8'); + if (flacCheck === 'fLaC') return true; + + // AAC: Check for ADTS header (0xFF 0xF1 or 0xFF 0xF9) + if (buffer[0] === 0xff && (buffer[1] === 0xf1 || buffer[1] === 0xf9)) return true; + + this.logger.warn('Unknown audio format detected'); + return false; + } + + /** + * Detect if a file is a video based on its header + * Supports: MP4, MOV, AVI, MKV, WEBM, FLV, WMV + */ + private isVideoFile(buffer: Buffer, filePath?: string): boolean { + if (buffer.length < 12) return false; + + // Check file extension first + if (filePath) { + const ext = path.extname(filePath).toLowerCase(); + const videoExtensions = [ + '.mp4', + '.mov', + '.m4v', + '.avi', + '.mkv', + '.webm', + '.flv', + '.wmv', + '.mpeg', + '.mpg', + ]; + if (videoExtensions.includes(ext)) { + this.logger.log(`Detected video file by extension: ${ext}`); + return true; + } + } + + // MP4/MOV: Check for 'ftyp' box + const ftypCheck = buffer.slice(4, 8).toString('utf-8'); + if (ftypCheck === 'ftyp') { + // Check for video-specific brand types + const brandCheck = buffer.slice(8, 12).toString('utf-8'); + const videoBrands = ['mp41', 'mp42', 'isom', 'qt ', 'm4v ']; + if (videoBrands.some((brand) => brandCheck.startsWith(brand))) { + return true; + } + } + + // AVI: Check for RIFF + AVI header 
+    const riffCheck = buffer.slice(0, 4).toString('utf-8');
+    const aviCheck = buffer.slice(8, 12).toString('utf-8');
+    if (riffCheck === 'RIFF' && aviCheck === 'AVI ') return true;
+
+    // MKV/WEBM: Check for EBML header
+    if (buffer[0] === 0x1a && buffer[1] === 0x45 && buffer[2] === 0xdf && buffer[3] === 0xa3) {
+      // Further check for Matroska signature
+      if (buffer.length >= 20) {
+        const docType = buffer.slice(16, 20).toString('utf-8');
+        if (docType === 'webm' || docType.startsWith('matroska')) return true;
+      }
+      return true; // Likely video EBML file
+    }
+
+    // FLV: Check for FLV header (0x46 0x4C 0x56)
+    if (buffer[0] === 0x46 && buffer[1] === 0x4c && buffer[2] === 0x56) return true;
+
+    return false;
+  }
+
+  /**
+   * Extract audio from video file using FFmpeg
+   * Converts video to high-quality audio suitable for transcription
+   * @param videoBuffer - The video file buffer
+   * @param videoPath - Optional path hint for format detection
+   * @returns Extracted audio as Buffer in WAV format
+   */
+  async extractAudioFromVideo(videoBuffer: Buffer, videoPath?: string): Promise<Buffer> {
+    this.logger.log('[extractAudioFromVideo] Starting video-to-audio extraction');
+
+    const tempDir = os.tmpdir();
+    const fileExt = videoPath ?
path.extname(videoPath) : '.mp4';
+    const inputFile = path.join(tempDir, `video_input_${Date.now()}${fileExt}`);
+    const outputFile = path.join(tempDir, `audio_output_${Date.now()}.wav`);
+
+    const cleanup = async () => {
+      try {
+        await Promise.all([
+          fs.promises.unlink(inputFile).catch(() => {}),
+          fs.promises.unlink(outputFile).catch(() => {}),
+        ]);
+      } catch (error) {
+        this.logger.warn('Cleanup warning:', error);
+      }
+    };
+
+    try {
+      // Write video buffer to temporary file
+      await fs.promises.writeFile(inputFile, videoBuffer);
+      this.logger.log(`[extractAudioFromVideo] Video file written: ${videoBuffer.length} bytes`);
+
+      // Extract audio using FFmpeg
+      return new Promise<Buffer>((resolve, reject) => {
+        ffmpeg(inputFile)
+          .noVideo() // Remove video stream
+          .audioCodec('pcm_s16le') // PCM 16-bit for best quality
+          .audioFrequency(16000) // 16kHz sample rate (Azure optimal)
+          .audioChannels(1) // Mono for speech recognition
+          .format('wav') // WAV format
+          .audioFilters([
+            'highpass=f=80', // Remove very low frequencies
+            'lowpass=f=8000', // Remove frequencies above 8kHz
+            'volume=1.5', // Slight volume boost for better transcription
+            'afftdn=nf=-20', // Noise reduction
+          ])
+          .outputOptions([
+            '-y', // Overwrite output file
+            '-loglevel',
+            'warning', // Reduce FFmpeg output verbosity
+          ])
+          .on('start', (commandLine) => {
+            this.logger.log(`[extractAudioFromVideo] FFmpeg command: ${commandLine}`);
+          })
+          .on('progress', (progress) => {
+            if (progress.percent) {
+              this.logger.log(`[extractAudioFromVideo] Progress: ${Math.round(progress.percent)}%`);
+            }
+          })
+          .on('end', async () => {
+            try {
+              const audioBuffer = await fs.promises.readFile(outputFile);
+              await cleanup();
+              this.logger.log(
+                `[extractAudioFromVideo] Successfully extracted audio: ${audioBuffer.length} bytes`
+              );
+              resolve(audioBuffer);
+            } catch (error) {
+              await cleanup();
+              reject(new Error(`Failed to read extracted audio: ${error.message}`));
+            }
+          })
+          .on('error', async (err) => {
+
await cleanup(); + this.logger.error(`[extractAudioFromVideo] FFmpeg error: ${err.message}`); + reject(new Error(`Video-to-audio extraction failed: ${err.message}`)); + }) + .save(outputFile); + }); + } catch (error) { + await cleanup(); + this.logger.error('[extractAudioFromVideo] Extraction error:', error); + throw error; + } + } + + /** + * Process video file: extract audio then transcribe + * This is the main entry point for video file processing + */ + async processVideoFile( + videoPath: string, + memoId: string, + userId: string, + spaceId?: string, + recordingLanguages?: string[], + token?: string, + enableDiarization?: boolean, + isAppend?: boolean, + recordingIndex?: number + ) { + try { + this.logger.log(`[processVideoFile] Processing video file: ${videoPath}`); + + // Download video from storage + const videoBuffer = await this.downloadFromStorage(videoPath, token); + this.logger.log(`[processVideoFile] Downloaded video: ${videoBuffer.length} bytes`); + + // Verify it's actually a video file + if (!this.isVideoFile(videoBuffer, videoPath)) { + throw new Error('File does not appear to be a valid video file'); + } + + // Extract audio from video + const audioBuffer = await this.extractAudioFromVideo(videoBuffer, videoPath); + this.logger.log(`[processVideoFile] Audio extracted: ${audioBuffer.length} bytes`); + + // Get audio duration for routing decision + const duration = await this.getAudioDuration(audioBuffer); + const durationMinutes = duration / 60; + this.logger.log(`[processVideoFile] Audio duration: ${durationMinutes.toFixed(2)} minutes`); + + // Route to fast or batch transcription based on duration + if (durationMinutes < this.batchThresholdMinutes) { + this.logger.log('[processVideoFile] Using fast transcription route'); + + // Convert to Azure-compatible format + const convertedAudio = await this.convertAudioForAzure(audioBuffer, 'extracted_audio.wav'); + + // Perform real-time transcription + const transcriptionResult = await 
this.performRealtimeTranscription( + convertedAudio, + recordingLanguages, + enableDiarization + ); + + // Send appropriate callback + if (isAppend) { + await this.notifyAppendTranscriptionComplete( + memoId, + userId, + transcriptionResult, + 'fast', + token, + recordingIndex + ); + } else { + await this.notifyTranscriptionComplete( + memoId, + userId, + transcriptionResult, + 'fast', + token + ); + } + + return { + success: true, + route: 'fast', + source: 'video', + memoId, + message: 'Video processed and transcribed successfully via fast route', + }; + } else { + this.logger.log('[processVideoFile] Using batch transcription route'); + + // Process through batch pipeline + const processedAudio = await this.convertAudioForAzure(audioBuffer, 'extracted_audio.wav'); + const blobUrl = await this.uploadToAzureBlob(processedAudio, userId); + const jobId = await this.createBatchJob( + blobUrl, + userId, + recordingLanguages, + enableDiarization + ); + + // Store batch metadata + if (token && memoId) { + await this.storeBatchJobMetadata(memoId, jobId, token, userId); + } + + return { + success: true, + route: 'batch', + source: 'video', + jobId, + memoId, + userId, + duration, + message: 'Video processed - batch transcription started', + }; + } + } catch (error) { + this.logger.error('[processVideoFile] Error processing video:', error); + throw new Error(`Video processing failed: ${error.message}`); + } + } +} diff --git a/apps/memoro/apps/audio-backend/src/dto/index.ts b/apps/memoro/apps/audio-backend/src/dto/index.ts new file mode 100644 index 000000000..4529ff196 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/dto/index.ts @@ -0,0 +1,4 @@ +export * from './transcribe-realtime.dto'; +export * from './transcribe-from-storage.dto'; +export * from './process-video.dto'; +export * from './transcription-response.dto'; diff --git a/apps/memoro/apps/audio-backend/src/dto/process-video.dto.ts b/apps/memoro/apps/audio-backend/src/dto/process-video.dto.ts new file mode 
100644 index 000000000..7f630d296 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/dto/process-video.dto.ts @@ -0,0 +1,71 @@ +import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean, IsNumber } from 'class-validator'; +import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger'; + +export class ProcessVideoDto { + @ApiProperty({ + description: 'Path to the video file in cloud storage (gs:// or supabase path)', + example: 'gs://bucket-name/videos/recording.mp4', + }) + @IsString() + @IsNotEmpty() + videoPath: string; + + @ApiProperty({ + description: 'Unique identifier for the memo', + example: '123e4567-e89b-12d3-a456-426614174000', + }) + @IsString() + @IsNotEmpty() + memoId: string; + + @ApiProperty({ + description: 'User ID who owns this transcription', + example: 'user-123', + }) + @IsString() + @IsNotEmpty() + userId: string; + + @ApiPropertyOptional({ + description: 'Space/workspace ID for organization', + example: 'space-456', + }) + @IsString() + @IsOptional() + spaceId?: string; + + @ApiPropertyOptional({ + description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])', + example: ['de-DE', 'en-US'], + type: [String], + }) + @IsArray() + @IsOptional() + recordingLanguages?: string[]; + + @ApiPropertyOptional({ + description: 'Enable speaker diarization (speaker separation)', + example: true, + default: false, + }) + @IsBoolean() + @IsOptional() + enableDiarization?: boolean; + + @ApiPropertyOptional({ + description: 'Append to existing transcription instead of replacing', + example: false, + default: false, + }) + @IsBoolean() + @IsOptional() + isAppend?: boolean; + + @ApiPropertyOptional({ + description: 'Index of the recording in a multi-recording session', + example: 0, + }) + @IsNumber() + @IsOptional() + recordingIndex?: number; +} diff --git a/apps/memoro/apps/audio-backend/src/dto/transcribe-from-storage.dto.ts b/apps/memoro/apps/audio-backend/src/dto/transcribe-from-storage.dto.ts new file mode 100644 index 
000000000..1dd3f8dc5 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/dto/transcribe-from-storage.dto.ts @@ -0,0 +1,54 @@ +import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean } from 'class-validator'; +import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger'; + +export class TranscribeFromStorageDto { + @ApiProperty({ + description: 'Path to the audio file in cloud storage', + example: 'gs://bucket-name/audio/recording.mp3', + }) + @IsString() + @IsNotEmpty() + audioPath: string; + + @ApiProperty({ + description: 'User ID who owns this transcription', + example: 'user-123', + }) + @IsString() + @IsNotEmpty() + userId: string; + + @ApiPropertyOptional({ + description: 'Space/workspace ID for organization', + example: 'space-456', + }) + @IsString() + @IsOptional() + spaceId?: string; + + @ApiPropertyOptional({ + description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])', + example: ['de-DE', 'en-US'], + type: [String], + }) + @IsArray() + @IsOptional() + recordingLanguages?: string[]; + + @ApiPropertyOptional({ + description: 'Unique identifier for the memo (optional for this endpoint)', + example: '123e4567-e89b-12d3-a456-426614174000', + }) + @IsString() + @IsOptional() + memoId?: string; + + @ApiPropertyOptional({ + description: 'Enable speaker diarization (speaker separation)', + example: true, + default: false, + }) + @IsBoolean() + @IsOptional() + enableDiarization?: boolean; +} diff --git a/apps/memoro/apps/audio-backend/src/dto/transcribe-realtime.dto.ts b/apps/memoro/apps/audio-backend/src/dto/transcribe-realtime.dto.ts new file mode 100644 index 000000000..1a457b1a2 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/dto/transcribe-realtime.dto.ts @@ -0,0 +1,71 @@ +import { IsString, IsNotEmpty, IsOptional, IsArray, IsBoolean, IsNumber } from 'class-validator'; +import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger'; + +export class TranscribeRealtimeDto { + @ApiProperty({ + description: 
'Path to the audio file in cloud storage (gs:// or supabase path)', + example: 'gs://bucket-name/audio/recording.mp3', + }) + @IsString() + @IsNotEmpty() + audioPath: string; + + @ApiProperty({ + description: 'Unique identifier for the memo', + example: '123e4567-e89b-12d3-a456-426614174000', + }) + @IsString() + @IsNotEmpty() + memoId: string; + + @ApiProperty({ + description: 'User ID who owns this transcription', + example: 'user-123', + }) + @IsString() + @IsNotEmpty() + userId: string; + + @ApiPropertyOptional({ + description: 'Space/workspace ID for organization', + example: 'space-456', + }) + @IsString() + @IsOptional() + spaceId?: string; + + @ApiPropertyOptional({ + description: 'Array of language codes for transcription (e.g., ["de-DE", "en-US"])', + example: ['de-DE', 'en-US'], + type: [String], + }) + @IsArray() + @IsOptional() + recordingLanguages?: string[]; + + @ApiPropertyOptional({ + description: 'Enable speaker diarization (speaker separation)', + example: true, + default: false, + }) + @IsBoolean() + @IsOptional() + enableDiarization?: boolean; + + @ApiPropertyOptional({ + description: 'Append to existing transcription instead of replacing', + example: false, + default: false, + }) + @IsBoolean() + @IsOptional() + isAppend?: boolean; + + @ApiPropertyOptional({ + description: 'Index of the recording in a multi-recording session', + example: 0, + }) + @IsNumber() + @IsOptional() + recordingIndex?: number; +} diff --git a/apps/memoro/apps/audio-backend/src/dto/transcription-response.dto.ts b/apps/memoro/apps/audio-backend/src/dto/transcription-response.dto.ts new file mode 100644 index 000000000..5322cb7d5 --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/dto/transcription-response.dto.ts @@ -0,0 +1,105 @@ +import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger'; + +export class TranscriptionSegment { + @ApiProperty({ + description: 'Text content of the segment', + example: 'Hello, this is a test recording.', + }) + text: string; + 
+ @ApiPropertyOptional({ + description: 'Start time of the segment in seconds', + example: 0.5, + }) + start?: number; + + @ApiPropertyOptional({ + description: 'End time of the segment in seconds', + example: 3.2, + }) + end?: number; + + @ApiPropertyOptional({ + description: 'Speaker identifier (when diarization is enabled)', + example: 'Speaker 1', + }) + speaker?: string; + + @ApiPropertyOptional({ + description: 'Confidence score of the transcription', + example: 0.95, + }) + confidence?: number; +} + +export class TranscriptionResponseDto { + @ApiProperty({ + description: 'Full transcription text', + example: 'Hello, this is a test recording. How are you today?', + }) + text: string; + + @ApiPropertyOptional({ + description: 'Individual transcription segments with timing', + type: [TranscriptionSegment], + }) + segments?: TranscriptionSegment[]; + + @ApiPropertyOptional({ + description: 'Detected language of the audio', + example: 'de-DE', + }) + language?: string; + + @ApiPropertyOptional({ + description: 'Duration of the audio in seconds', + example: 125.5, + }) + duration?: number; + + @ApiProperty({ + description: 'Status of the transcription', + example: 'success', + enum: ['success', 'processing', 'failed'], + }) + status: string; + + @ApiPropertyOptional({ + description: 'Job ID for batch transcriptions (for long audio files)', + example: 'batch-job-12345', + }) + jobId?: string; + + @ApiPropertyOptional({ + description: 'Error message if transcription failed', + example: 'Audio file not found', + }) + error?: string; +} + +export class BatchStatusResponseDto { + @ApiProperty({ + description: 'Current status of the batch job', + example: 'Succeeded', + enum: ['NotStarted', 'Running', 'Succeeded', 'Failed'], + }) + status: string; + + @ApiPropertyOptional({ + description: 'Transcription result (available when status is Succeeded)', + type: TranscriptionResponseDto, + }) + transcription?: TranscriptionResponseDto; + + @ApiPropertyOptional({ + 
description: 'Error details if the job failed', + example: 'Transcription service timeout', + }) + error?: string; + + @ApiPropertyOptional({ + description: 'Progress percentage (0-100)', + example: 75, + }) + progress?: number; +} diff --git a/apps/memoro/apps/audio-backend/src/main.ts b/apps/memoro/apps/audio-backend/src/main.ts new file mode 100644 index 000000000..89674626f --- /dev/null +++ b/apps/memoro/apps/audio-backend/src/main.ts @@ -0,0 +1,98 @@ +import { NestFactory } from '@nestjs/core'; +import { AppModule } from './app.module'; +import { json, urlencoded } from 'express'; +import { Logger, ValidationPipe } from '@nestjs/common'; +import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger'; +import helmet from 'helmet'; + +async function bootstrap() { + const app = await NestFactory.create(AppModule); + const logger = new Logger('Bootstrap'); + + // Add security headers with Helmet + app.use( + helmet({ + contentSecurityPolicy: { + directives: { + defaultSrc: ["'self'"], + styleSrc: ["'self'", "'unsafe-inline'"], // For Swagger UI + scriptSrc: ["'self'", "'unsafe-inline'"], // For Swagger UI + imgSrc: ["'self'", 'data:', 'https:'], // For Swagger UI + }, + }, + crossOriginEmbedderPolicy: false, // Disable for Swagger UI compatibility + }) + ); + + // Add request size logging middleware + app.use((req, res, next) => { + const contentLength = req.headers['content-length']; + if (contentLength && parseInt(contentLength) > 100000) { + // Log requests > 100KB + logger.log(`Large request detected: ${contentLength} bytes to ${req.url}`); + } + next(); + }); + + // Configure body parser limits for large JSON payloads + app.use( + json({ + limit: '50mb', + verify: (req, res, buf, encoding) => { + if (buf.length > 50 * 1024 * 1024) { + logger.error(`JSON payload too large: ${buf.length} bytes`); + throw new Error('Payload too large'); + } + }, + }) + ); + app.use(urlencoded({ extended: true, limit: '50mb' })); + + // Enable CORS + app.enableCors({ + 
origin: process.env.ALLOWED_ORIGINS?.split(',') || '*', + methods: ['GET', 'POST'], + credentials: true, + }); + + // Enable global validation pipe + app.useGlobalPipes( + new ValidationPipe({ + whitelist: true, // Strip properties that don't have decorators + forbidNonWhitelisted: true, // Throw error if non-whitelisted properties are present + transform: true, // Automatically transform payloads to DTO instances + transformOptions: { + enableImplicitConversion: true, // Allow automatic type conversion + }, + }) + ); + + // Swagger API Documentation + const config = new DocumentBuilder() + .setTitle('Audio Transcription API') + .setDescription( + 'Professional API for audio and video transcription with Azure Speech Services. Supports real-time and batch processing, speaker diarization, and multi-language detection.' + ) + .setVersion('1.0') + .addBearerAuth({ + type: 'http', + scheme: 'bearer', + bearerFormat: 'JWT', + description: 'Enter your Bearer token', + }) + .addTag('Audio Transcription', 'Endpoints for audio and video transcription') + .build(); + + const document = SwaggerModule.createDocument(app, config); + SwaggerModule.setup('api-docs', app, document, { + customSiteTitle: 'Audio Transcription API - Documentation', + customCss: '.swagger-ui .topbar { display: none }', + }); + + const port = process.env.PORT || 1337; + await app.listen(port, '0.0.0.0'); + + console.log(`🎵 Audio Transcription Microservice running on port ${port}`); +} + +bootstrap(); diff --git a/apps/memoro/apps/audio-backend/storage_service_role_policy.sql b/apps/memoro/apps/audio-backend/storage_service_role_policy.sql new file mode 100644 index 000000000..7c3177736 --- /dev/null +++ b/apps/memoro/apps/audio-backend/storage_service_role_policy.sql @@ -0,0 +1,9 @@ +-- Storage policy to allow service role to download audio files for processing +-- This is needed for the audio microservice to access user-uploaded files + +-- Allow service role to SELECT (download) files from user-uploads 
bucket +CREATE POLICY "Service role can download files for processing" +ON storage.objects +FOR SELECT +TO service_role +USING (bucket_id = 'user-uploads'); \ No newline at end of file diff --git a/apps/memoro/apps/audio-backend/tsconfig.json b/apps/memoro/apps/audio-backend/tsconfig.json new file mode 100644 index 000000000..b6acd82cd --- /dev/null +++ b/apps/memoro/apps/audio-backend/tsconfig.json @@ -0,0 +1,23 @@ +{ + "compilerOptions": { + "module": "commonjs", + "declaration": true, + "removeComments": true, + "emitDecoratorMetadata": true, + "experimentalDecorators": true, + "allowSyntheticDefaultImports": true, + "target": "ES2021", + "sourceMap": true, + "outDir": "./dist", + "baseUrl": "./", + "incremental": true, + "skipLibCheck": true, + "strictNullChecks": false, + "noImplicitAny": false, + "strictBindCallApply": false, + "forceConsistentCasingInFileNames": false, + "noFallthroughCasesInSwitch": false + }, + "include": ["src/**/*"], + "exclude": ["node_modules", "dist"] +} diff --git a/apps/memoro/apps/audio-backend/update-env.sh b/apps/memoro/apps/audio-backend/update-env.sh new file mode 100644 index 000000000..f296559f6 --- /dev/null +++ b/apps/memoro/apps/audio-backend/update-env.sh @@ -0,0 +1,11 @@ +#!/bin/bash + +# Update audio-microservice environment variables with correct Supabase credentials +echo "🔧 Updating audio-microservice environment variables..." + +gcloud run services update audio-microservice \ + --region=europe-west3 \ + --set-env-vars=MEMORO_SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co,MEMORO_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTMxODA4MTcsImV4cCI6MjAyODc1NjgxN30.xfBwgNLkgwW0aJkUCIQM9FBwbqWE8K7ynI-zUY0oOr8,MEMORO_SERVICE_URL=https://memoro-service-111768794939.europe-west3.run.app + +echo "✅ Environment variables updated!" 
+echo "🚀 Audio microservice should now be able to access Supabase Storage" \ No newline at end of file diff --git a/apps/memoro/apps/backend/.dockerignore b/apps/memoro/apps/backend/.dockerignore new file mode 100644 index 000000000..fb3381494 --- /dev/null +++ b/apps/memoro/apps/backend/.dockerignore @@ -0,0 +1,39 @@ +# Dependencies +node_modules +npm-debug.log* +yarn-debug.log* +yarn-error.log* + +# Environment files - these should come from Cloud Run secrets +.env +.env.* +env.example + +# Test files +*.spec.ts +*.spec.js +test +jest.config.js + +# Development files +.git +.gitignore +README.md +*.md + +# Build artifacts +dist + +# IDE files +.vscode +.idea +*.swp +*.swo + +# OS files +.DS_Store +Thumbs.db + +# Temporary files +*.tmp +*.temp \ No newline at end of file diff --git a/apps/memoro/apps/backend/.env.backup b/apps/memoro/apps/backend/.env.backup new file mode 100644 index 000000000..d70ec71fb --- /dev/null +++ b/apps/memoro/apps/backend/.env.backup @@ -0,0 +1,24 @@ + +# Server Configuration +PORT=3001 +NODE_ENV=development + +# Service URLs +#MANA_SERVICE_URL=https://mana-core-middleware-111768794939.europe-west3.run.app +MANA_SERVICE_URL=http://localhost:3000 +BATCH_TRANSCRIPTION_SERVICE_URL=http://localhost:1337 +AUDIO_MICROSERVICE_URL=http://localhost:1337 + +# App Configuration +MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8 + +# Memoro Supabase Configuration +MEMORO_SUPABASE_URL=https://npgifbrwhftlbrbaglmi.supabase.co +MEMORO_SUPABASE_ANON_KEY=sb_publishable_HlAZpB4BxXaMcfOCNx6VJA_-64NTxu4 +MEMORO_SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0NTg1MTQxNiwiZXhwIjoyMDYxNDI3NDE2fQ.-6hArOVoEgGwIwdjclLQCTOAu13BFYnp9hPxQks4JPM + +# Also accept SUPABASE_SERVICE_KEY for compatibility with audio microservice 
+SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im5wZ2lmYnJ3aGZ0bGJyYmFnbG1pIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0NTg1MTQxNiwiZXhwIjoyMDYxNDI3NDE2fQ.-6hArOVoEgGwIwdjclLQCTOAu13BFYnp9hPxQks4JPM + +# Mana Core service key for service-to-service auth +MANA_SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InNtZW51ZWx6c2twaG5waGFhZXRwIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0MjA3NzYwMiwiZXhwIjoyMDU3NjUzNjAyfQ.guxCZQNZo4jM8M9kDA2MxDc1o78VSOuCLmVULnDCVnQ diff --git a/apps/memoro/apps/backend/.gitignore b/apps/memoro/apps/backend/.gitignore new file mode 100644 index 000000000..9e3e6c7b9 --- /dev/null +++ b/apps/memoro/apps/backend/.gitignore @@ -0,0 +1,9 @@ +node_modules +dist + +.env + +# Testing +coverage +.nyc_output +*.lcov \ No newline at end of file diff --git a/apps/memoro/apps/backend/BRANDING_INFO.md b/apps/memoro/apps/backend/BRANDING_INFO.md new file mode 100644 index 000000000..26a79a5b7 --- /dev/null +++ b/apps/memoro/apps/backend/BRANDING_INFO.md @@ -0,0 +1,143 @@ +# Memoro Service - Branding Configuration + +**Updated**: 2025-11-05 + +--- + +## Hardcoded Memoro Branding + +The Memoro service has **hardcoded branding** that is automatically applied to all signup confirmation emails. This ensures consistent branding across all Memoro signups without needing environment variables. 
+ +### Branding Details + +**Location**: `src/auth-proxy/auth-proxy.service.ts:113-123` + +```typescript +const memoroBranding: BrandingConfig = { + appName: 'Memoro', + logoUrl: 'memoro-logo.png', + primaryColor: '#F8D62B', + secondaryColor: '#f5c500', + websiteUrl: 'https://memoro.ai', + taglineDe: 'Sprechen statt Tippen', + taglineEn: 'Speak Instead of Type', + copyright: '© 2025 Memoro · Made with 💛 in Germany' +}; +``` + +### Logo + +**File**: `memoro-logo.png` +**Storage URL**: https://smenuelzskphnphaaetp.supabase.co/storage/v1/object/public/satellites-logos/memoro-logo.png + +**Note**: PNG format is required for email compatibility. Gmail and most email clients block SVG images for security reasons. + +The logo is stored in Supabase Storage and referenced by filename only. Mana Core automatically builds the full URL. + +### Redirect URL + +**URL**: https://app.manacore.ai/welcome?appName=memoro + +After email confirmation, users are redirected to the centralized welcome page with Memoro-specific branding (blue theme, voice recording features). + +--- + +## How It Works + +1. **Every signup** automatically includes Memoro branding +2. **No configuration needed** - branding is built into the code +3. **Can be overridden** - If needed, pass `metadata.branding` in signup payload +4. **Merges with custom** - If partial branding provided, merges with defaults + +### Merging Behavior + +```typescript +// Standard signup - uses all Memoro defaults +POST /auth/signup +{ email, password, deviceInfo } +→ Email has full Memoro branding + +// Partial override - merges with defaults +POST /auth/signup +{ + email, password, deviceInfo, + metadata: { branding: { logoUrl: 'special-logo.svg' } } +} +→ Email has special logo, but keeps Memoro colors, taglines, etc. 
+ +// Full override - replaces all branding +POST /auth/signup +{ + email, password, deviceInfo, + metadata: { branding: { /* complete custom branding */ } } +} +→ Email uses completely custom branding +``` + +--- + +## Why Hardcoded? + +✅ **Consistency** - All Memoro signups look the same +✅ **Simplicity** - No environment variables to manage +✅ **Reliability** - Can't accidentally break branding with config errors +✅ **Version Control** - Branding changes are tracked in git + +--- + +## To Change Branding + +If you need to update Memoro branding: + +1. **Edit the file**: `src/auth-proxy/auth-proxy.service.ts` +2. **Update the values**: Lines 113-123 +3. **Rebuild and deploy**: `npm run build && deploy` + +**Example**: +```typescript +// Update copyright year +copyright: '© 2026 Memoro · Made with 💛 in Germany' + +// Update colors +primaryColor: '#FF5733', +secondaryColor: '#C70039', +``` + +--- + +## Testing + +To test branding locally: + +```bash +# Start services +cd mana-core-middleware && npm run start:dev # Port 3003 +cd memoro-service && npm run start:dev # Port 3001 + +# Test signup +curl -X POST 'http://localhost:3001/auth/signup' \ + -H 'Content-Type: application/json' \ + -d '{ + "email": "test@example.com", + "password": "SecurePass123!", + "deviceInfo": { + "deviceId": "test-1", + "deviceName": "Test", + "deviceType": "web" + } + }' + +# Check confirmation email for Memoro branding +``` + +See `LOCAL_SIGNUP_TEST_GUIDE.md` for detailed testing instructions. 
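
The default-merging behavior described under "Merging Behavior" above can be sketched as a shallow spread. This is a hypothetical helper with a trimmed-down `BrandingConfig`; the actual implementation lives in `auth-proxy.service.ts` and may differ:

```typescript
// Illustrative sketch only — not the actual service code.
interface BrandingConfig {
  appName?: string;
  logoUrl?: string;
  primaryColor?: string;
  secondaryColor?: string;
}

// Hardcoded Memoro defaults (subset of the real config shown above).
const memoroDefaults: BrandingConfig = {
  appName: 'Memoro',
  logoUrl: 'memoro-logo.png',
  primaryColor: '#F8D62B',
  secondaryColor: '#f5c500',
};

// Shallow merge: any field supplied in metadata.branding wins,
// everything else falls back to the hardcoded Memoro defaults.
function resolveBranding(override?: BrandingConfig): BrandingConfig {
  return { ...memoroDefaults, ...(override ?? {}) };
}
```

A partial override (e.g. only `logoUrl`) therefore keeps the remaining Memoro defaults, while a complete override replaces every field.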
+ +--- + +## Related Files + +- **Branding Interface**: `src/auth-proxy/interfaces/branding.interface.ts` +- **Auth Service**: `src/auth-proxy/auth-proxy.service.ts` +- **Auth Controller**: `src/auth-proxy/auth-proxy.controller.ts` +- **Documentation**: `SIGNUP_BRANDING.md` +- **Test Guide**: `../LOCAL_SIGNUP_TEST_GUIDE.md` diff --git a/apps/memoro/apps/backend/CLAUDE.md b/apps/memoro/apps/backend/CLAUDE.md new file mode 100644 index 000000000..2dc7af4d1 --- /dev/null +++ b/apps/memoro/apps/backend/CLAUDE.md @@ -0,0 +1,331 @@ +# Memoro Service - Claude Development Notes + +## Enhanced Audio Processing Architecture + +### Direct Storage Upload Strategy +- Audio files are uploaded directly to Supabase Storage from the frontend +- This bypasses Cloud Run's 32MB file size limit +- Memoro service then processes the uploaded file via `POST /memoro/process-uploaded-audio` + +### Dual-Path Transcription System +**Smart Routing based on duration:** +- **Fast Transcription** (<115 minutes): Real-time Azure Speech Service +- **Batch Transcription** (≥115 minutes): Azure Speech Service with enhanced processing + +### Enhanced Audio Format Fallback Strategy +The service implements a robust 4-tier fallback strategy with comprehensive error handling: + +1. **Fast Transcribe (Primary)** - Direct transcription via Azure Speech Service +2. **Format Conversion + Retry** - Auto-detects format errors and converts via audio-microservice +3. **Batch Processing Fallback** - Uses enhanced batch processing if conversion fails +4. 
**Intelligent Error Detection** - Automatically identifies Azure Speech format issues + +### Speaker Diarization Fix (2025-06-09) +**Critical Issue Resolved:** +- **Problem**: Azure Fast Transcription API diarization configuration was incorrect, causing 0/149 phrases to have speaker data +- **Root Cause**: Used incorrect `diarization.speakers.maxCount` instead of `diarization.maxSpeakers` +- **Solution**: Updated to correct Azure API format: `diarization: { enabled: true, maxSpeakers: 10 }` +- **Result**: Now 216/216 phrases have proper speaker data with complete utterances, speakers, and speakerMap +- **Request Size Fix**: Increased body parser limit to 200MB to handle very large transcriptions with extensive speaker data (fixed 413 errors) + +### Batch Transcription Enhancements (NEW) +**Advanced Features:** +- **Enhanced Diarization**: Up to 10 speakers (vs 2 in basic mode) +- **Multi-language Detection**: Automatic identification from user preferences +- **Complete Speaker Data**: Same structure as fast transcription (utterances, speakers, speakerMap) +- **Recovery Tracking**: Stores Azure jobId for webhook failure recovery +- **Language Consistency**: Primary language detection and multi-language support + +**Recovery System Foundation:** +- **Metadata Storage**: Each batch job stores jobId in memo metadata via `/update-batch-metadata` +- **Memo ID Based Lookup**: Direct memo ID lookup for reliable metadata updates (fixed 2025-06-08) +- **Authentication Fixed**: Proper JWT token passing between services (fixed 2025-06-08) +- **Recovery Ready**: Infrastructure for cron-based recovery system +- **Webhook Failure Handling**: Planned automatic recovery for stuck transcriptions + +### Error Detection Patterns +The system detects audio format errors by checking for: +- "audio format", "audio stream could not be decoded" +- "InvalidAudioFormat", "UnprocessableEntity" +- "audio/x-m4a", "422" status codes +- Azure Speech Service specific error messages + +### 
Processing Routes +- `fast_transcribe` - Direct success +- `fast_transcribe_converted` - Success after format conversion +- `batch_transcribe` - Enhanced batch processing for long files (NEW) +- `batch_transcribe_fallback` - Success via batch processing fallback + +## Memo Creation Flow (Updated 2025-06-26) + +### Enhanced Memo Response +The `createMemoFromUploadedFile` method now returns the complete memo object: +```typescript +{ + memo: { /* full memo object */ }, + memoId: string, + audioPath: string +} +``` + +### Recording Time Preservation +- **recordingStartedAt** is stored in memo metadata +- Frontend uses this for accurate timestamp display +- Preserved through all real-time updates + +### Processing State Management +Memo metadata structure: +```typescript +metadata: { + processing: { + transcription: { status: 'pending' | 'processing' | 'completed' | 'failed' }, + headline_and_intro: { status: 'pending' | 'processing' | 'completed' | 'failed' } + }, + recordingStartedAt?: string, // ISO timestamp of actual recording start + location?: any +} +``` + +## Authentication Proxy Architecture (NEW - 2025-01-07) + +### Purpose +The auth-proxy module routes all authentication requests through memoro-service to hide mana-core-middleware from the frontend. This provides a single entry point for all backend services. 
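
The forwarding step can be sketched as a small URL builder that maps an incoming auth route onto the hidden upstream and injects the `appId` query parameter. The upstream URL and function name below are assumptions, not the actual service code:

```typescript
// Hypothetical upstream base URL — the real mana-core-middleware
// address is configured elsewhere and never exposed to the frontend.
const UPSTREAM_URL = 'https://mana-core-middleware.example.internal';

// Build the proxied URL for an incoming auth request,
// automatically adding the appId query parameter.
function buildUpstreamAuthUrl(path: string, appId: string): string {
  const url = new URL(path, UPSTREAM_URL);
  url.searchParams.set('appId', appId);
  return url.toString();
}
```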
+ +### Auth Proxy Endpoints +All endpoints mirror the mana-core-middleware auth endpoints: +- `POST /auth/signin` - Email/password sign-in +- `POST /auth/signup` - User registration +- `POST /auth/google-signin` - Google OAuth sign-in +- `POST /auth/apple-signin` - Apple OAuth sign-in +- `POST /auth/refresh` - Token refresh +- `POST /auth/logout` - User logout +- `POST /auth/forgot-password` - Password reset +- `POST /auth/validate` - Token validation +- `GET /auth/credits` - Get user credits (proxies `/users/credits` from mana-core) +- `GET /auth/devices` - Get user devices + +### Implementation Details +- **Module**: `auth-proxy` module separate from existing auth module +- **No OAuth Redirects**: Social sign-ins use token exchange, not redirects +- **Error Preservation**: Original error responses passed through +- **App ID Injection**: Automatically adds `appId` query parameter +- **Header Forwarding**: Authorization headers passed through for authenticated endpoints + +## Append Transcription Feature (NEW - 2025-01-07) + +### Purpose +Allows adding additional audio recordings to existing memos and transcribing them, storing results in the `source.additional_recordings` array. 
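Conceptually, the append step leaves the memo's original `source.transcript` and speaker data untouched and pushes a new entry onto `source.additional_recordings`. A simplified sketch of that invariant (types trimmed to a few fields; this is not the service's actual code):

```typescript
// Illustrative sketch: append a pending recording entry without mutating
// or overwriting the memo's original transcript.
interface AdditionalRecording {
  path: string;
  status: "completed" | "processing" | "error";
  timestamp: string;
  transcript?: string;
}

interface MemoSource {
  transcript: string;
  additional_recordings?: AdditionalRecording[];
}

function appendRecording(source: MemoSource, path: string): MemoSource {
  const entry: AdditionalRecording = {
    path,
    status: "processing", // transcription fills in transcript later
    timestamp: new Date().toISOString(),
  };
  return {
    ...source,
    additional_recordings: [...(source.additional_recordings ?? []), entry],
  };
}
```

The optional `recordingIndex` parameter documented below would instead update an existing entry in this array rather than appending a new one.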
+ +### Endpoint +`POST /memoro/append-transcription` + +### Request Body +```typescript +{ + memoId: string; // ID of existing memo + filePath: string; // Audio file path in storage + duration: number; // Duration in seconds + recordingIndex?: number; // Optional: index to update specific recording + recordingLanguages?: string[]; + enableDiarization?: boolean; +} +``` + +### Features +- **Smart Routing**: Uses same fast (<115min) vs batch (≥115min) logic as main transcription +- **Credit Management**: Validates and consumes credits like main transcription +- **Access Control**: Validates user owns memo or has access through space +- **Preserves Original**: Keeps original transcript intact, only appends to additional_recordings +- **Speaker Diarization**: Full support for speaker detection in appended audio +- **Error Handling**: Comprehensive fallback strategy matching main transcription flow + +### Additional Recordings Structure +```typescript +source: { + // Original transcript and speaker data preserved + transcript: string; + speakers: {...}; + utterances: [...]; + + // Appended recordings array + additional_recordings: [ + { + path: string; + transcript: string; + languages: string[]; + primary_language: string; + speakers: object; + speakerMap: object; + utterances: array; + status: 'completed' | 'processing' | 'error'; + timestamp: string; + updated_at: string; + } + ] +} +``` + +## Audio Cleanup System (Auto-Delete Old Audio Files) + +### Overview +Automatically deletes audio files older than 30 days for users who have opted in. This helps users manage storage and comply with data retention preferences. + +### How It Works + +1. **GCP Cloud Scheduler** triggers `POST /cleanup/trigger-from-cron` daily at 3:00 AM UTC +2. **memoro-service** calls mana-core-middleware to get users with cleanup enabled +3. For each user, queries Supabase storage for files older than 30 days +4. Deletes files in batches (100 files per batch, 200ms delay between batches) +5. 
Updates memo `source` field to mark audio as deleted +6. Logs results to `audio_cleanup_logs` table + +### Architecture + +``` +┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐ +│ GCP Cloud │ │ memoro-service │ │ mana-core- │ +│ Scheduler │────>│ /cleanup/ │────>│ middleware │ +│ (3:00 AM UTC) │ │ trigger-from-cron │ │ /internal/users/ │ +└─────────────────────┘ └─────────────────────┘ │ audio-cleanup- │ + │ │ enabled │ + │ └─────────────────────┘ + │ + v + ┌─────────────────────┐ + │ Supabase Storage │ + │ (user-uploads) │ + │ - Delete old files │ + │ - Update memos │ + └─────────────────────┘ +``` + +### Enabling Auto-Delete for a User + +Add `autoDeleteAudiosAfter30Days: true` to the user's `app_settings.memoro` object in the `users` table: + +```json +{ + "memoro": { + "autoDeleteAudiosAfter30Days": true, + "dataUsageAcceptance": true, + "emailNewsletterOptIn": false + } +} +``` + +### SQL Query to Enable for a User +```sql +UPDATE users +SET app_settings = jsonb_set( + COALESCE(app_settings, '{}'::jsonb), + '{memoro,autoDeleteAudiosAfter30Days}', + 'true' +) +WHERE id = 'USER_UUID_HERE'; +``` + +### SQL Query to Enable for Multiple Users (by email) +```sql +WITH user_emails AS ( + SELECT unnest(ARRAY[ + 'user1@example.com', + 'user2@example.com', + 'user3@example.com' + ]::text[]) AS email +) +UPDATE users u +SET app_settings = jsonb_set( + jsonb_set( + COALESCE(u.app_settings, '{}'::jsonb), + '{memoro}', + COALESCE(u.app_settings->'memoro', '{}'::jsonb) + ), + '{memoro,autoDeleteAudiosAfter30Days}', + 'true' +) +FROM user_emails +WHERE u.email = user_emails.email; +``` + +### SQL Query to Check Users with Cleanup Enabled +```sql +SELECT id, email, app_settings->'memoro'->'autoDeleteAudiosAfter30Days' +FROM users +WHERE app_settings->'memoro'->>'autoDeleteAudiosAfter30Days' = 'true'; +``` + +### Configuration + +| Setting | Value | Location | +|---------|-------|----------| +| Retention period | 30 days | `audio-cleanup.service.ts` | +| 
Batch size | 100 files | `audio-cleanup.service.ts` | +| Batch delay | 200ms | `audio-cleanup.service.ts` | +| Storage bucket | `user-uploads` | `audio-cleanup.service.ts` | +| Schedule | `0 3 * * *` (daily 3 AM UTC) | GCP Cloud Scheduler | +| Timeout | 1800s (30 min) | GCP Cloud Scheduler | + +### GCP Cloud Scheduler Jobs + +**Dev:** +```bash +gcloud scheduler jobs describe audio-cleanup-daily --project=mana-core-dev --location=europe-west3 +``` + +**Prod:** +```bash +gcloud scheduler jobs describe audio-cleanup-daily --project=mana-core-prod --location=europe-west3 +``` + +### Endpoints + +| Endpoint | Method | Description | +|----------|--------|-------------| +| `/cleanup/trigger-from-cron` | POST | Called by Cloud Scheduler | +| `/cleanup/trigger-manual` | POST | Manual trigger for testing | +| `/cleanup/process-old-audios` | POST | Process specific user IDs | + +All endpoints require `X-Internal-API-Key` header. + +### What Happens When Audio is Deleted + +1. Audio file removed from Supabase Storage +2. Memo `source` field updated: + ```json + { + "audio_path": null, + "audio_deleted": true, + "audio_deleted_at": "2026-01-26T06:47:02.000Z", + "transcript": "...", + "utterances": [...] + } + ``` +3. 
Transcript and other data remain intact + +### Monitoring + +Check cleanup logs: +```sql +SELECT * FROM audio_cleanup_logs ORDER BY started_at DESC LIMIT 10; +``` + +### Files + +- `memoro_middleware/src/cleanup/audio-cleanup.service.ts` - Main cleanup logic +- `memoro_middleware/src/cleanup/audio-cleanup.controller.ts` - HTTP endpoints +- `memoro_middleware/src/cleanup/cleanup.module.ts` - NestJS module +- `mana-core-token-middleware/src/modules/users/controllers/user-cleanup.controller.ts` - User query endpoint +- `mana-core-token-middleware/src/modules/users/services/user-settings.service.ts` - User settings queries + +## Development Commands +- `npm run start:dev` - Development server with hot reload +- `npm run build` - Production build +- `npm run start:prod` - Production server + +## Key Implementation Details +- Audio format conversion handled via audio-microservice +- Credit validation before processing +- Automatic fallback without user intervention +- Detailed logging for debugging each fallback step +- Full memo object returned on creation for immediate frontend sync +- Auth proxy provides single backend entry point for frontend \ No newline at end of file diff --git a/apps/memoro/apps/backend/DEPLOY_MANUAL.md b/apps/memoro/apps/backend/DEPLOY_MANUAL.md new file mode 100644 index 000000000..41092dfde --- /dev/null +++ b/apps/memoro/apps/backend/DEPLOY_MANUAL.md @@ -0,0 +1,208 @@ +# Memoro Service Deployment Manual + +## Prerequisites + +1. **Google Cloud SDK** installed and authenticated: + ```bash + gcloud auth login + gcloud config set project memo-2c4c4 + ``` + +2. **Docker** installed (for local testing) + +3. 
**Access to** `memo-2c4c4` project with Cloud Build and Cloud Run permissions + +## Step-by-Step Deployment Process + +### Step 1: Prepare for Deployment + +Navigate to the memoro-service directory: +```bash +cd memoro-service +``` + +Check current version in `cloudbuild-memoro.yaml`: +```bash +cat cloudbuild-memoro.yaml +``` + +### Step 2: Update Version (Optional) + +If you want to increment the version, update the tag in `cloudbuild-memoro.yaml`: +```yaml +# Change v4.0.0 to v4.1.0 (or next version) +args: ['build', '-t', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.4.4', '.'] +``` + +### Step 3: Build and Push Docker Image + +Run the Cloud Build process: +```bash +gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml . +``` + +**Expected output:** +- ✅ Source uploaded to Cloud Storage +- ✅ Docker build steps execute +- ✅ Image pushed to Artifact Registry +- ✅ Build completes with "SUCCESS" status + +### Step 4: Deploy to Cloud Run + +Use the image version from the build output: +```bash +gcloud run deploy memoro-service \ + --project=memo-2c4c4 \ + --image europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.6 \ + --platform managed \ + --region europe-west3 \ + --allow-unauthenticated \ + --memory 1Gi +``` + +**Deployment will prompt:** +- Service configuration questions (usually accept defaults) +- Traffic allocation (usually 100% to new revision) + +### Step 5: Verify Deployment + +1. **Get service URL:** + ```bash + SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)') + echo "Service URL: $SERVICE_URL" + ``` + +2. **Test health endpoint:** + ```bash + curl $SERVICE_URL/health + ``` + +3. 
**Test with authentication (optional):** + ```bash + curl -H "Authorization: Bearer YOUR_JWT_TOKEN" $SERVICE_URL/memoro/spaces + ``` + +## Environment Variables & Secrets + +The deployment preserves existing environment variables and secrets. Current secrets include: + +- `MEMORO_SUPABASE_URL` +- `MEMORO_SUPABASE_ANON_KEY` +- `MEMORO_SUPABASE_SERVICE_KEY` +- `MANA_SERVICE_URL` +- `BATCH_TRANSCRIPTION_SERVICE_URL` +- `MEMORO_APP_ID` + +To update environment variables: +```bash +gcloud run services update memoro-service \ + --region europe-west3 \ + --set-env-vars="NEW_VAR=value" +``` + +## Troubleshooting + +### Build Issues + +1. **Authentication errors:** + ```bash + gcloud auth login + gcloud auth configure-docker europe-west3-docker.pkg.dev + ``` + +2. **Project access issues:** + ```bash + gcloud config set project memo-2c4c4 + gcloud projects get-iam-policy memo-2c4c4 + ``` + +### Deployment Issues + +1. **Check service logs:** + ```bash + gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10 + ``` + +2. **Check service status:** + ```bash + gcloud run services describe memoro-service --region europe-west3 + ``` + +3. **Memory issues (increase if needed):** + ```bash + gcloud run services update memoro-service \ + --region europe-west3 \ + --memory 1Gi + ``` + +### Runtime Issues + +1. **Test specific endpoints:** + ```bash + # Health check + curl $SERVICE_URL/health + + # Batch upload (requires valid JWT and audio file) + curl -X POST \ + -H "Authorization: Bearer $JWT_TOKEN" \ + -F "file=@test-audio.mp3" \ + $SERVICE_URL/memoro/upload-audio + ``` + +2. **Check environment variables:** + ```bash + gcloud run services describe memoro-service \ + --region europe-west3 \ + --format="export" | grep env + ``` + +## Quick Reference Commands + +```bash +# Build only +gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml . 
+ +# Deploy latest version +gcloud run deploy memoro-service \ + --image europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.4.0 \ + --region europe-west3 + +# Get service URL +gcloud run services describe memoro-service --region europe-west3 --format 'value(status.url)' + +# View logs +gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10 + +# Update environment variable +gcloud run services update memoro-service --region europe-west3 --set-env-vars="VAR=value" +``` + +## File Structure Reference + +``` +memoro-service/ +├── cloudbuild-memoro.yaml # Build configuration +├── Dockerfile # Container definition +├── package.json # Dependencies +├── src/ # Source code +│ ├── memoro/ +│ │ ├── memoro.controller.ts # Updated with batch jobId storage +│ │ └── memoro.service.ts # Updated with batch logic +│ └── ... +└── DEPLOY_MANUAL.md # This file +``` + +## Recent Updates + +**v4.0.0 includes:** +- ✅ Fixed batch upload jobId storage in memo metadata +- ✅ Updated duration threshold to 1h55m for batch processing +- ✅ Added `updateMemoWithJobId` method for webhook callback support +- ✅ Improved error handling for batch transcription flow + +--- + +**Last Updated:** $(date) +**Current Version:** v4.0.0 +**Deployment Region:** europe-west3 +**Project:** memo-2c4c4 \ No newline at end of file diff --git a/apps/memoro/apps/backend/Dockerfile b/apps/memoro/apps/backend/Dockerfile new file mode 100644 index 000000000..cb76e6e58 --- /dev/null +++ b/apps/memoro/apps/backend/Dockerfile @@ -0,0 +1,13 @@ +FROM node:18-alpine + +WORKDIR /app + +COPY package*.json ./ +RUN npm ci + +COPY . . 
+RUN npm run build + +EXPOSE 3001 + +CMD ["npm", "run", "start:prod"] \ No newline at end of file diff --git a/apps/memoro/apps/backend/Dockerfile.debug b/apps/memoro/apps/backend/Dockerfile.debug new file mode 100644 index 000000000..78b6e23b5 --- /dev/null +++ b/apps/memoro/apps/backend/Dockerfile.debug @@ -0,0 +1,21 @@ +FROM node:18-alpine + +WORKDIR /app + +COPY package*.json ./ +RUN npm ci + +COPY . . + +# Debug: Check what files are present before build +RUN ls -la + +# Run build with verbose output +RUN npm run build + +# Debug: Check if dist was created +RUN ls -la dist/ + +EXPOSE 3001 + +CMD ["npm", "run", "start:prod"] \ No newline at end of file diff --git a/apps/memoro/apps/backend/README.md b/apps/memoro/apps/backend/README.md new file mode 100644 index 000000000..3a85353b3 --- /dev/null +++ b/apps/memoro/apps/backend/README.md @@ -0,0 +1,153 @@ +# Memoro Microservice + +This is a standalone microservice for the Memoro component of the Mana Core system. It was extracted from the monolithic mana-core-middleware to enable independent scaling and deployment. 
+ +## Architecture + +This microservice: +- Handles all Memoro-specific functionality +- Communicates with Auth service for authentication/authorization +- Communicates with Spaces service for space management +- Connects directly to the Memoro Supabase instance +- Implements mana cost validation for AI operations + +## Mana Cost System + +The service implements a backend-driven credit validation system: + +- **Transcription**: 120 credits per hour / 2 credits per minute (base cost: 10 credits minimum) +- **Question Processing**: 5 mana per question asked to memos +- **Memo Combination**: 5 mana per memo when combining multiple memos +- **Headline Generation**: 10 credits for title/summary generation +- **Memory Creation**: 10 credits for AI-generated memories +- **Blueprint Processing**: 5 credits for blueprint application +- **Memo Sharing**: 1 credit for sharing operations +- **Space Operations**: 2 credits for space-related operations +- **Early Validation**: Credits are checked before expensive AI operations +- **Real-time Updates**: Frontend mana counter updates immediately after operations + +All AI processing endpoints validate sufficient mana credits before processing and consume credits upon successful completion. 
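The transcription rate above (2 credits per minute, billed in whole minutes, with a 10-credit minimum — i.e. 120 credits per hour) can be expressed as a small pure function. This mirrors only the documented pricing; the service's actual calculation may differ in detail:

```typescript
// Documented transcription pricing: 2 credits per minute, rounded up to
// whole minutes, with a 10-credit minimum charge.
const CREDITS_PER_MINUTE = 2;
const MINIMUM_COST = 10;

function transcriptionCost(durationSeconds: number): number {
  const minutes = Math.ceil(durationSeconds / 60);
  return Math.max(minutes * CREDITS_PER_MINUTE, MINIMUM_COST);
}
```

So a one-hour recording (3600 s) costs 120 credits, while a 90-second memo rounds up to 2 minutes but still bills the 10-credit minimum.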
+ +## API Endpoints + +### Core Memoro Endpoints +- `GET /memoro/spaces` - Get all Memoro spaces for the authenticated user +- `POST /memoro/spaces` - Create a new Memoro space +- `GET /memoro/spaces/:id` - Get details for a specific Memoro space +- `DELETE /memoro/spaces/:id` - Delete a Memoro space +- `POST /memoro/link-memo` - Link a memo to a space +- `POST /memoro/unlink-memo` - Unlink a memo from a space +- `GET /memoro/spaces/:id/memos` - Get all memos for a specific space +- `POST /memoro/spaces/:id/leave` - Leave a space + +### Space Invitation Management +- `GET /memoro/spaces/:id/invites` - Get space invitations +- `POST /memoro/spaces/:id/invite` - Invite user to space +- `POST /memoro/spaces/invites/:inviteId/resend` - Resend invitation +- `DELETE /memoro/spaces/invites/:inviteId` - Cancel invitation +- `GET /memoro/invites/pending` - Get user's pending invites +- `POST /memoro/spaces/invites/accept` - Accept invitation +- `POST /memoro/spaces/invites/decline` - Decline invitation + +### Audio Processing +- `POST /memoro/process-uploaded-audio` - Process uploaded audio with intelligent fallback strategy and credit validation +- `POST /memoro/update-batch-metadata` - Update batch transcription metadata for recovery tracking (improved with memo ID lookup, 2025-06-08) +- `POST /memoro/retry-transcription` - Retry failed transcription +- `POST /memoro/retry-headline` - Retry failed headline generation + +#### Enhanced Audio Processing System +The service implements a sophisticated dual-path transcription system with comprehensive fallback strategies: + +**Transcription Paths:** +1. **Fast Transcription** (<115 minutes) - Real-time processing via Supabase Edge Function +2. **Batch Transcription** (≥115 minutes) - Azure Speech Service batch processing with webhook callbacks + +**Enhanced Fallback Strategy:** +1. **Fast Transcribe** - Attempts fast transcription via edge function +2. 
**Format Conversion + Retry** - If audio format error detected, converts file via audio-microservice and retries +3. **Batch Processing Fallback** - Falls back to batch processing if conversion fails +4. **Intelligent Error Detection** - Automatically detects Azure Speech Service format compatibility issues + +**Batch Transcription Enhancements:** +- **Advanced Diarization**: Supports up to 10 speakers (vs 2 in basic mode) +- **Multi-language Detection**: Automatic language identification from user preferences +- **Complete Data Consistency**: Same speaker data structure as fast transcription +- **Recovery Tracking**: Stores Azure jobId for webhook failure recovery +- **Graceful Degradation**: Falls back to text-only if speaker processing fails + +**Supported Processing Routes:** +- `fast_transcribe` - Direct fast transcription success +- `fast_transcribe_converted` - Success after format conversion +- `batch_transcribe` - Regular batch processing for long files +- `batch_transcribe_fallback` - Success via batch processing fallback + +**Data Structure Consistency:** +Both fast and batch transcription now save identical data: +- `transcript` - Transcribed text +- `primary_language` - Detected primary language +- `languages` - All detected languages +- `utterances` - Speaker segments with timestamps +- `speakers` - Speaker labels +- `speakerMap` - Speaker-grouped utterances + +### AI Processing Endpoints (with Credit Validation) +- `POST /memoro/question-memo` - Ask questions about memos (5 mana cost) +- `POST /memoro/combine-memos` - Combine multiple memos with AI processing (5 mana per memo) + +### Credit Management +- `POST /memoro/credits/check-transcription` - Check credits before transcription +- `POST /memoro/credits/consume-transcription` - Consume transcription credits +- `POST /memoro/credits/consume-operation` - Consume operation credits + +### User Settings Management +- `GET /settings` - Get all user settings +- `GET /settings/memoro` - Get 
Memoro-specific settings
+- `PATCH /settings/memoro` - Update Memoro settings
+- `PATCH /settings/memoro/data-usage` - Update data usage acceptance
+- `PATCH /settings/memoro/email-newsletter` - Update email newsletter opt-in
+- `PATCH /settings/profile` - Update user profile (firstName, lastName, avatarUrl)
+
+## Environment Variables
+
+Required environment variables:
+
+```env
+# Server Configuration
+PORT=3001
+
+# Service URLs
+MANA_SERVICE_URL=http://localhost:3000
+AUDIO_MICROSERVICE_URL=https://audio-microservice-111768794939.europe-west3.run.app
+
+# Supabase Configuration
+MEMORO_SUPABASE_URL=https://your-memoro-project.supabase.co
+MEMORO_SUPABASE_ANON_KEY=your-memoro-anon-key
+MEMORO_SUPABASE_SERVICE_KEY=your-memoro-service-key
+
+# App Configuration
+MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8
+```
+
+## Development
+
+```bash
+# Install dependencies
+npm install
+
+# Run in development mode
+npm run start:dev
+
+# Build for production
+npm run build
+
+# Run in production mode
+npm run start:prod
+```
+
+## Deployment
+
+For Cloud Run deployment instructions, see `cloud-run-deploy.md`. \ No newline at end of file diff --git a/apps/memoro/apps/backend/SIGNUP_BRANDING.md b/apps/memoro/apps/backend/SIGNUP_BRANDING.md new file mode 100644 index 000000000..2a1168139 --- /dev/null +++ b/apps/memoro/apps/backend/SIGNUP_BRANDING.md @@ -0,0 +1,145 @@ +# Memoro Service - Signup Branding Support
+
+**Updated**: 2025-11-05
+
+---
+
+## Overview
+
+The signup endpoint automatically applies **Memoro branding** to all confirmation emails. 
The branding is hardcoded in the service and includes: + +- **App Name**: Memoro +- **Logo**: memoro-logo.png +- **Primary Color**: #F8D62B (Yellow) +- **Secondary Color**: #f5c500 (Golden Yellow) +- **Tagline DE**: "Sprechen statt Tippen" +- **Tagline EN**: "Speak Instead of Type" +- **Website**: https://memoro.ai +- **Redirect URL**: https://app.manacore.ai/welcome?appName=memoro +- **Copyright**: "© 2025 Memoro · Made with 💛 in Germany" + +You can optionally override specific branding fields per signup if needed. + +## Simple Usage + +### Standard Signup (Automatic Memoro Branding) + +```bash +POST /auth/signup +{ + "email": "user@memoro.ai", + "password": "SecurePass123!", + "deviceInfo": { + "deviceId": "web-123", + "deviceName": "Chrome", + "deviceType": "web" + } +} +``` + +**Result**: Email automatically uses Memoro branding (yellow colors, Memoro logo, German/English taglines). + +--- + +### Custom Branding (Optional) + +```bash +POST /auth/signup +{ + "email": "user@example.com", + "password": "SecurePass123!", + "deviceInfo": { + "deviceId": "web-123", + "deviceName": "Chrome", + "deviceType": "web" + }, + "metadata": { + "branding": { + "logoUrl": "custom-logo.svg", + "primaryColor": "#FF5733" + } + } +} +``` + +**Result**: Email uses custom logo and color, other fields use Memoro defaults. 
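The override semantics are effectively a shallow merge performed on the Mana Core side: fields present in `metadata.branding` win, and anything missing falls back to the Memoro defaults. A minimal sketch under that assumption (field list trimmed; defaults copied from the values listed above):

```typescript
// Illustrative shallow merge: caller-supplied branding fields override the
// hardcoded Memoro defaults; omitted fields fall back to the defaults.
interface BrandingConfig {
  appName?: string;
  logoUrl?: string;
  primaryColor?: string;
  secondaryColor?: string;
}

const MEMORO_DEFAULTS: BrandingConfig = {
  appName: "Memoro",
  logoUrl: "memoro-logo.png",
  primaryColor: "#F8D62B",
  secondaryColor: "#f5c500",
};

function resolveBranding(override: BrandingConfig = {}): BrandingConfig {
  return { ...MEMORO_DEFAULTS, ...override };
}
```

With the partial override from the example above (`logoUrl` and `primaryColor` only), the resolved config keeps the Memoro app name and secondary color while swapping in the custom logo and primary color.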
+ +--- + +### Full Custom Branding + +```bash +POST /auth/signup +{ + "email": "user@example.com", + "password": "SecurePass123!", + "deviceInfo": {...}, + "metadata": { + "branding": { + "appName": "Custom App", + "logoUrl": "custom-logo.svg", + "primaryColor": "#2C3E50", + "secondaryColor": "#34495E", + "websiteUrl": "https://custom-app.com", + "taglineDe": "Ihre Lösung", + "taglineEn": "Your Solution", + "copyright": "© 2025 Custom App" + } + } +} +``` + +--- + +## Branding Fields + +All fields are **optional**: + +| Field | Type | Description | Example | +|-------|------|-------------|---------| +| `appName` | string | App display name | `"My App"` | +| `logoUrl` | string | Logo filename (from Supabase Storage) | `"app-logo.png"` | +| `primaryColor` | string | Primary color (hex) | `"#F8D62B"` | +| `secondaryColor` | string | Secondary color (hex) | `"#f5c500"` | +| `websiteUrl` | string | Website URL | `"https://app.com"` | +| `taglineDe` | string | German tagline | `"Sprechen statt Tippen"` | +| `taglineEn` | string | English tagline | `"Speak Instead of Type"` | +| `copyright` | string | Footer text | `"© 2025 My App"` | + +--- + +## TypeScript Types + +```typescript +import { BrandingConfig } from './auth-proxy/interfaces/branding.interface'; + +// Example +const branding: BrandingConfig = { + logoUrl: 'custom-logo.svg', + primaryColor: '#FF5733' +}; + +await authProxy.signup({ + email: 'user@example.com', + password: 'pass123', + deviceInfo: {...}, + metadata: { branding } +}); +``` + +--- + +## How It Works + +1. **No metadata** → Mana Core uses default branding for your app +2. **With metadata.branding** → Mana Core merges your branding with defaults +3. **Any missing fields** → Filled in by Mana Core defaults + +--- + +## That's It! 
+ +- ✅ Backward compatible - existing signups work unchanged +- ✅ Simple - just add `metadata.branding` when you want custom branding +- ✅ Flexible - override any or all branding fields +- ✅ No new endpoints - just use `POST /auth/signup` diff --git a/apps/memoro/apps/backend/TECHNICAL_DOCUMENTATION.md b/apps/memoro/apps/backend/TECHNICAL_DOCUMENTATION.md new file mode 100644 index 000000000..cdff4031b --- /dev/null +++ b/apps/memoro/apps/backend/TECHNICAL_DOCUMENTATION.md @@ -0,0 +1,1321 @@ +# Memoro Service - Detailed Technical Documentation + +## Table of Contents + +1. [Project Overview](#project-overview) +2. [Architecture](#architecture) +3. [Installation & Setup](#installation--setup) +4. [Module Deep Dive](#module-deep-dive) +5. [API Reference](#api-reference) +6. [Data Models](#data-models) +7. [Service Integrations](#service-integrations) +8. [Error Handling](#error-handling) +9. [Testing](#testing) +10. [Performance](#performance) +11. [Security](#security) +12. [Deployment](#deployment) +13. [Monitoring & Logging](#monitoring--logging) +14. [Troubleshooting](#troubleshooting) + +## Project Overview + +### Purpose +Memoro Service is a specialized microservice that handles all Memoro-specific functionality in the Mana ecosystem. It serves as the primary backend for the Memoro mobile and web applications, orchestrating audio processing, AI operations, and collaborative features. 
+ +### Tech Stack +- **Framework**: NestJS 10.x +- **Language**: TypeScript 5.x +- **Runtime**: Node.js 18.x +- **Database**: Supabase (PostgreSQL) +- **Package Manager**: npm + +### Key Dependencies +```json +{ + "@nestjs/common": "^10.0.0", + "@nestjs/core": "^10.0.0", + "@nestjs/platform-express": "^10.0.0", + "@supabase/supabase-js": "^2.39.0", + "axios": "^1.6.0", + "class-validator": "^0.14.0", + "rxjs": "^7.8.1" +} +``` + +## Architecture + +### Service Architecture Diagram +``` +┌─────────────────────────────────────────────────┐ +│ Memoro Service │ +├─────────────────────────────────────────────────┤ +│ ┌───────────┐ ┌───────────┐ ┌───────────┐ │ +│ │Auth Proxy │ │ Credits │ │ Memoro │ │ +│ │ Module │ │ Module │ │ Module │ │ +│ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘ │ +│ │ │ │ │ +│ ┌─────▼──────────────▼──────────────▼─────┐ │ +│ │ Common Services Layer │ │ +│ │ - Guards - Decorators - Filters │ │ +│ └───────────────────┬──────────────────────┘ │ +│ │ │ +│ ┌───────────────────▼──────────────────────┐ │ +│ │ External Service Clients │ │ +│ │ - Mana Core - Audio Micro - Supabase │ │ +│ └──────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────┘ +``` + +### Request Flow +```typescript +// Typical request flow through the service +Request → Guards → Interceptors → Controller → Service → Repository → External Services + ↓ +Response ← Filters ← Interceptors ← Response +``` + +## Installation & Setup + +### Prerequisites +```bash +# Required software +- Node.js >= 18.0.0 +- npm >= 9.0.0 +- Docker (optional, for containerized deployment) +``` + +### Local Development Setup +```bash +# 1. Clone repository +git clone +cd memoro-service + +# 2. Install dependencies +npm install + +# 3. Configure environment +cp .env.example .env +# Edit .env with your configuration + +# 4. Run database migrations (if any) +npm run migrate + +# 5. 
Start development server +npm run start:dev +``` + +### Environment Configuration +```env +# Server Configuration +PORT=3001 +NODE_ENV=development + +# Service URLs +MANA_SERVICE_URL=http://localhost:3000 +AUDIO_MICROSERVICE_URL=https://audio-microservice.run.app + +# Supabase Configuration +MEMORO_SUPABASE_URL=https://your-project.supabase.co +MEMORO_SUPABASE_ANON_KEY=your-anon-key +MEMORO_SUPABASE_SERVICE_KEY=your-service-key + +# Application Configuration +MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8 + +# Feature Flags +ENABLE_BATCH_TRANSCRIPTION=true +ENABLE_SPEAKER_DIARIZATION=true +MAX_AUDIO_DURATION_MINUTES=180 + +# Logging +LOG_LEVEL=debug +LOG_FORMAT=json +``` + +## Module Deep Dive + +### 1. Auth Proxy Module + +#### Purpose +Routes authentication requests through Memoro Service to hide backend complexity. + +#### Structure +``` +auth-proxy/ +├── auth-proxy.controller.ts +├── auth-proxy.service.ts +├── auth-proxy.module.ts +└── dto/ + ├── signin.dto.ts + ├── signup.dto.ts + └── refresh.dto.ts +``` + +#### Implementation Details +```typescript +@Controller('auth') +export class AuthProxyController { + constructor( + private readonly authProxyService: AuthProxyService, + ) {} + + @Post('signin') + async signIn(@Body() signInDto: SignInDto) { + // Adds appId automatically + const appId = process.env.MEMORO_APP_ID; + return this.authProxyService.forwardRequest( + '/auth/signin', + { ...signInDto }, + { appId } + ); + } + + @Post('refresh') + @UseGuards(OptionalAuthGuard) + async refresh( + @Body() refreshDto: RefreshDto, + @Headers('authorization') auth?: string + ) { + return this.authProxyService.forwardRequest( + '/auth/refresh', + refreshDto, + { appId: process.env.MEMORO_APP_ID }, + auth + ); + } +} +``` + +### 2. 
Credits Module + +#### Credit Calculation Service +```typescript +@Injectable() +export class CreditConsumptionService { + private readonly PRICING = { + TRANSCRIPTION_PER_MINUTE: 2, + TRANSCRIPTION_MINIMUM: 10, + QUESTION_PROCESSING: 5, + MEMO_COMBINATION: 5, + HEADLINE_GENERATION: 10, + MEMORY_CREATION: 10, + BLUEPRINT_PROCESSING: 5, + }; + + calculateTranscriptionCost(durationSeconds: number): number { + const minutes = Math.ceil(durationSeconds / 60); + const cost = minutes * this.PRICING.TRANSCRIPTION_PER_MINUTE; + return Math.max(cost, this.PRICING.TRANSCRIPTION_MINIMUM); + } + + async validateAndConsume( + userId: string, + amount: number, + operation: string + ): Promise { + // Check balance + const balance = await this.creditClient.getBalance(userId); + if (balance < amount) { + throw new InsufficientCreditsError(amount, balance, operation); + } + + // Consume credits + await this.creditClient.consumeCredits(userId, amount, operation); + } +} +``` + +### 3. Memoro Module + +#### Audio Processing Service +```typescript +@Injectable() +export class MemoroService { + async processUploadedAudio( + userId: string, + filePath: string, + duration: number, + options: ProcessingOptions + ): Promise { + // 1. Validate credits + const cost = this.creditService.calculateTranscriptionCost(duration); + await this.creditService.validateCredits(userId, cost); + + // 2. Create memo record + const memo = await this.createMemoFromUploadedFile( + userId, + filePath, + options.metadata + ); + + // 3. 
Route to appropriate processing path + if (duration < 115 * 60) { // Less than 115 minutes + return this.processFastTranscription(memo, filePath, options); + } else { + return this.processBatchTranscription(memo, filePath, options); + } + } + + private async processFastTranscription( + memo: Memo, + filePath: string, + options: ProcessingOptions + ): Promise { + try { + // Attempt fast transcription + const result = await this.edgeFunctionClient.transcribe({ + audioPath: filePath, + languages: options.languages, + enableDiarization: true, + maxSpeakers: 2 + }); + + // Update memo with results + await this.updateMemoWithTranscript(memo.id, result); + + return { memo, status: 'completed', route: 'fast_transcribe' }; + } catch (error) { + if (this.isFormatError(error)) { + // Attempt format conversion + return this.processWithConversion(memo, filePath, options); + } + throw error; + } + } + + private async processBatchTranscription( + memo: Memo, + filePath: string, + options: ProcessingOptions + ): Promise { + // Submit to batch processing + const jobId = await this.audioMicroservice.submitBatchJob({ + audioPath: filePath, + memoId: memo.id, + languages: options.languages, + enableDiarization: true, + maxSpeakers: 10 + }); + + // Store job ID for recovery + await this.updateMemoMetadata(memo.id, { + processing: { + transcription: { + status: 'processing', + jobId: jobId, + startedAt: new Date().toISOString() + } + } + }); + + return { memo, status: 'processing', route: 'batch_transcribe' }; + } +} +``` + +## API Reference + +### Authentication Endpoints + +#### POST /auth/signin +```typescript +// Request +{ + "email": "user@example.com", + "password": "secure-password" +} + +// Response +{ + "manaToken": "eyJhbGc...", + "appToken": "eyJhbGc...", + "refreshToken": "refresh_token_here", + "user": { + "id": "user-uuid", + "email": "user@example.com", + "credits": 1500 + } +} +``` + +#### POST /auth/refresh +```typescript +// Request +{ + "refreshToken": 
"current_refresh_token" +} + +// Response +{ + "appToken": "new_app_token", + "refreshToken": "new_refresh_token" +} +``` + +### Audio Processing Endpoints + +#### POST /memoro/process-uploaded-audio +```typescript +// Request +{ + "filePath": "audio/2024/01/recording.m4a", + "duration": 3600, // seconds + "metadata": { + "recordingStartedAt": "2024-01-15T10:00:00Z", + "location": { "lat": 52.52, "lng": 13.405 } + }, + "recordingLanguages": ["en-US", "de-DE"], + "enableDiarization": true +} + +// Response +{ + "memo": { + "id": "memo-uuid", + "title": "Processing...", + "source": { + "audio_path": "audio/2024/01/recording.m4a", + "duration": 3600 + }, + "metadata": { + "processing": { + "transcription": { "status": "processing" } + }, + "recordingStartedAt": "2024-01-15T10:00:00Z" + } + }, + "processingRoute": "batch_transcribe" +} +``` + +#### POST /memoro/append-transcription +```typescript +// Request +{ + "memoId": "existing-memo-id", + "filePath": "audio/additional.m4a", + "duration": 300, + "recordingIndex": 0 // Optional: update specific recording +} + +// Response +{ + "success": true, + "additionalRecording": { + "index": 0, + "path": "audio/additional.m4a", + "status": "processing" + } +} +``` + +## Data Models + +### Memo Model +```typescript +interface Memo { + id: string; + user_id: string; + title?: string; + source: { + audio_path: string; + duration: number; + transcript?: string; + primary_language?: string; + languages?: string[]; + utterances?: Utterance[]; + speakers?: SpeakerMap; + speakerMap?: GroupedUtterances; + additional_recordings?: AdditionalRecording[]; + }; + metadata: { + processing?: ProcessingStatus; + recordingStartedAt?: string; + location?: Location; + stats?: MemoStats; + }; + created_at: string; + updated_at: string; +} + +interface ProcessingStatus { + transcription?: { + status: 'pending' | 'processing' | 'completed' | 'failed'; + error?: string; + jobId?: string; + attempts?: number; + }; + headline_and_intro?: { + status: 
'pending' | 'processing' | 'completed' | 'failed';
+    error?: string;
+  };
+  blueprint?: {
+    status: 'pending' | 'processing' | 'completed' | 'failed';
+    blueprintId?: string;
+  };
+}
+```
+
+### Speaker Data Models
+```typescript
+interface Utterance {
+  speaker: string;
+  text: string;
+  offset: number;
+  duration: number;
+  words?: Word[];
+}
+
+interface Word {
+  text: string;
+  offset: number;
+  duration: number;
+  confidence?: number;
+}
+
+interface SpeakerMap {
+  [speakerId: string]: {
+    name: string;
+    label?: string;
+  };
+}
+
+interface GroupedUtterances {
+  [speakerId: string]: Utterance[];
+}
+```
+
+## Service Integrations
+
+### Mana Core Middleware Integration
+```typescript
+class ManaClientService {
+  private readonly client: AxiosInstance;
+
+  constructor() {
+    this.client = axios.create({
+      baseURL: process.env.MANA_SERVICE_URL,
+      timeout: 30000,
+      headers: {
+        'Content-Type': 'application/json',
+      }
+    });
+  }
+
+  async getUserCredits(token: string): Promise<number> {
+    const response = await this.client.get('/users/credits', {
+      headers: { Authorization: `Bearer ${token}` }
+    });
+    return response.data;
+  }
+
+  async consumeCredits(
+    token: string,
+    amount: number,
+    description: string
+  ): Promise<void> {
+    await this.client.post(
+      '/users/credits/consume',
+      { amount, description },
+      { headers: { Authorization: `Bearer ${token}` }}
+    );
+  }
+}
+```
+
+### Audio Microservice Integration
+```typescript
+class AudioMicroserviceClient {
+  async submitBatchTranscription(params: BatchParams): Promise<string> {
+    const response = await axios.post(
+      `${this.baseUrl}/audio/transcribe-from-storage`,
+      {
+        filePath: params.filePath,
+        memoId: params.memoId,
+        languages: params.languages,
+        diarization: {
+          enabled: true,
+          maxSpeakers: params.maxSpeakers || 10
+        }
+      }
+    );
+    return response.data.jobId;
+  }
+
+  async convertAndTranscribe(params: ConvertParams): Promise<any> {
+    const response = await axios.post(
+      `${this.baseUrl}/audio/convert-and-transcribe-from-storage`,
+      params
+    );
+    return response.data;
+  }
+}
+```
+
+### Supabase Integration
+```typescript
+class SupabaseService {
+  private readonly client: SupabaseClient;
+
+  constructor() {
+    this.client = createClient(
+      process.env.MEMORO_SUPABASE_URL,
+      process.env.MEMORO_SUPABASE_SERVICE_KEY
+    );
+  }
+
+  async createMemo(data: Partial<Memo>): Promise<Memo> {
+    const { data: memo, error } = await this.client
+      .from('memos')
+      .insert(data)
+      .select()
+      .single();
+
+    if (error) throw error;
+    return memo;
+  }
+
+  async updateMemo(id: string, updates: Partial<Memo>): Promise<Memo> {
+    const { data: memo, error } = await this.client
+      .from('memos')
+      .update(updates)
+      .eq('id', id)
+      .select()
+      .single();
+
+    if (error) throw error;
+    return memo;
+  }
+
+  subscribeToMemoUpdates(memoId: string, callback: (payload: any) => void) {
+    return this.client
+      .channel(`memo:${memoId}`)
+      .on('postgres_changes',
+        { event: 'UPDATE', schema: 'public', table: 'memos', filter: `id=eq.${memoId}` },
+        callback
+      )
+      .subscribe();
+  }
+}
+```
+
+## Error Handling
+
+### Custom Error Classes
+```typescript
+export class InsufficientCreditsError extends HttpException {
+  constructor(
+    public readonly required: number,
+    public readonly available: number,
+    public readonly operation: string
+  ) {
+    super(
+      {
+        statusCode: HttpStatus.BAD_REQUEST,
+        error: 'InsufficientCredits',
+        message: `Insufficient credits for ${operation}`,
+        details: {
+          required,
+          available,
+          operation
+        }
+      },
+      HttpStatus.BAD_REQUEST
+    );
+  }
+}
+
+export class AudioFormatError extends HttpException {
+  constructor(filePath: string, format: string) {
+    super(
+      {
+        statusCode: HttpStatus.UNPROCESSABLE_ENTITY,
+        error: 'AudioFormatError',
+        message: 'Unsupported audio format',
+        details: { filePath, format }
+      },
+      HttpStatus.UNPROCESSABLE_ENTITY
+    );
+  }
+}
+```
+
+### Global Exception Filter
+```typescript
+@Catch()
+export class GlobalExceptionFilter implements ExceptionFilter {
+  catch(exception: any, host: ArgumentsHost) {
const ctx = host.switchToHttp();
+    const response = ctx.getResponse();
+    const request = ctx.getRequest();
+
+    const status = exception instanceof HttpException
+      ? exception.getStatus()
+      : HttpStatus.INTERNAL_SERVER_ERROR;
+
+    const errorResponse = {
+      statusCode: status,
+      timestamp: new Date().toISOString(),
+      path: request.url,
+      method: request.method,
+      message: exception.message || 'Internal server error',
+      ...(exception instanceof HttpException ? exception.getResponse() as object : {})
+    };
+
+    // Log error
+    Logger.error(
+      `${request.method} ${request.url}`,
+      JSON.stringify(errorResponse),
+      'GlobalExceptionFilter'
+    );
+
+    response.status(status).json(errorResponse);
+  }
+}
+```
+
+## Testing
+
+### Unit Testing
+```typescript
+// memoro.service.spec.ts
+describe('MemoroService', () => {
+  let service: MemoroService;
+  let creditService: jest.Mocked<CreditConsumptionService>;
+  let supabaseService: jest.Mocked<SupabaseService>;
+
+  beforeEach(async () => {
+    const module = await Test.createTestingModule({
+      providers: [
+        MemoroService,
+        {
+          provide: CreditConsumptionService,
+          useValue: createMock<CreditConsumptionService>()
+        },
+        {
+          provide: SupabaseService,
+          useValue: createMock<SupabaseService>()
+        }
+      ]
+    }).compile();
+
+    service = module.get(MemoroService);
+    creditService = module.get(CreditConsumptionService);
+    supabaseService = module.get(SupabaseService);
+  });
+
+  describe('processUploadedAudio', () => {
+    it('should route short audio to fast transcription', async () => {
+      const duration = 60 * 30; // 30 minutes
+      creditService.calculateTranscriptionCost.mockReturnValue(60);
+      creditService.validateCredits.mockResolvedValue(undefined);
+
+      const result = await service.processUploadedAudio(
+        'user-id',
+        'audio/file.m4a',
+        duration,
+        { languages: ['en-US'] }
+      );
+
+      expect(result.processingRoute).toBe('fast_transcribe');
+    });
+
+    it('should route long audio to batch processing', async () => {
+      const duration = 60 * 120; // 120 minutes
+
+      const result = await service.processUploadedAudio(
+        'user-id',
+        'audio/file.m4a',
+        duration,
+        { languages: ['en-US'] }
+      );
+
+      expect(result.processingRoute).toBe('batch_transcribe');
+    });
+  });
+});
+```
+
+### Integration Testing
+```typescript
+// test/integration/audio-processing.e2e-spec.ts
+describe('Audio Processing E2E', () => {
+  let app: INestApplication;
+
+  beforeAll(async () => {
+    const moduleFixture = await Test.createTestingModule({
+      imports: [AppModule],
+    }).compile();
+
+    app = moduleFixture.createNestApplication();
+    await app.init();
+  });
+
+  it('POST /memoro/process-uploaded-audio', async () => {
+    const response = await request(app.getHttpServer())
+      .post('/memoro/process-uploaded-audio')
+      .set('Authorization', 'Bearer valid-token')
+      .send({
+        filePath: 'test/audio.m4a',
+        duration: 300,
+        recordingLanguages: ['en-US']
+      })
+      .expect(201);
+
+    expect(response.body).toHaveProperty('memo');
+    expect(response.body.memo).toHaveProperty('id');
+    expect(response.body.processingRoute).toBeDefined();
+  });
+});
+```
+
+### Load Testing
+```javascript
+// k6-load-test.js
+import http from 'k6/http';
+import { check } from 'k6';
+
+export let options = {
+  stages: [
+    { duration: '2m', target: 100 }, // Ramp up
+    { duration: '5m', target: 100 }, // Stay at 100 users
+    { duration: '2m', target: 0 },   // Ramp down
+  ],
+};
+
+export default function() {
+  const params = {
+    headers: {
+      'Content-Type': 'application/json',
+      'Authorization': `Bearer ${__ENV.TEST_TOKEN}`
+    },
+  };
+
+  const payload = JSON.stringify({
+    filePath: 'test/audio.m4a',
+    duration: 300,
+  });
+
+  const res = http.post(
+    'http://localhost:3001/memoro/process-uploaded-audio',
+    payload,
+    params
+  );
+
+  check(res, {
+    'status is 201': (r) => r.status === 201,
+    'response time < 500ms': (r) => r.timings.duration < 500,
+  });
+}
+```
+
+## Performance
+
+### Optimization Strategies
+
+#### 1.
Connection Pooling
+```typescript
+// Database connection pooling
+const supabaseClient = createClient(url, key, {
+  db: {
+    poolSize: 10,
+    connectionTimeoutMillis: 5000,
+    idleTimeoutMillis: 30000,
+  }
+});
+```
+
+#### 2. Caching
+```typescript
+@Injectable()
+export class CacheService {
+  private cache = new Map<string, { value: unknown; expiry: number }>();
+
+  async get<T>(key: string, factory: () => Promise<T>, ttl = 30000): Promise<T> {
+    const cached = this.cache.get(key);
+
+    if (cached && cached.expiry > Date.now()) {
+      return cached.value as T;
+    }
+
+    const value = await factory();
+    this.cache.set(key, {
+      value,
+      expiry: Date.now() + ttl
+    });
+
+    return value;
+  }
+}
+```
+
+#### 3. Request Batching
+```typescript
+class BatchProcessor {
+  private queue: Request[] = [];
+  private timer: NodeJS.Timeout | null = null;
+
+  add(request: Request): Promise<Response> {
+    return new Promise((resolve, reject) => {
+      this.queue.push({ ...request, resolve, reject });
+
+      if (!this.timer) {
+        this.timer = setTimeout(() => this.flush(), 100);
+      }
+    });
+  }
+
+  private async flush() {
+    const batch = this.queue.splice(0);
+    const results = await this.processBatch(batch);
+
+    batch.forEach((req, i) => {
+      req.resolve(results[i]);
+    });
+
+    this.timer = null;
+  }
+}
+```
+
+### Performance Metrics
+
+| Metric | Target | Current | Notes |
+|--------|--------|---------|-------|
+| API Response Time (p50) | < 100ms | 85ms | ✅ |
+| API Response Time (p95) | < 500ms | 420ms | ✅ |
+| API Response Time (p99) | < 1000ms | 890ms | ✅ |
+| Transcription Start Time | < 5s | 3.2s | ✅ |
+| Credit Check Time | < 50ms | 35ms | ✅ |
+| Database Query Time | < 100ms | 75ms | ✅ |
+| Memory Usage | < 1GB | 650MB | ✅ |
+| CPU Usage (avg) | < 70% | 45% | ✅ |
+
+## Security
+
+### Authentication & Authorization
+
+#### JWT Validation
+```typescript
+@Injectable()
+export class JwtAuthGuard implements CanActivate {
+  async canActivate(context: ExecutionContext): Promise<boolean> {
+    const request = context.switchToHttp().getRequest();
+    const token =
this.extractToken(request);
+
+    if (!token) {
+      throw new UnauthorizedException('No token provided');
+    }
+
+    try {
+      const payload = await this.validateToken(token);
+      request.user = payload;
+      return true;
+    } catch (error) {
+      throw new UnauthorizedException('Invalid token');
+    }
+  }
+
+  private extractToken(request: Request): string | null {
+    const auth = request.headers.authorization;
+    if (!auth) return null;
+
+    const [type, token] = auth.split(' ');
+    return type === 'Bearer' ? token : null;
+  }
+}
+```
+
+#### Service Authentication
+```typescript
+@Injectable()
+export class ServiceAuthGuard implements CanActivate {
+  canActivate(context: ExecutionContext): boolean {
+    const request = context.switchToHttp().getRequest();
+    const serviceKey = request.headers['x-service-key'];
+
+    if (!serviceKey) {
+      throw new UnauthorizedException('Service key required');
+    }
+
+    // Validate service key
+    const validKeys = [
+      process.env.MANA_SERVICE_KEY,
+      process.env.AUDIO_SERVICE_KEY,
+    ];
+
+    if (!validKeys.includes(serviceKey)) {
+      throw new UnauthorizedException('Invalid service key');
+    }
+
+    return true;
+  }
+}
+```
+
+### Input Validation
+```typescript
+// DTOs with validation
+export class ProcessAudioDto {
+  @IsString()
+  @IsNotEmpty()
+  filePath: string;
+
+  @IsNumber()
+  @Min(1)
+  @Max(10800) // 3 hours max
+  duration: number;
+
+  @IsArray()
+  @IsString({ each: true })
+  @ArrayMaxSize(10)
+  recordingLanguages: string[];
+
+  @IsBoolean()
+  @IsOptional()
+  enableDiarization?: boolean;
+
+  @IsObject()
+  @IsOptional()
+  metadata?: Record<string, any>;
+}
+```
+
+### Security Headers
+```typescript
+// main.ts
+app.use(helmet({
+  contentSecurityPolicy: {
+    directives: {
+      defaultSrc: ["'self'"],
+      styleSrc: ["'self'", "'unsafe-inline'"],
+      scriptSrc: ["'self'"],
+      imgSrc: ["'self'", "data:", "https:"],
+    },
+  },
+  hsts: {
+    maxAge: 31536000,
+    includeSubDomains: true,
+    preload: true,
+  },
+}));
+```
+
+## Deployment
+
+### Docker Configuration
+```dockerfile
+# Dockerfile
+FROM node:18-alpine AS builder
+
+WORKDIR /app
+
+# Install all dependencies (the build step needs devDependencies)
+COPY package*.json ./
+RUN npm ci
+
+# Copy source and build
+COPY . .
+RUN npm run build
+
+# Production image
+FROM node:18-alpine
+
+WORKDIR /app
+
+# Install production dependencies only
+COPY package*.json ./
+RUN npm ci --omit=dev
+
+# Copy built application
+COPY --from=builder /app/dist ./dist
+
+# Health check
+HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+  CMD node -e "require('http').get('http://localhost:3001/health', (r) => {r.statusCode === 200 ? process.exit(0) : process.exit(1)})"
+
+EXPOSE 3001
+
+CMD ["node", "dist/main"]
+```
+
+### Cloud Run Deployment
+```yaml
+# cloudbuild.yaml
+steps:
+  # Build the container image
+  - name: 'gcr.io/cloud-builders/docker'
+    args: ['build', '-t', 'gcr.io/$PROJECT_ID/memoro-service:$COMMIT_SHA', '.']
+
+  # Push to Container Registry
+  - name: 'gcr.io/cloud-builders/docker'
+    args: ['push', 'gcr.io/$PROJECT_ID/memoro-service:$COMMIT_SHA']
+
+  # Deploy to Cloud Run
+  - name: 'gcr.io/cloud-builders/gcloud'
+    args:
+      - 'run'
+      - 'deploy'
+      - 'memoro-service'
+      - '--image=gcr.io/$PROJECT_ID/memoro-service:$COMMIT_SHA'
+      - '--region=europe-west3'
+      - '--platform=managed'
+      - '--memory=1Gi'
+      - '--cpu=1'
+      - '--min-instances=1'
+      - '--max-instances=10'
+      - '--env-vars-file=.env.prod'
+```
+
+### Environment Management
+```bash
+# Production deployment
+gcloud run deploy memoro-service \
+  --image gcr.io/PROJECT/memoro-service:latest \
+  --region europe-west3 \
+  --set-env-vars="NODE_ENV=production,LOG_LEVEL=info"
+
+# Staging deployment
+gcloud run deploy memoro-service-staging \
+  --image gcr.io/PROJECT/memoro-service:staging \
+  --region europe-west3 \
+  --set-env-vars="NODE_ENV=staging,LOG_LEVEL=debug"
+```
+
+## Monitoring & Logging
+
+### Structured Logging
+```typescript
+@Injectable()
+export class LoggerService {
+  private logger = new Logger('MemoroService');
+
+  log(message: string,
context?: any) { + this.logger.log({ + timestamp: new Date().toISOString(), + level: 'info', + message, + ...context + }); + } + + error(message: string, error: Error, context?: any) { + this.logger.error({ + timestamp: new Date().toISOString(), + level: 'error', + message, + error: { + name: error.name, + message: error.message, + stack: error.stack + }, + ...context + }); + } +} +``` + +### Metrics Collection +```typescript +@Injectable() +export class MetricsService { + private metrics = { + requestCount: 0, + errorCount: 0, + creditUsage: 0, + processingTime: [] + }; + + recordRequest(endpoint: string, duration: number, status: number) { + this.metrics.requestCount++; + + if (status >= 400) { + this.metrics.errorCount++; + } + + this.metrics.processingTime.push({ + endpoint, + duration, + timestamp: Date.now() + }); + } + + getMetrics() { + return { + ...this.metrics, + avgProcessingTime: this.calculateAverage(this.metrics.processingTime), + errorRate: this.metrics.errorCount / this.metrics.requestCount + }; + } +} +``` + +### Health Checks +```typescript +@Controller('health') +export class HealthController { + constructor( + private health: HealthCheckService, + private db: TypeOrmHealthIndicator, + private http: HttpHealthIndicator, + ) {} + + @Get() + @HealthCheck() + check() { + return this.health.check([ + () => this.db.pingCheck('database'), + () => this.http.pingCheck('mana-core', process.env.MANA_SERVICE_URL), + () => this.checkDiskSpace(), + () => this.checkMemoryUsage(), + ]); + } + + private checkMemoryUsage() { + const used = process.memoryUsage(); + const limit = 1024 * 1024 * 1024; // 1GB + + return { + memory: { + status: used.heapUsed < limit ? 'up' : 'down', + used: Math.round(used.heapUsed / 1024 / 1024), + limit: limit / 1024 / 1024 + } + }; + } +} +``` + +## Troubleshooting + +### Common Issues + +#### 1. 
Authentication Failures
+```bash
+# Check JWT token
+curl -H "Authorization: Bearer $TOKEN" http://localhost:3001/auth/validate
+
+# Common causes:
+# - Token expired (check exp claim)
+# - Wrong app_id in token
+# - Service key not configured
+```
+
+#### 2. Insufficient Credits Errors
+```typescript
+// Debug credit issues
+async debugCredits(userId: string) {
+  const balance = await this.creditService.getBalance(userId);
+  const pendingOps = await this.getPendingOperations(userId);
+  const pendingConsumption = pendingOps.reduce((sum, op) => sum + op.cost, 0);
+
+  console.log({
+    currentBalance: balance,
+    pendingConsumption,
+    availableCredits: balance - pendingConsumption
+  });
+}
+```
+
+#### 3. Transcription Failures
+```bash
+# Check audio format
+ffprobe audio/file.m4a
+
+# Common issues:
+# - Unsupported codec (use AAC)
+# - File too large (>180 minutes)
+# - Corrupted audio file
+# - Network timeout for large files
+```
+
+#### 4. Real-time Updates Not Working
+```typescript
+// Debug Supabase subscriptions
+const channel = supabase.channel('debug')
+  .on('postgres_changes', { event: '*', schema: 'public' }, (payload) => console.log('Event:', payload))
+  .subscribe((status) => {
+    console.log('Subscription status:', status);
+  });
+
+// Common issues:
+// - JWT not passed to Supabase client
+// - RLS policies blocking access
+// - WebSocket connection issues
+```
+
+### Debug Mode
+```typescript
+// Enable debug logging
+if (process.env.NODE_ENV === 'development') {
+  app.useLogger(['debug', 'error', 'warn', 'log', 'verbose']);
+
+  // Log all requests
+  app.use((req, res, next) => {
+    console.log(`[${req.method}] ${req.url}`, {
+      headers: req.headers,
+      body: req.body
+    });
+    next();
+  });
+}
+```
+
+### Performance Profiling
+```typescript
+// CPU profiling
+import * as fs from 'fs';
+import * as v8Profiler from 'v8-profiler-next';
+
+export class ProfilingService {
+  startProfiling(title: string) {
+    v8Profiler.startProfiling(title, true);
+  }
+
+  stopProfiling(title: string) {
+    const profile = v8Profiler.stopProfiling(title);
+    profile.export((error, result) => {
fs.writeFileSync(`${title}.cpuprofile`, result); + profile.delete(); + }); + } +} +``` + +## Appendices + +### A. Environment Variables Reference + +| Variable | Description | Default | Required | +|----------|-------------|---------|----------| +| PORT | Service port | 3001 | No | +| NODE_ENV | Environment | development | No | +| MANA_SERVICE_URL | Mana Core URL | - | Yes | +| AUDIO_MICROSERVICE_URL | Audio service URL | - | Yes | +| MEMORO_SUPABASE_URL | Supabase URL | - | Yes | +| MEMORO_SUPABASE_ANON_KEY | Anon key | - | Yes | +| MEMORO_SUPABASE_SERVICE_KEY | Service key | - | Yes | +| MEMORO_APP_ID | App identifier | - | Yes | +| LOG_LEVEL | Logging level | info | No | +| ENABLE_BATCH_TRANSCRIPTION | Feature flag | true | No | + +### B. Error Codes Reference + +| Code | Description | HTTP Status | +|------|-------------|-------------| +| AUTH001 | Invalid token | 401 | +| AUTH002 | Token expired | 401 | +| AUTH003 | Insufficient permissions | 403 | +| CREDIT001 | Insufficient credits | 402 | +| CREDIT002 | Credit check failed | 500 | +| AUDIO001 | Invalid audio format | 422 | +| AUDIO002 | Audio too long | 413 | +| AUDIO003 | Transcription failed | 500 | +| SPACE001 | Space not found | 404 | +| SPACE002 | Not space member | 403 | + +### C. API Rate Limits + +| Endpoint | Rate Limit | Window | +|----------|------------|--------| +| /auth/* | 10 req | 1 min | +| /memoro/process-uploaded-audio | 5 req | 1 min | +| /memoro/question-memo | 10 req | 1 min | +| /memoro/spaces/* | 30 req | 1 min | +| Default | 100 req | 1 min | \ No newline at end of file diff --git a/apps/memoro/apps/backend/cloud-run-deploy.md b/apps/memoro/apps/backend/cloud-run-deploy.md new file mode 100644 index 000000000..efae83152 --- /dev/null +++ b/apps/memoro/apps/backend/cloud-run-deploy.md @@ -0,0 +1,132 @@ +# Memoro Microservice Cloud Run Deployment Guide + +## 1. 
Set up environment secrets
+
+```bash
+# Step 1: Authenticate with Google Cloud if needed
+gcloud auth login
+
+# Step 2: Set your project ID
+gcloud config set project memo-2c4c4
+
+# Step 3: Create or update GCP Secret Manager secrets for the Memoro service.
+# If you're using existing secrets from the main service, you can reference those;
+# otherwise, create new secrets for Memoro-specific configuration.
+gcloud secrets create MEMORO_SUPABASE_URL --data-file=/path/to/secret/value.txt
+gcloud secrets create MEMORO_SUPABASE_ANON_KEY --data-file=/path/to/secret/value.txt
+gcloud secrets create MANA_SERVICE_URL --data-file=/path/to/secret/value.txt
+gcloud secrets create MEMORO_APP_ID --data-file=/path/to/secret/value.txt
+```
+
+## 2. Build and push Docker image
+
+```bash
+# Navigate to the Memoro service directory
+cd memoro-service
+
+gcloud builds submit --project=memo-2c4c4 --config=cloudbuild-memoro.yaml .
+```
+
+## 3. Deploy to Cloud Run
+
+```bash
+gcloud run deploy memoro-service \
+  --image europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:v1.0.0 \
+  --platform managed \
+  --region europe-west3 \
+  --allow-unauthenticated \
+  --memory 512Mi \
+  --set-secrets=MEMORO_SUPABASE_URL=MEMORO_SUPABASE_URL:latest,MEMORO_SUPABASE_ANON_KEY=MEMORO_SUPABASE_ANON_KEY:latest,MANA_SERVICE_URL=MANA_SERVICE_URL:latest,MEMORO_APP_ID=MEMORO_APP_ID:latest
+```
+
+Alternatively, deploy directly from source:
+
+```bash
+gcloud run deploy memoro-service \
+  --source . \
+  --platform managed \
+  --region europe-west3 \
+  --allow-unauthenticated \
+  --memory 512Mi \
+  --set-secrets=MEMORO_SUPABASE_URL=MEMORO_SUPABASE_URL:latest,MEMORO_SUPABASE_ANON_KEY=MEMORO_SUPABASE_ANON_KEY:latest,MANA_SERVICE_URL=MANA_SERVICE_URL:latest,MEMORO_APP_ID=MEMORO_APP_ID:latest
+```
+
+## 4. Update Main Middleware Environment Variables
+
+After deploying the Memoro microservice, you need to update the main middleware service's environment to point to the new Memoro service URL.
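+Before re-pointing the middleware, it can be worth confirming that the new revision actually serves traffic. A minimal reachability check (a sketch; `/memoro/spaces` is the authenticated route used in the testing section of this guide, so a 401/403 here still proves the service is up, while a 5xx or no response points to a startup problem worth checking in Cloud Logging):
+
+```bash
+# Capture the URL of the freshly deployed revision
+MEMORO_SERVICE_URL=$(gcloud run services describe memoro-service \
+  --platform managed --region europe-west3 --format 'value(status.url)')
+
+# Print only the HTTP status code of an unauthenticated request
+curl -s -o /dev/null -w '%{http_code}\n' "$MEMORO_SERVICE_URL/memoro/spaces"
+```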
+ +```bash +# Get the Memoro service URL +MEMORO_SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)') + +# Update the main middleware's MEMORO_SERVICE_URL environment variable +gcloud run services update mana-core-middleware-dev \ + --region europe-west3 \ + --platform managed \ + --set-env-vars=MEMORO_SERVICE_URL=$MEMORO_SERVICE_URL +``` + +## 5. Testing the deployment + +```bash +# Get the service URL +SERVICE_URL=$(gcloud run services describe memoro-service --platform managed --region europe-west3 --format 'value(status.url)') + +# Test the API (requires authentication) +curl -H "Authorization: Bearer YOUR_JWT_TOKEN" $SERVICE_URL/memoro/spaces +``` + +## 6. Monitoring and Logging + +After deployment, you can monitor your service through: + +- **Cloud Run Dashboard**: For service health, traffic, and resource usage +- **Cloud Logging**: For application logs +- **Cloud Monitoring**: For setting up alerts and dashboards + +```bash +# View logs +gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=memoro-service" --limit 10 +``` + +## 7. Troubleshooting + +If you encounter issues with your deployment: + +1. Check application logs in Cloud Logging +2. Verify that all environment secrets are correctly set +3. Ensure that your service has sufficient memory and CPU +4. Check that the service account has the necessary permissions +5. Verify that the service can communicate with Auth and Spaces services +6. Check for CORS issues if calling from frontend applications + +## 8. 
Continuous Deployment (optional) + +You can set up continuous deployment using Cloud Build: + +```bash +# Create a Cloud Build trigger +gcloud builds triggers create github \ + --repo-name=your-repo-name \ + --branch-pattern=main \ + --build-config=cloudbuild.yaml +``` + +Example `cloudbuild.yaml`: + +```yaml +steps: + - name: 'gcr.io/cloud-builders/docker' + args: ['build', '-t', 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA', '.'] + - name: 'gcr.io/cloud-builders/docker' + args: ['push', 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA'] + - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk' + entrypoint: gcloud + args: + - 'run' + - 'deploy' + - 'memoro-service' + - '--image' + - 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA' + - '--region' + - 'europe-west3' + - '--platform' + - 'managed' +images: + - 'europe-west3-docker.pkg.dev/mana-core-453821/memoro-service/memoro-service:$COMMIT_SHA' +``` \ No newline at end of file diff --git a/apps/memoro/apps/backend/cloudbuild-memoro.yaml b/apps/memoro/apps/backend/cloudbuild-memoro.yaml new file mode 100644 index 000000000..fad6ca433 --- /dev/null +++ b/apps/memoro/apps/backend/cloudbuild-memoro.yaml @@ -0,0 +1,8 @@ +# cloudbuild-memoro.yaml +steps: + - name: 'gcr.io/cloud-builders/docker' + args: ['build', '-t', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8', '.'] # Assumes Dockerfile is in ./memoro-service + - name: 'gcr.io/cloud-builders/docker' + args: ['push', 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8'] +images: + - 'europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.8' \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/SERVICE_AUTH_IMPLEMENTATION.md b/apps/memoro/apps/backend/docs/SERVICE_AUTH_IMPLEMENTATION.md new file mode 100644 index 000000000..575e1f8a1 --- /dev/null +++ 
b/apps/memoro/apps/backend/docs/SERVICE_AUTH_IMPLEMENTATION.md @@ -0,0 +1,146 @@ +# Service-to-Service Authentication Implementation + +## Overview +This document describes the implementation of service role key authentication between the audio microservice and memoro service, replacing the previous user JWT token passthrough approach. + +## Problem Statement +The audio microservice was experiencing 401 authentication errors when calling back to the memoro service because: +- User JWT tokens were expiring during long-running transcription processes +- The audio service needed to make callbacks even after the user's session ended +- Service-to-service communication should not depend on user authentication + +## Solution Architecture + +### 1. Service Authentication Guard +Created `src/guards/service-auth.guard.ts` that: +- Validates requests using Supabase service role keys +- Accepts both `MEMORO_SUPABASE_SERVICE_KEY` and `SUPABASE_SERVICE_KEY` for compatibility +- Marks authenticated requests with `isServiceAuth` flag + +### 2. Dedicated Service Endpoints +Created `src/memoro/memoro-service.controller.ts` with service-specific endpoints: +- `/memoro/service/transcription-completed` +- `/memoro/service/append-transcription-completed` +- `/memoro/service/update-batch-metadata` + +These endpoints: +- Use `ServiceAuthGuard` instead of regular `AuthGuard` +- Call existing service methods with `token: null` +- Pass userId for ownership validation + +### 3. Ownership Validation +Updated service methods to validate memo ownership when using service auth: +- `handleTranscriptionCompleted`: Validates memo.user_id matches provided userId +- `handleAppendTranscriptionCompleted`: Validates memo.user_id matches provided userId +- `updateBatchMetadataByMemoId`: Validates memo.user_id matches provided userId (when userId provided) + +### 4. 
Supabase Client Configuration
+Fixed JWT parsing errors by conditionally creating Supabase clients:
+```typescript
+const authClient = isServiceAuth
+  ? createClient(this.memoroUrl, this.memoroServiceKey)
+  : createClient(this.memoroUrl, this.memoroServiceKey, {
+      global: { headers: { Authorization: `Bearer ${token}` } }
+    });
+```
+
+## Audio Microservice Changes
+
+### 1. Updated Callback URLs
+All callbacks now use `/service/` endpoints:
+- `notifyTranscriptionComplete`: Uses `/memoro/service/transcription-completed`
+- `notifyAppendTranscriptionComplete`: Uses `/memoro/service/append-transcription-completed`
+- `storeBatchJobMetadata`: Uses `/memoro/service/update-batch-metadata`
+
+### 2. Service Key Authentication
+Updated to use the service role key instead of user tokens:
+```typescript
+const serviceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY') ||
+  this.configService.get<string>('SUPABASE_SERVICE_KEY');
+```
+
+### 3. UserId Parameter
+Added a userId parameter to batch metadata updates for ownership validation.
+
+## Environment Variables
+
+### Memoro Service
+```bash
+# Primary service key
+MEMORO_SUPABASE_SERVICE_KEY=<service-role-key>
+
+# Also accepted for compatibility
+SUPABASE_SERVICE_KEY=<service-role-key>
+```
+
+### Audio Microservice
+```bash
+# Primary service key (for memoro callbacks)
+MEMORO_SUPABASE_SERVICE_KEY=<memoro-service-role-key>
+
+# Original service key (for Supabase operations)
+SUPABASE_SERVICE_KEY=<service-role-key>
+```
+
+## Deployment Steps
+
+### 1. Deploy Memoro Service
+```bash
+# Add environment variable
+gcloud run services update memoro-service \
+  --project=memo-2c4c4 \
+  --region=europe-west3 \
+  --update-env-vars="SUPABASE_SERVICE_KEY=<service-role-key>"
+
+# Build and deploy new code
+gcloud builds submit --config=cloudbuild-memoro.yaml
+gcloud run deploy memoro-service \
+  --project=memo-2c4c4 \
+  --image=europe-west3-docker.pkg.dev/memo-2c4c4/memoro-service/memoro-service:v4.9.6 \
+  --platform=managed \
+  --region=europe-west3 \
+  --allow-unauthenticated \
+  --memory=1Gi
+```
+
+### 2.
Deploy Audio Microservice
+```bash
+# Add environment variable
+gcloud run services update audio-microservice \
+  --project=memo-2c4c4 \
+  --region=europe-west3 \
+  --update-env-vars="MEMORO_SUPABASE_SERVICE_KEY=<service-role-key>"
+
+# Build and deploy new code
+# (Follow standard audio microservice deployment process)
+```
+
+## Security Considerations
+
+1. **Service Role Key Protection**: Service role keys bypass RLS, so they must be:
+   - Stored as environment variables only
+   - Never exposed to clients
+   - Rotated periodically
+
+2. **Ownership Validation**: Even with service auth, the system validates:
+   - User owns the memo being updated
+   - Prevents unauthorized access across users
+
+3. **Network Security**: Both services run on Google Cloud Run with:
+   - HTTPS encryption in transit
+   - Network isolation
+   - IAM-based access control
+
+## Benefits
+
+1. **Reliability**: No more 401 errors from expired user tokens
+2. **Consistency**: Service-to-service auth independent of user sessions
+3. **Performance**: Direct service authentication without token validation overhead
+4. **Maintainability**: Clear separation between user and service endpoints
+
+## Future Improvements
+
+1. **mTLS**: Implement mutual TLS between services
+2. **Service Accounts**: Use Google Cloud service accounts instead of API keys
+3. **Rate Limiting**: Add rate limiting to service endpoints
+4.
**Audit Logging**: Enhanced logging for service-to-service calls \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/SIGNUP_IMPLEMENTATION_PLAN.md b/apps/memoro/apps/backend/docs/SIGNUP_IMPLEMENTATION_PLAN.md new file mode 100644 index 000000000..36a39a625 --- /dev/null +++ b/apps/memoro/apps/backend/docs/SIGNUP_IMPLEMENTATION_PLAN.md @@ -0,0 +1,460 @@ +# Memoro Service - New Signup Implementation Plan + +## Overview + +This plan outlines the steps to integrate the Memoro backend service with the new Mana Core authentication system that includes dynamic email branding and enhanced device tracking. + +## Current State Analysis + +### Existing Implementation +- **Location**: `src/auth-proxy/auth-proxy.service.ts` +- **Current App ID**: `973da0c1-b479-4dac-a1b0-ed09c72caca8` (in .env) +- **Mana Core App ID**: `edde080c-3882-46bd-9867-72bdf3cbd99c` (in mana-core config) +- **Current Flow**: Simple proxy to Mana Core with redirect URL override + +### Current Signup Code (Line 111-118) +```typescript +async signup(payload: any) { + // Add custom redirect URL for Memoro + const enhancedPayload = { + ...payload, + redirectUrl: 'https://memoro.ai/de/welcome/' + }; + return this.proxyPost('/auth/signup', enhancedPayload); +} +``` + +### Issues to Address +1. ❌ No TypeScript types/interfaces (uses `any`) +2. ❌ App ID mismatch between .env and mana-core config +3. ❌ Missing logo metadata for custom branding +4. ❌ No validation of required fields (deviceInfo) +5. 
❌ No DTO classes for request/response + +--- + +## Implementation Plan + +### Phase 1: Create TypeScript Interfaces & DTOs + +#### 1.1 Device Info Interface +**File**: `src/auth-proxy/dto/device-info.dto.ts` + +```typescript +import { IsString, IsEnum, IsOptional } from 'class-validator'; + +export enum DeviceType { + WEB = 'web', + IOS = 'ios', + ANDROID = 'android', + DESKTOP = 'desktop', +} + +export class DeviceInfoDto { + @IsString() + deviceId: string; + + @IsString() + deviceName: string; + + @IsEnum(DeviceType) + deviceType: DeviceType; + + @IsOptional() + @IsString() + userAgent?: string; +} +``` + +#### 1.2 Signup Request DTO +**File**: `src/auth-proxy/dto/signup-request.dto.ts` + +```typescript +import { IsEmail, IsString, MinLength, ValidateNested, IsOptional } from 'class-validator'; +import { Type } from 'class-transformer'; +import { DeviceInfoDto } from './device-info.dto'; + +export class SignupRequestDto { + @IsEmail() + email: string; + + @IsString() + @MinLength(8) + password: string; + + @ValidateNested() + @Type(() => DeviceInfoDto) + deviceInfo: DeviceInfoDto; + + @IsOptional() + metadata?: { + [key: string]: any; + }; + + @IsOptional() + @IsString() + redirectUrl?: string; +} +``` + +#### 1.3 Signup Response Interface +**File**: `src/auth-proxy/interfaces/signup-response.interface.ts` + +```typescript +export interface SignupResponse { + message: string; + confirmationRequired: boolean; + manaToken?: string; + appToken?: string; + refreshToken?: string; + deviceId?: string; + user: { + id: string; + email: string; + created_at?: string; + }; +} +``` + +#### 1.4 Auth Metadata Interface +**File**: `src/auth-proxy/interfaces/auth-metadata.interface.ts` + +```typescript +export interface AuthMetadata { + logoUrl?: string; + userName?: string; + [key: string]: any; +} +``` + +--- + +### Phase 2: Update Environment Configuration + +#### 2.1 Verify App ID +**Action**: Check which App ID is correct +- Option A: Update `.env` to use 
`edde080c-3882-46bd-9867-72bdf3cbd99c` (from mana-core)
+- Option B: Update mana-core config to use `973da0c1-b479-4dac-a1b0-ed09c72caca8`
+
+**Recommendation**: Use the App ID that's configured in mana-core (`edde080c-3882-46bd-9867-72bdf3cbd99c`)
+
+#### 2.2 Add Logo Configuration
+**File**: `.env`
+
+```bash
+# Add to .env
+MEMORO_LOGO_FILENAME=memoro-logo.svg
+```
+
+**File**: `env.example`
+```bash
+# Add to env.example
+MEMORO_LOGO_FILENAME=memoro-logo.svg
+```
+
+---
+
+### Phase 3: Update Auth Proxy Service
+
+#### 3.1 Enhanced Signup Method
+**File**: `src/auth-proxy/auth-proxy.service.ts`
+
+```typescript
+import { Injectable, HttpException, HttpStatus } from '@nestjs/common';
+import { HttpService } from '@nestjs/axios';
+import { ConfigService } from '@nestjs/config';
+import { SignupRequestDto } from './dto/signup-request.dto';
+import { SignupResponse } from './interfaces/signup-response.interface';
+import { AuthMetadata } from './interfaces/auth-metadata.interface';
+
+@Injectable()
+export class AuthProxyService {
+  private manaServiceUrl: string;
+  private memoroAppId: string;
+  private memoroLogoFilename: string;
+
+  constructor(
+    private httpService: HttpService,
+    private configService: ConfigService,
+  ) {
+    this.manaServiceUrl = this.configService.get('MANA_SERVICE_URL', 'http://localhost:3000');
+    this.memoroAppId = this.configService.get('MEMORO_APP_ID');
+    this.memoroLogoFilename = this.configService.get('MEMORO_LOGO_FILENAME', 'memoro-logo.svg');
+  }
+
+  async signup(payload: SignupRequestDto): Promise<SignupResponse> {
+    // Validate device info is present
+    if (!payload.deviceInfo) {
+      throw new HttpException(
+        'Device information is required for signup',
+        HttpStatus.BAD_REQUEST
+      );
+    }
+
+    // Prepare metadata with logo for custom email branding
+    const metadata: AuthMetadata = {
+      ...payload.metadata,
+      logoUrl: this.memoroLogoFilename, // Just the filename
+    };
+
+    // Enhanced payload with Memoro-specific branding
+    const enhancedPayload = {
+      email: payload.email,
+      password: payload.password,
+      deviceInfo: payload.deviceInfo,
+      metadata,
+      redirectUrl: payload.redirectUrl || 'https://memoro.ai/de/welcome/',
+    };
+
+    console.log('[AuthProxy] Signup with enhanced payload:', {
+      email:
enhancedPayload.email,
+      hasDeviceInfo: !!enhancedPayload.deviceInfo,
+      logoUrl: metadata.logoUrl,
+      redirectUrl: enhancedPayload.redirectUrl,
+    });
+
+    return this.proxyPost('/auth/signup', enhancedPayload);
+  }
+}
+```
+
+---
+
+### Phase 4: Update Auth Proxy Controller
+
+#### 4.1 Add Validation Pipe
+**File**: `src/auth-proxy/auth-proxy.controller.ts`
+
+```typescript
+import {
+  Controller,
+  Post,
+  Get,
+  Body,
+  Headers,
+  HttpCode,
+  HttpException,
+  HttpStatus,
+  UsePipes,
+  ValidationPipe
+} from '@nestjs/common';
+import { AuthProxyService } from './auth-proxy.service';
+import { SignupRequestDto } from './dto/signup-request.dto';
+import { SignupResponse } from './interfaces/signup-response.interface';
+
+@Controller('auth')
+export class AuthProxyController {
+  constructor(private readonly authProxyService: AuthProxyService) {}
+
+  @Post('signup')
+  @UsePipes(new ValidationPipe({
+    whitelist: true,
+    forbidNonWhitelisted: true,
+    transform: true
+  }))
+  async signup(@Body() payload: SignupRequestDto): Promise<SignupResponse> {
+    return this.authProxyService.signup(payload);
+  }
+
+  // Other methods remain similar and can be typed the same way
+  @Post('signin')
+  async signin(@Body() payload: any) {
+    // Validate device info
+    if (!payload.deviceInfo) {
+      throw new HttpException(
+        'Device information is required for signin',
+        HttpStatus.BAD_REQUEST
+      );
+    }
+    return this.authProxyService.signin(payload);
+  }
+}
+```
+
+---
+
+### Phase 5: Install Required Dependencies
+
+```bash
+cd memoro-service
+npm install class-validator class-transformer
+```
+
+---
+
+### Phase 6: Testing
+
+#### 6.1 Unit Tests
+**File**: `src/auth-proxy/auth-proxy.service.spec.ts`
+
+Add tests for:
+- ✅ Signup with valid deviceInfo
+- ✅ Signup includes logo metadata
+- ✅ Signup includes redirect URL
+- ✅ Error when deviceInfo is missing
+
+#### 6.2 Integration Tests
+
+**Test 1: Signup with All Fields**
+```bash
+curl -X POST http://localhost:3001/auth/signup \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "email": "test@memoro.ai",
+    "password":
"Test123456!", + "deviceInfo": { + "deviceId": "web-test-device-1", + "deviceName": "Chrome on MacBook", + "deviceType": "web", + "userAgent": "Mozilla/5.0..." + } + }' +``` + +**Expected Response:** +```json +{ + "message": "Sign up successful. Please check your email to confirm your account.", + "confirmationRequired": true, + "user": { + "id": "...", + "email": "test@memoro.ai" + } +} +``` + +**Test 2: Check Email Branding** +- Email should show Memoro logo +- Email should use yellow color scheme (#F8D62B) +- Email should show German/English taglines +- Email should include Memoro features + +**Test 3: Missing DeviceInfo (Should Fail)** +```bash +curl -X POST http://localhost:3001/auth/signup \ + -H 'Content-Type: application/json' \ + -d '{ + "email": "test@memoro.ai", + "password": "Test123456!" + }' +``` + +**Expected:** 400 Bad Request with validation error + +--- + +### Phase 7: Documentation + +#### 7.1 Update README +**File**: `README.md` + +Add section: +```markdown +## Authentication + +Memoro uses the Mana Core authentication system with custom branding. + +### Signup Flow + +When users sign up via Memoro: +1. Frontend calls `/auth/signup` with email, password, and device info +2. Memoro backend adds Memoro logo metadata +3. Mana Core creates account and sends branded email +4. User confirms email and can log in + +See [docs/AUTH_INTEGRATION.md](./docs/AUTH_INTEGRATION.md) for details. 
+``` + +#### 7.2 Create Integration Doc +**File**: `docs/AUTH_INTEGRATION.md` + +Document: +- How Memoro integrates with Mana Core +- Required environment variables +- Device info requirements +- Custom branding flow +- Error handling + +--- + +## Migration Checklist + +### Pre-Deployment +- [ ] Verify App ID is correct in both services +- [ ] Upload `memoro-logo.svg` to Mana Core Supabase bucket +- [ ] Update `.env` with correct `MEMORO_APP_ID` +- [ ] Add `MEMORO_LOGO_FILENAME=memoro-logo.svg` to `.env` +- [ ] Install dependencies: `class-validator`, `class-transformer` +- [ ] Run tests locally + +### Code Changes +- [ ] Create DTOs in `src/auth-proxy/dto/` +- [ ] Create interfaces in `src/auth-proxy/interfaces/` +- [ ] Update `auth-proxy.service.ts` with new signup method +- [ ] Update `auth-proxy.controller.ts` with validation +- [ ] Add unit tests +- [ ] Update documentation + +### Deployment +- [ ] Deploy to staging environment +- [ ] Test signup flow end-to-end +- [ ] Verify email branding looks correct +- [ ] Check device tracking works +- [ ] Deploy to production +- [ ] Monitor for errors + +### Post-Deployment +- [ ] Verify production signup emails show Memoro branding +- [ ] Test all auth flows (signin, google, apple) +- [ ] Update frontend to include deviceInfo if not already +- [ ] Document any issues/learnings + +--- + +## Timeline Estimate + +- **Phase 1-2** (Types & Config): 1 hour +- **Phase 3-4** (Service & Controller): 2 hours +- **Phase 5** (Dependencies): 15 minutes +- **Phase 6** (Testing): 2 hours +- **Phase 7** (Documentation): 1 hour + +**Total**: ~6-7 hours + +--- + +## Risk Assessment + +### Low Risk +✅ Adding types/interfaces (backward compatible) +✅ Adding logo metadata (optional field) +✅ Documentation updates + +### Medium Risk +⚠️ Changing App ID (requires coordination) +⚠️ Adding validation (could break existing clients) + +### Mitigation +- Test thoroughly in staging +- Deploy during low-traffic period +- Have rollback plan ready +- 
Monitor error rates after deployment + +--- + +## Questions to Resolve + +1. **App ID**: Which App ID should be used? + - Current in memoro-service: `973da0c1-b479-4dac-a1b0-ed09c72caca8` + - Current in mana-core: `edde080c-3882-46bd-9867-72bdf3cbd99c` + +2. **Breaking Changes**: Should we enforce validation immediately or phase it in? + - Option A: Enforce now (could break old clients) + - Option B: Log warnings first, enforce later + +3. **Logo Location**: Is `memoro-logo.svg` already uploaded to satellites-logos bucket? + +--- + +## Success Criteria + +✅ Signup creates account successfully +✅ Email shows Memoro branding (yellow, logo, features) +✅ DeviceInfo is properly tracked +✅ All auth tests pass +✅ No breaking changes to existing clients +✅ Documentation is complete +✅ Production deployment successful diff --git a/apps/memoro/apps/backend/docs/append-transcription-usage.md b/apps/memoro/apps/backend/docs/append-transcription-usage.md new file mode 100644 index 000000000..d4d7e1f85 --- /dev/null +++ b/apps/memoro/apps/backend/docs/append-transcription-usage.md @@ -0,0 +1,154 @@ +# Append Transcription Usage Example + +## Overview +The append-transcription endpoint allows you to add additional audio recordings to an existing memo and have them transcribed. This is useful when users want to add follow-up thoughts or additional content to a memo without creating a new one. + +## Frontend Integration Example + +```typescript +// Example: Adding an additional recording to an existing memo + +async function appendAudioToMemo( + memoId: string, + audioFile: File, + recordingDuration: number +) { + try { + // 1. Upload audio file to Supabase storage (similar to main recording) + const filePath = `${userId}/recordings/${Date.now()}_append.webm`; + const { error: uploadError } = await supabase.storage + .from('user-uploads') + .upload(filePath, audioFile); + + if (uploadError) { + throw uploadError; + } + + // 2. 
Call the append-transcription endpoint + const response = await fetch(`${MEMORO_SERVICE_URL}/memoro/append-transcription`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'Authorization': `Bearer ${token}` + }, + body: JSON.stringify({ + memoId: memoId, + filePath: filePath, + duration: recordingDuration, + recordingLanguages: ['de-DE', 'en-US'], // Optional: user's selected languages + enableDiarization: true // Optional: enable speaker detection + }) + }); + + if (!response.ok) { + const error = await response.json(); + throw new Error(error.message || 'Failed to append transcription'); + } + + const result = await response.json(); + console.log('Append transcription started:', result); + + // The memo will be updated asynchronously + // You can listen to real-time updates or poll for status + + return result; + } catch (error) { + console.error('Error appending audio to memo:', error); + throw error; + } +} +``` + +## Response Format + +### Success Response +```json +{ + "success": true, + "memoId": "uuid-here", + "filePath": "userId/recordings/timestamp_append.webm", + "status": "processing", + "estimatedDuration": 5, + "message": "Append transcription in progress.", + "estimatedCredits": 10 +} +``` + +### Error Responses + +#### Insufficient Credits +```json +{ + "statusCode": 403, + "message": "Insufficient credits for transcription. 
Required: 10, Available: 5 (user credits)" +} +``` + +#### Memo Not Found +```json +{ + "statusCode": 404, + "message": "Memo not found or access denied" +} +``` + +## Accessing Appended Recordings + +Once transcription is complete, the additional recordings will be available in the memo's source: + +```typescript +// Fetch updated memo +const { data: memo } = await supabase + .from('memos') + .select('*') + .eq('id', memoId) + .single(); + +// Access additional recordings +const additionalRecordings = memo.source.additional_recordings || []; + +additionalRecordings.forEach((recording, index) => { + console.log(`Recording ${index + 1}:`); + console.log(`- Transcript: ${recording.transcript}`); + console.log(`- Language: ${recording.primary_language}`); + console.log(`- Speakers: ${Object.keys(recording.speakers || {}).length}`); + console.log(`- Status: ${recording.status}`); +}); +``` + +## Real-time Updates + +You can subscribe to memo updates to know when the transcription is complete: + +```typescript +const subscription = supabase + .channel(`memo-${memoId}`) + .on('postgres_changes', + { + event: 'UPDATE', + schema: 'public', + table: 'memos', + filter: `id=eq.${memoId}` + }, + (payload) => { + const updatedMemo = payload.new; + // Check if the last additional recording is now completed + const recordings = updatedMemo.source?.additional_recordings || []; + const lastRecording = recordings[recordings.length - 1]; + + if (lastRecording?.status === 'completed') { + console.log('Transcription completed!', lastRecording); + // Update UI with new transcription + } + } + ) + .subscribe(); +``` + +## Notes + +1. **Credit Requirements**: Append transcription consumes credits the same way as main transcription (2 mana per minute, minimum 10 mana) +2. **Access Control**: Users can only append to memos they own or have access to through spaces +3. **Smart Routing**: Short recordings (<115 min) use fast transcription, longer ones use batch processing +4. 
**Recording Index**: You can optionally specify a `recordingIndex` to update a specific recording instead of appending a new one +5. **Error Handling**: The service includes comprehensive error handling and fallback strategies matching the main transcription flow \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/auth-proxy-grace-period-notes.md b/apps/memoro/apps/backend/docs/auth-proxy-grace-period-notes.md new file mode 100644 index 000000000..10c43a7f2 --- /dev/null +++ b/apps/memoro/apps/backend/docs/auth-proxy-grace-period-notes.md @@ -0,0 +1,62 @@ +# Auth Proxy Grace Period Implementation Notes + +## Overview + +The auth-proxy module in memoro-service acts as a pass-through to mana-core-middleware. With the new grace period implementation, the proxy doesn't need significant changes but should be aware of the new behavior. + +## Current Implementation Status + +The auth proxy already: +- ✅ Validates device info is present for refresh requests +- ✅ Forwards all requests to mana-core-middleware +- ✅ Preserves error responses from the backend +- ✅ Logs requests for debugging + +## Grace Period Behavior + +When a refresh request is made: + +1. **Normal Case**: New tokens are returned +2. 
**Grace Period Case**: If the same old token is used within 5 minutes: + - Backend returns the previously generated new token + - Response includes `gracePeriodUsed: true` flag + - This is NOT an error - it's a successful response + +## No Changes Required + +The auth proxy doesn't need modifications because: +- It already forwards all responses transparently +- Error handling is done by the backend +- Retry logic should be implemented in the frontend + +## Logging Recommendations + +Consider adding logs for grace period usage: + +```typescript +async refresh(payload: any) { + const response = await this.proxyPost('/auth/refresh', payload); + + // Optional: Log grace period usage for monitoring + if (response.gracePeriodUsed) { + console.log('[AuthProxy] Refresh used grace period for device:', payload.deviceInfo?.deviceId); + } + + return response; +} +``` + +## Monitoring + +Track these metrics to understand grace period effectiveness: +- How often grace period is used +- Which devices/users trigger grace period most +- Correlation with network conditions + +## Frontend Integration + +The frontend calling memoro-service should: +1. Always save the returned refresh token +2. Implement retry logic with exponential backoff +3. Handle both success and error responses appropriately +4. 
Not treat grace period usage as an error \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/broadcast-trigger-payload-fix.md b/apps/memoro/apps/backend/docs/broadcast-trigger-payload-fix.md new file mode 100644 index 000000000..9bc41cf15 --- /dev/null +++ b/apps/memoro/apps/backend/docs/broadcast-trigger-payload-fix.md @@ -0,0 +1,196 @@ +# Broadcast Trigger Payload Size Fix - July 2025 + +## Timeline of Events + +### Background +- **Before July 5, 2025**: Transcription updates worked perfectly +- **July 5, 2025**: New broadcast triggers added to enhance real-time updates +- **July 8, 2025**: "Payload string too long" errors started occurring during transcription completion + +## The Error + +### Symptoms +``` +Error: Failed to update memo: payload string too long +PostgreSQL Error Code: 22023 +``` + +### Affected Operations +- Transcription completion updates failing for memos with: + - Text length: 46,465 characters + - Utterances: 377 items + - Request payload sizes: 55KB - 121KB + +### Error Logs +From memoro-service: +``` +[handleTranscriptionCompleted] Error updating memo: { + code: '22023', + details: null, + hint: null, + message: 'payload string too long' +} +``` + +From Supabase API Gateway: +```json +{ + "event_message": "PATCH | 400 | ... | https://npgifbrwhftlbrbaglmi.supabase.co/rest/v1/memos", + "content_length": "121057", + "status_code": 400 +} +``` + +## Initial (Wrong) Assumptions + +### Assumption 1: Supabase Realtime NOTIFY Limit +**What we thought**: The existing replica identity fix from the `realtime-payload-limit-fix.md` wasn't working properly. + +**Why this seemed logical**: +- Same error code (22023) +- Same error message ("payload string too long") +- PostgreSQL NOTIFY has an 8KB limit +- We had fixed this exact issue before + +**Why we were wrong**: The replica identity was correctly set and working. The issue was elsewhere. 
+ +### Assumption 2: Database Column Limits +**What we thought**: Maybe the jsonb/text columns had size constraints. + +**Why this seemed possible**: +- Large payloads were being stored +- Error occurred during UPDATE operations + +**Why we were wrong**: PostgreSQL jsonb and text columns can store much larger data (up to 1GB). + +### Assumption 3: HTTP Request Size Limits +**What we thought**: The Supabase REST API might have payload limits. + +**Why we considered this**: +- Request sizes were 55KB-121KB +- Error happened during HTTP PATCH requests + +**Why we were wrong**: Supabase supports payloads up to 1GB via HTTP. + +## The Real Problem + +### Discovery Process +1. Checked replica identity: ✓ Correctly set to INDEX (only sends ID) +2. Investigated table triggers: Found new broadcast triggers added July 5 +3. Examined trigger function: Found the culprit! + +### Root Cause +The `broadcast_memo_changes()` trigger function added on July 5, 2025 was using: +```sql +PERFORM pg_notify( + 'realtime:broadcast', + json_build_object( + 'payload', json_build_object( + 'new', row_to_json(NEW), -- ENTIRE row data! + 'old', row_to_json(OLD), -- ENTIRE row data! + ... + ) + )::text +); +``` + +This trigger was attempting to send the ENTIRE memo data (including large transcripts and utterances) through PostgreSQL's NOTIFY mechanism, which has a hard 8KB limit. 
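The scale of the overflow is easy to reproduce. The sketch below (a hypothetical memo shape, sized to match the incident's 46,465-character transcript and 377 utterances — not the real schema) shows how far a full-row payload overshoots the 8,000-byte NOTIFY budget, versus the ID-only payload the fix sends:

```typescript
// Hypothetical memo row, sized like the failing transcription
// (field names are illustrative, not the real `memos` schema).
const memo = {
  id: "123e4567-e89b-12d3-a456-426614174000",
  text: "x".repeat(46_465),
  utterances: Array.from({ length: 377 }, (_, i) => ({
    speaker: `speaker_${i % 2}`,
    text: "an utterance of typical length",
  })),
};

// What the July 5 trigger effectively pushed through pg_notify:
const fullRowPayload = JSON.stringify({ new: memo, old: memo });

// What the fixed trigger sends instead:
const idOnlyPayload = JSON.stringify({ event: "UPDATE", table: "memos", id: memo.id });

const NOTIFY_LIMIT_BYTES = 8_000; // hard PostgreSQL NOTIFY payload limit

// Full-row payload is far above the limit; the ID-only payload is tiny.
console.log(Buffer.byteLength(fullRowPayload, "utf8") > NOTIFY_LIMIT_BYTES); // true
console.log(Buffer.byteLength(idOnlyPayload, "utf8") < 200); // true
```

This is why testing triggers with small fixture memos never surfaced the problem: only realistic transcript sizes push the payload past the limit.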
+ +### Why It Wasn't Caught Earlier +- The trigger was added recently (July 5) +- Initial testing likely used smaller memos +- The error only occurs with transcriptions > ~6KB total size + +## The Fix + +### Solution Applied +Modified the `broadcast_memo_changes()` function to send minimal data: + +```sql +CREATE OR REPLACE FUNCTION public.broadcast_memo_changes() +RETURNS trigger +LANGUAGE plpgsql +SECURITY DEFINER +AS $$ +BEGIN + -- Broadcast only essential information to avoid payload size limits + PERFORM pg_notify( + 'realtime:broadcast', + json_build_object( + 'type', 'broadcast', + 'event', 'postgres_changes', + 'payload', json_build_object( + 'event', TG_OP, + 'schema', TG_TABLE_SCHEMA, + 'table', TG_TABLE_NAME, + 'id', CASE + WHEN TG_OP = 'DELETE' THEN OLD.id + ELSE NEW.id + END, + 'eventTs', to_char(current_timestamp, 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"') + ) + )::text + ); + + RETURN NEW; +END; +$$; +``` + +### What Changed +- **Before**: Sent entire row data (`row_to_json(NEW/OLD)`) +- **After**: Sends only the memo ID +- **Result**: Payload size reduced from 55KB+ to < 200 bytes + +### Impact on Frontend +- Frontend still receives real-time notifications +- Must fetch full memo data using the provided ID +- No breaking changes to the notification structure + +## Key Learnings + +### 1. Multiple Systems Can Hit NOTIFY Limits +- **Supabase Realtime**: Uses replica identity (already fixed) +- **Custom Triggers**: Can also use pg_notify (new issue) +- Both must respect the 8KB NOTIFY limit + +### 2. Error Messages Can Be Misleading +- Same error (22023) can have different causes +- Important to check ALL uses of NOTIFY, not just Supabase Realtime + +### 3. Trigger Side Effects +- New triggers can break existing functionality +- Always consider payload sizes when using pg_notify +- Test with realistic data sizes, not just small test cases + +### 4. Debugging Approach +1. Check recent changes (migrations, triggers) +2. 
Examine all NOTIFY usage, not just obvious ones +3. Use Supabase API logs to see actual request sizes +4. Don't assume the first similar fix applies + +## Prevention Guidelines + +### For Future Triggers +1. **Never send full row data through NOTIFY** +2. **Always send minimal identifiers only** +3. **Test with large, realistic payloads** +4. **Document payload size considerations** + +### For Broadcast Mechanisms +1. **Use ID-only patterns**: Send identifiers, let clients fetch data +2. **Consider payload sizes**: NOTIFY limit is 8000 bytes total +3. **Monitor for 22023 errors**: Set up alerts for this specific error +4. **Review all NOTIFY usage**: Both Supabase and custom triggers + +## Resolution Timeline +- **Issue Reported**: July 8, 2025, 14:59 CEST +- **Investigation Started**: July 8, 2025, 15:00 CEST +- **Root Cause Found**: Broadcast trigger sending full row data +- **Fix Applied**: Modified trigger to send ID only +- **Resolution Confirmed**: Transcriptions now complete successfully + +## Related Documentation +- [Realtime Payload Limit Fix](./realtime-payload-limit-fix.md) - Original NOTIFY limit issue +- [PostgreSQL NOTIFY Documentation](https://www.postgresql.org/docs/current/sql-notify.html) +- Migration: `20250705022315_add_memo_update_broadcast_trigger` \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/memo-sharing-fix.md b/apps/memoro/apps/backend/docs/memo-sharing-fix.md new file mode 100644 index 000000000..adef4c475 --- /dev/null +++ b/apps/memoro/apps/backend/docs/memo-sharing-fix.md @@ -0,0 +1,178 @@ +# Memoro Space Sharing Fix + +This document describes the implementation of space-based memo sharing in the Memoro application, including the solution to the "infinite recursion" issue that was occurring with Row-Level Security (RLS) policies. 
+ +## Problem Description + +Users were unable to directly access memos created by other users in shared spaces, receiving the following error: + +``` +Error fetching memo: infinite recursion detected in policy for relation "memos" +``` + +This happened because: + +1. The RLS policies required complex joins between multiple tables +2. PostgreSQL couldn't efficiently resolve these joins during policy evaluation +3. The recursive nature of the policies caused infinite recursion + +## Solution: Denormalized Access Control + +We implemented a database design pattern called "denormalization for access control" to solve this issue. + +### Step 1: Add a Direct Access Column to Memos Table + +```sql +-- Add a direct helper column to the memos table to simplify RLS +ALTER TABLE memos ADD COLUMN IF NOT EXISTS shared_with_users UUID[] DEFAULT '{}'::uuid[]; +``` + +This array column directly stores the UUIDs of all users who should have access to each memo, eliminating the need for complex joins in RLS policies. 
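The triggers in the next step keep this column in sync automatically. Stripped of SQL, their aggregation logic amounts to the following sketch (row shapes and names are hypothetical, for illustration only):

```typescript
// Hypothetical flattened rows mirroring memo_spaces and space_members.
interface MemoSpace { memoId: string; spaceId: string; }
interface SpaceMember { spaceId: string; userId: string; }

// Equivalent of:
//   SELECT array_agg(DISTINCT sm.user_id)
//   FROM memo_spaces ms JOIN space_members sm ON ms.space_id = sm.space_id
//   WHERE ms.memo_id = <memoId>
function computeSharedWithUsers(
  memoId: string,
  memoSpaces: MemoSpace[],
  spaceMembers: SpaceMember[],
): string[] {
  // All spaces the memo is linked to
  const spaceIds = new Set(
    memoSpaces.filter((ms) => ms.memoId === memoId).map((ms) => ms.spaceId),
  );
  // Every member of any of those spaces, deduplicated (DISTINCT)
  const users = new Set(
    spaceMembers.filter((sm) => spaceIds.has(sm.spaceId)).map((sm) => sm.userId),
  );
  return [...users];
}

// A memo shared into two spaces with an overlapping member:
const shared = computeSharedWithUsers(
  "memo-1",
  [
    { memoId: "memo-1", spaceId: "space-a" },
    { memoId: "memo-1", spaceId: "space-b" },
  ],
  [
    { spaceId: "space-a", userId: "user-1" },
    { spaceId: "space-a", userId: "user-2" },
    { spaceId: "space-b", userId: "user-2" },
  ],
);
console.log(shared); // ["user-1", "user-2"] — the overlap collapses, as DISTINCT does
```

The RLS check then reduces to a single array membership test against this precomputed list, with no joins at policy-evaluation time.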
+
+### Step 2: Create Triggers to Maintain the Access Array
+
+First, create a function to update the `shared_with_users` array when a memo's space links change. Because the function is fired by a DELETE trigger as well, it must resolve the affected memo from `OLD` or `NEW` depending on the operation (`OLD`/`NEW` are unassigned for the non-matching operations):
+
+```sql
+-- Create an update function that will maintain this column
+CREATE OR REPLACE FUNCTION update_memo_shared_with_users()
+RETURNS TRIGGER AS $$
+DECLARE
+  v_memo_id uuid;
+BEGIN
+  -- NEW is unassigned on DELETE, OLD is unassigned on INSERT
+  IF TG_OP = 'DELETE' THEN
+    v_memo_id := OLD.memo_id;
+  ELSE
+    v_memo_id := NEW.memo_id;
+  END IF;
+
+  -- Recompute the shared_with_users array for the affected memo
+  UPDATE memos
+  SET shared_with_users = (
+    SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
+    FROM memo_spaces ms
+    JOIN space_members sm ON ms.space_id = sm.space_id
+    WHERE ms.memo_id = v_memo_id
+  )
+  WHERE id = v_memo_id;
+
+  RETURN NULL; -- return value is ignored for AFTER row triggers
+END;
+$$ LANGUAGE plpgsql;
+
+-- Create triggers for memo_spaces table changes
+DROP TRIGGER IF EXISTS memo_spaces_insert_update_trigger ON memo_spaces;
+CREATE TRIGGER memo_spaces_insert_update_trigger
+AFTER INSERT OR UPDATE ON memo_spaces
+FOR EACH ROW
+EXECUTE FUNCTION update_memo_shared_with_users();
+
+DROP TRIGGER IF EXISTS memo_spaces_delete_trigger ON memo_spaces;
+CREATE TRIGGER memo_spaces_delete_trigger
+AFTER DELETE ON memo_spaces
+FOR EACH ROW
+EXECUTE FUNCTION update_memo_shared_with_users();
+```
+
+Then, create a function and trigger to update the access arrays when space membership changes. The changed space is again resolved via `TG_OP`, and each affected memo's array is recomputed from *all* of its spaces, not just the one that changed:
+
+```sql
+-- Create trigger for space_members changes
+CREATE OR REPLACE FUNCTION update_all_memos_for_space()
+RETURNS TRIGGER AS $$
+DECLARE
+  v_space_id uuid;
+BEGIN
+  IF TG_OP = 'DELETE' THEN
+    v_space_id := OLD.space_id;
+  ELSE
+    v_space_id := NEW.space_id;
+  END IF;
+
+  -- For each memo in the changed space, recompute its full shared_with_users array
+  UPDATE memos m
+  SET shared_with_users = (
+    SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[])
+    FROM memo_spaces ms
+    JOIN space_members sm ON ms.space_id = sm.space_id
+    WHERE ms.memo_id = m.id
+  )
+  WHERE m.id IN (
+    SELECT memo_id FROM memo_spaces WHERE space_id = v_space_id
+  );
+
+  RETURN NULL;
+END;
+$$ LANGUAGE plpgsql;
+
+DROP TRIGGER IF EXISTS space_members_trigger ON space_members;
+CREATE TRIGGER space_members_trigger
+AFTER INSERT OR UPDATE OR DELETE
ON space_members +FOR EACH ROW +EXECUTE FUNCTION update_all_memos_for_space(); +``` + +### Step 3: Initialize the Column for Existing Data + +```sql +-- Populate the shared_with_users column for all existing memos +UPDATE memos m +SET shared_with_users = ( + SELECT array_agg(DISTINCT sm.user_id) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = m.id +); +``` + +### Step 4: Create Simplified RLS Policies + +```sql +-- Drop existing policies on memos +DO $$ +BEGIN + EXECUTE ( + SELECT string_agg('DROP POLICY IF EXISTS "' || policyname || '" ON memos;', ' ') + FROM pg_policies + WHERE tablename = 'memos' + ); +END $$; + +-- Create simplified policies that use the denormalized column +CREATE POLICY "Users can access own memos" +ON memos FOR ALL +USING (user_id = auth.uid()::text); + +CREATE POLICY "Users can view shared memos" +ON memos FOR SELECT +USING (auth.uid()::uuid = ANY(shared_with_users)); +``` + +## How This Solution Works + +1. When a memo is linked to a space, the trigger automatically adds all space members to the memo's `shared_with_users` array +2. When space membership changes (users added/removed), the trigger updates all affected memos +3. The RLS policies are now simple and non-recursive: + - Users can always access their own memos + - Users can view memos where their UUID is in the `shared_with_users` array + +## Benefits + +1. **No More Recursion**: The simple policies avoid complex joins that caused the infinite recursion +2. **Better Performance**: Array lookups are much faster than multiple table joins +3. **Automatic Maintenance**: The triggers keep everything in sync without requiring code changes +4. 
**Same Functionality**: Users still get the same sharing behavior, just implemented more efficiently + +## Verification + +You can verify the solution is working by checking: + +```sql +-- Check the data in our helper column for a specific memo +SELECT id, title, user_id, shared_with_users +FROM memos +WHERE id = 'your-memo-id'; +``` + +This should show the memo with a list of user IDs in the `shared_with_users` array, including both the memo owner and all members of spaces the memo is shared with. + +## Troubleshooting + +If you encounter issues with the sharing functionality: + +1. Check if the triggers are properly updating the `shared_with_users` array +2. Verify that the `space_members` table is correctly populated +3. Ensure the `memo_spaces` table correctly links memos to spaces + +You can manually update the `shared_with_users` array for testing: + +```sql +UPDATE memos +SET shared_with_users = array_append(shared_with_users, 'user-uuid-here') +WHERE id = 'memo-id-here'; +``` diff --git a/apps/memoro/apps/backend/docs/memo-sharing-security-review.md b/apps/memoro/apps/backend/docs/memo-sharing-security-review.md new file mode 100644 index 000000000..a82dfb6eb --- /dev/null +++ b/apps/memoro/apps/backend/docs/memo-sharing-security-review.md @@ -0,0 +1,186 @@ +# Memoro Space Sharing - Security Review + +This document provides a security review of the denormalized access control solution implemented to fix the infinite recursion issue in Memoro's space sharing functionality. + +## Security Assessment Summary + +**Overall Security Rating: ✅ SECURE** + +The denormalized access control approach maintains the same security model while improving performance and reliability. This approach is commonly used in high-security applications to avoid complex RLS policy joins while maintaining strict access controls. + +## Detailed Security Analysis + +### 1. 
Access Control Integrity + +✅ **Authorization Logic Preserved** +- The solution maintains the same access rules - users can only access memos they own or that are shared with them through spaces. +- No security bypass vectors were introduced in the implementation. + +✅ **Permission Validation** +- The solution continues to use PostgreSQL's RLS mechanism for enforcing access control policies. +- The `auth.uid()` function ensures that user identity is validated by the database system. + +### 2. Data Exposure Risks + +✅ **No Sensitive Data Leakage** +- The `shared_with_users` array only contains user IDs, not sensitive information. +- No memo content is exposed to unauthorized users. + +✅ **Data Integrity** +- Triggers ensure that the denormalized data (shared_with_users array) stays consistent with the normalized data model. +- All updates to the denormalized column are performed atomically. + +### 3. SQL Injection Protection + +✅ **Parameterized Values** +- All user inputs are properly parameterized through the `auth.uid()` function. +- No user-supplied values are concatenated directly into SQL queries. + +✅ **PL/pgSQL Security** +- The trigger functions use proper SQL constructs without any dynamic SQL. +- All database operations use static, prepared statements. + +### 4. Trigger Implementation Security + +✅ **Atomic Updates** +- Updates are performed atomically, ensuring no inconsistent states. +- PostgreSQL's transaction safety ensures rollbacks on errors. + +✅ **Privilege Control** +- The triggers operate with database-level permissions, not user-level permissions. +- This ensures consistent enforcement of access controls regardless of the user context. + +## Improvements Implemented + +### 1. 
Error Logging in Triggers + +We've enhanced the trigger functions with comprehensive error logging: + +```sql +CREATE OR REPLACE FUNCTION update_memo_shared_with_users() +RETURNS TRIGGER AS $$ +DECLARE + affected_rows integer; + error_message text; +BEGIN + -- Handle NULL memo_id + IF NEW.memo_id IS NULL THEN + RAISE LOG 'update_memo_shared_with_users: memo_id is NULL, skipping update'; + RETURN NEW; + END IF; + + BEGIN + -- Update the shared_with_users array for the affected memo + UPDATE memos + SET shared_with_users = ( + SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[]) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = NEW.memo_id + ) + WHERE id = NEW.memo_id; + + GET DIAGNOSTICS affected_rows = ROW_COUNT; + RAISE LOG 'update_memo_shared_with_users: Updated memo %, affected % rows', NEW.memo_id, affected_rows; + + EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT; + RAISE LOG 'update_memo_shared_with_users error: %', error_message; + -- Don't re-raise the exception to avoid breaking functionality + END; + + RETURN NEW; +END; +$$ LANGUAGE plpgsql; +``` + +### 2. 
NULL Handling in Triggers + +We've added explicit NULL handling to prevent errors when processing NULL values: + +```sql +CREATE OR REPLACE FUNCTION update_all_memos_for_space() +RETURNS TRIGGER AS $$ +DECLARE + affected_rows integer; + error_message text; + space_id_value uuid; +BEGIN + -- Handle NULL space_id in both NEW and OLD + IF (TG_OP = 'DELETE' AND OLD.space_id IS NULL) OR + (TG_OP IN ('INSERT', 'UPDATE') AND NEW.space_id IS NULL) THEN + RAISE LOG 'update_all_memos_for_space: space_id is NULL, skipping update'; + RETURN COALESCE(NEW, OLD); + END IF; + + -- Determine which space_id to use + IF TG_OP = 'DELETE' THEN + space_id_value := OLD.space_id; + ELSE + space_id_value := NEW.space_id; + END IF; + + RAISE LOG 'update_all_memos_for_space: Processing space_id %', space_id_value; + + BEGIN + -- For each memo in the space, update its shared_with_users array + UPDATE memos m + SET shared_with_users = ( + SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[]) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = m.id + AND ms.space_id = space_id_value + ) + WHERE m.id IN ( + SELECT memo_id FROM memo_spaces WHERE space_id = space_id_value + ); + + GET DIAGNOSTICS affected_rows = ROW_COUNT; + RAISE LOG 'update_all_memos_for_space: Updated memos for space %, affected % rows', + space_id_value, affected_rows; + + EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT; + RAISE LOG 'update_all_memos_for_space error: %', error_message; + -- Don't re-raise the exception to avoid breaking functionality + END; + + RETURN COALESCE(NEW, OLD); +END; +$$ LANGUAGE plpgsql; +``` + +## Additional Security Considerations + +### 1. Public Memo Access + +For full feature parity, consider adding a policy for public memos: + +```sql +CREATE POLICY "Users can view public memos" +ON memos FOR SELECT +USING (is_public = true); +``` + +### 2. 
Admin Access Policy + +If needed, consider adding an administrative access policy: + +```sql +CREATE POLICY "Admins can access all memos" +ON memos FOR ALL +USING (auth.uid() IN (SELECT id FROM admin_users)); +``` + +### 3. Monitoring Considerations + +- **Log Review**: Regularly review PostgreSQL logs for trigger errors using the new logging functionality +- **Performance Monitoring**: Monitor the performance of the array-based policy evaluation +- **Access Auditing**: Consider implementing an audit log for sensitive memo access + +## Conclusion + +The denormalized access control solution is secure and follows database security best practices. The improvements made to error logging and NULL handling further enhance the robustness of the implementation. + +This approach not only resolves the infinite recursion issue but does so in a way that maintains the security integrity of the system while improving its performance and reliability. diff --git a/apps/memoro/apps/backend/docs/realtime-payload-limit-fix.md b/apps/memoro/apps/backend/docs/realtime-payload-limit-fix.md new file mode 100644 index 000000000..8b03c0ba4 --- /dev/null +++ b/apps/memoro/apps/backend/docs/realtime-payload-limit-fix.md @@ -0,0 +1,115 @@ +# Fixing "Payload String Too Long" Error in Supabase Realtime + +## The Problem + +During transcription completion, the memoro service was failing with the following error: + +``` +Error: Failed to update memo: payload string too long +PostgreSQL Error Code: 22023 +``` + +This error occurred when updating memos with transcription results, even for relatively small transcriptions (4-30 minutes of audio). + +## Initial Assumptions (Incorrect) + +### Assumption 1: HTTP Request Payload Limit +**What we thought:** The error was caused by Supabase's HTTP API having a small payload size limit for PATCH requests. 
+ +**Evidence that seemed to support this:** +- Error occurred during database UPDATE operations +- Supabase logs showed PATCH requests with `content_length` of 9.7KB and 28KB +- The error message "payload string too long" seemed to indicate a size limit + +**Why this was wrong:** Supabase's HTTP API actually supports payloads up to 1GB, far exceeding our transcription data size. + +### Assumption 2: Database Column Size Limit +**What we thought:** The PostgreSQL database had column size limits that were being exceeded. + +**Evidence that seemed to support this:** +- Database columns were `text` and `jsonb` types +- Large speaker diarization data (utterances, speakers) was being stored + +**Why this was wrong:** PostgreSQL `text` and `jsonb` columns can store much larger data than we were sending. + +## The Real Issue: PostgreSQL NOTIFY Payload Limit + +### Root Cause +The error was actually caused by **Supabase Realtime's internal use of PostgreSQL's NOTIFY/LISTEN mechanism**, which has a hard limit of **8000 bytes** for payload size. + +### How It Works +1. **Supabase Realtime** uses PostgreSQL's NOTIFY/LISTEN for real-time updates +2. When a row is updated, the **entire row data** is sent through NOTIFY +3. Our transcription data (source with utterances + transcript + metadata) exceeded 8000 bytes +4. PostgreSQL threw error code **22023: "payload string too long"** + +### Key Evidence +- Error code `22023` is specifically related to NOTIFY payload limits +- The error occurred even with small payloads (9.7KB) because NOTIFY limit is only 8KB +- Updates worked fine when not subscribed to realtime + +## The Solution + +### What We Did +Changed the table's **replica identity** to only include the primary key: + +```sql +ALTER TABLE public.memos REPLICA IDENTITY USING INDEX memos_pkey; +``` + +### How This Fixes It +1. **Before:** Realtime notifications included all column data from the updated row +2. 
**After:** Realtime notifications only include the primary key (`id`) +3. **Result:** NOTIFY payload stays well under the 8000-byte limit + +### Impact on Frontend +- **Realtime notifications now only contain the memo `id`** +- **Frontend must fetch full memo data separately** when receiving notifications +- **More efficient:** Avoids sending large payloads unnecessarily +- **No breaking changes:** Frontend can handle this gracefully + +## Alternative Solutions Considered + +### Option 1: Split Updates +**Approach:** Break large updates into multiple smaller PATCH requests +**Why rejected:** Wouldn't solve the NOTIFY payload issue + +### Option 2: Disable Realtime +**Approach:** Remove memos table from `supabase_realtime` publication +**Why rejected:** Frontend needs realtime updates for user experience + +### Option 3: Column-Specific Publication +**Approach:** Only publish specific columns to realtime +**Why rejected:** Complex to maintain and still risky with metadata growth + +## Prevention for Future + +### Database Design +- **Consider realtime payload size** when designing tables with large columns +- **Separate large data** into different tables if realtime is needed +- **Use replica identity wisely** to control what data is sent via NOTIFY + +### Development Process +- **Test with realistic data sizes** including speaker diarization data +- **Monitor Supabase logs** for realtime-related errors +- **Understand the difference** between HTTP payload limits and NOTIFY limits + +## Key Learnings + +1. **Supabase Realtime uses PostgreSQL NOTIFY** with an 8000-byte limit +2. **Error code 22023** specifically indicates NOTIFY payload issues +3. **Replica identity controls** what data is sent in realtime notifications +4. **HTTP API limits and NOTIFY limits are completely different** systems +5. 
**Real-time efficiency** often benefits from sending only IDs, not full data + +## Documentation References + +- [PostgreSQL NOTIFY Documentation](https://www.postgresql.org/docs/current/sql-notify.html) +- [Supabase Realtime Quotas](https://supabase.com/docs/guides/realtime/quotas) +- [PostgreSQL Replica Identity](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) + +## Resolution Status + +✅ **Fixed**: Transcription completion now works without payload errors +✅ **Tested**: Updates to large transcript and source data work correctly +✅ **Verified**: Realtime notifications still function (with ID-only payloads) \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/settings-guide.md b/apps/memoro/apps/backend/docs/settings-guide.md new file mode 100644 index 000000000..f47e8291c --- /dev/null +++ b/apps/memoro/apps/backend/docs/settings-guide.md @@ -0,0 +1,582 @@ +# Memoro Settings Management Guide + +The Memoro service provides comprehensive user settings management through integration with the Mana Core Middleware. This allows users to manage both Memoro-specific settings and general profile information. + +## Overview + +The settings system provides: +- **Memoro-specific settings** (data usage acceptance, preferences) +- **General profile management** (name, avatar) +- **Centralized storage** via Mana Core's `app_settings` JSONB field +- **JWT-authenticated access** with user isolation + +## Architecture + +``` +Frontend → Memoro Service → Mana Core Middleware → Supabase Database +``` + +1. **Frontend** calls Memoro service settings endpoints +2. **Memoro Service** forwards requests to Mana Core Middleware +3. **Mana Core** updates the `users.app_settings` JSONB field +4. **Response** flows back through the chain + +## API Endpoints + +All endpoints require JWT authentication via `Authorization: Bearer ` header. + +### 1. 
Get All User Settings + +```http +GET /settings +Authorization: Bearer +``` + +**Response:** +```json +{ + "settings": { + "memoro": { + "dataUsageAcceptance": true + }, + "other_apps": { + "theme": "dark" + } + } +} +``` + +### 2. Get Memoro-Specific Settings + +```http +GET /settings/memoro +Authorization: Bearer +``` + +**Response:** +```json +{ + "settings": { + "dataUsageAcceptance": true, + "emailNewsletterOptIn": false, + "language": "en", + "defaultSpaceId": "uuid-here" + } +} +``` + +### 3. Update Memoro Settings + +```http +PATCH /settings/memoro +Authorization: Bearer +Content-Type: application/json + +{ + "dataUsageAcceptance": true, + "language": "en", + "customSetting": "value" +} +``` + +**Response:** +```json +{ + "success": true, + "settings": { + "memoro": { + "dataUsageAcceptance": true, + "language": "en", + "customSetting": "value" + } + }, + "message": "Memoro settings updated successfully" +} +``` + +### 4. Update Data Usage Acceptance (Convenience Endpoint) + +```http +PATCH /settings/memoro/data-usage +Authorization: Bearer +Content-Type: application/json + +{ + "accepted": true +} +``` + +**Response:** +```json +{ + "success": true, + "settings": { + "memoro": { + "dataUsageAcceptance": true + } + }, + "message": "Data usage accepted successfully" +} +``` + +### 5. Update Email Newsletter Opt-In (Convenience Endpoint) + +```http +PATCH /settings/memoro/email-newsletter +Authorization: Bearer +Content-Type: application/json + +{ + "optIn": true +} +``` + +**Response:** +```json +{ + "success": true, + "settings": { + "memoro": { + "emailNewsletterOptIn": true + } + }, + "message": "Email newsletter opted in successfully" +} +``` + +### 6. 
Update User Profile + +```http +PATCH /settings/profile +Authorization: Bearer +Content-Type: application/json + +{ + "firstName": "John", + "lastName": "Doe", + "avatarUrl": "https://example.com/avatar.jpg" +} +``` + +**Response:** +```json +{ + "success": true, + "user": { + "id": "uuid", + "email": "user@example.com", + "first_name": "John", + "last_name": "Doe", + "avatar_url": "https://example.com/avatar.jpg", + "app_settings": { + "memoro": { + "dataUsageAcceptance": true + } + } + }, + "message": "Profile updated successfully" +} +``` + +## Testing Guide + +### Local Development Setup + +1. **Start Services:** +```bash +# Terminal 1 - Mana Core Middleware +cd mana-core-middleware +npm run start:dev # Port 3000 + +# Terminal 2 - Memoro Service +cd memoro-service +npm run start:dev # Port 3001 +``` + +2. **Get JWT Token:** +```bash +export TOKEN=$(curl -s -X POST "http://localhost:3000/auth/signin?appId=973da0c1-b479-4dac-a1b0-ed09c72caca8" \ + -H "Content-Type: application/json" \ + -d '{"email": "nils.weiser@memoro.ai", "password": "Test123!"}' | jq -r '.accessToken') + +echo "Token: $TOKEN" +``` + +### Test Commands + +```bash +# Get all settings +curl -H "Authorization: Bearer $TOKEN" \ + "http://localhost:3001/settings" + +# Get Memoro settings only +curl -H "Authorization: Bearer $TOKEN" \ + "http://localhost:3001/settings/memoro" + +# Accept data usage +curl -X PATCH \ + -H "Authorization: Bearer $TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"accepted": true}' \ + "http://localhost:3001/settings/memoro/data-usage" + +# Opt into email newsletter +curl -X PATCH \ + -H "Authorization: Bearer $TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"optIn": true}' \ + "http://localhost:3001/settings/memoro/email-newsletter" + +# Update multiple Memoro settings +curl -X PATCH \ + -H "Authorization: Bearer $TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"dataUsageAcceptance": false, "emailNewsletterOptIn": true, "language": "de"}' \ + 
"http://localhost:3001/settings/memoro" + +# Update profile +curl -X PATCH \ + -H "Authorization: Bearer $TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"firstName": "Nils", "lastName": "Weiser"}' \ + "http://localhost:3001/settings/profile" +``` + +### Expected Results + +**Empty settings (first time):** +```json +{ + "settings": {} +} +``` + +**After data usage acceptance:** +```json +{ + "settings": { + "memoro": { + "dataUsageAcceptance": true + } + } +} +``` + +**After multiple updates:** +```json +{ + "settings": { + "memoro": { + "dataUsageAcceptance": false, + "emailNewsletterOptIn": true, + "language": "de" + } + } +} +``` + +## Memoro Settings Schema + +### Core Settings + +| Setting | Type | Default | Description | +|---------|------|---------|-------------| +| `dataUsageAcceptance` | boolean | `false` | Whether user accepts data usage for AI processing | +| `emailNewsletterOptIn` | boolean | `false` | Whether user opts into email newsletter | +| `language` | string | `"en"` | User's preferred language | +| `defaultSpaceId` | string | `null` | Default space for new recordings | + +### Future Settings (Examples) + +| Setting | Type | Default | Description | +|---------|------|---------|-------------| +| `autoTranscribe` | boolean | `true` | Auto-start transcription on upload | +| `notificationPreferences` | object | `{}` | Email/push notification settings | +| `transcriptionSettings` | object | `{}` | Transcription quality, language detection | +| `uiPreferences` | object | `{}` | Theme, layout preferences | + +## Error Handling + +### Common Errors + +**400 Bad Request - Missing fields:** +```json +{ + "message": "At least one setting field is required", + "error": "Bad Request", + "statusCode": 400 +} +``` + +**400 Bad Request - Invalid data type:** +```json +{ + "message": "accepted field must be a boolean", + "error": "Bad Request", + "statusCode": 400 +} +``` + +**401 Unauthorized:** +```json +{ + "message": "Unauthorized", + "statusCode": 
401 +} +``` + +### Service Communication Errors + +If Mana Core Middleware is down: +```json +{ + "message": "Failed to update Memoro settings: Failed to connect to Mana Core", + "error": "Bad Request", + "statusCode": 400 +} +``` + +## Frontend Integration Examples + +### React Hook Example + +```typescript +// useSettings.ts +import { useState, useEffect } from 'react'; + +interface MemoroSettings { + dataUsageAcceptance?: boolean; + emailNewsletterOptIn?: boolean; + language?: string; + defaultSpaceId?: string; +} + +export function useSettings() { + const [settings, setSettings] = useState({}); + const [loading, setLoading] = useState(false); + + const getSettings = async () => { + setLoading(true); + try { + const response = await fetch('/settings/memoro', { + headers: { Authorization: `Bearer ${getToken()}` } + }); + const data = await response.json(); + setSettings(data.settings); + } catch (error) { + console.error('Failed to get settings:', error); + } finally { + setLoading(false); + } + }; + + const updateDataUsage = async (accepted: boolean) => { + try { + const response = await fetch('/settings/memoro/data-usage', { + method: 'PATCH', + headers: { + Authorization: `Bearer ${getToken()}`, + 'Content-Type': 'application/json' + }, + body: JSON.stringify({ accepted }) + }); + + if (response.ok) { + await getSettings(); // Refresh settings + } + } catch (error) { + console.error('Failed to update data usage:', error); + } + }; + + const updateEmailNewsletter = async (optIn: boolean) => { + try { + const response = await fetch('/settings/memoro/email-newsletter', { + method: 'PATCH', + headers: { + Authorization: `Bearer ${getToken()}`, + 'Content-Type': 'application/json' + }, + body: JSON.stringify({ optIn }) + }); + + if (response.ok) { + await getSettings(); // Refresh settings + } + } catch (error) { + console.error('Failed to update email newsletter:', error); + } + }; + + return { + settings, + loading, + getSettings, + updateDataUsage, + 
updateEmailNewsletter + }; +} +``` + +### Data Usage Consent Component + +```typescript +// DataUsageConsent.tsx +import React from 'react'; +import { useSettings } from './useSettings'; + +export function DataUsageConsent() { + const { settings, updateDataUsage, loading } = useSettings(); + + const handleAccept = () => updateDataUsage(true); + const handleDecline = () => updateDataUsage(false); + + if (settings.dataUsageAcceptance === true) { + return
 <div>✅ Data usage accepted</div>;
  }

  return (
    <div>
      <h3>Data Usage Consent</h3>
      <p>Do you consent to AI processing of your audio data?</p>
      <button onClick={handleAccept} disabled={loading}>
        Accept
      </button>
      <button onClick={handleDecline} disabled={loading}>
        Decline
      </button>
    </div>
  );
}
```

### Email Newsletter Subscription Component

```typescript
// EmailNewsletterSubscription.tsx
import React from 'react';
import { useSettings } from './useSettings';

export function EmailNewsletterSubscription() {
  const { settings, updateEmailNewsletter, loading } = useSettings();

  const handleOptIn = () => updateEmailNewsletter(true);
  const handleOptOut = () => updateEmailNewsletter(false);

  return (
    <div>
      <h3>Email Newsletter</h3>
      <p>Stay updated with Memoro features and news</p>
      {settings.emailNewsletterOptIn ? (
        <div>
          ✅ Subscribed to newsletter
          <button onClick={handleOptOut} disabled={loading}>
            Unsubscribe
          </button>
        </div>
      ) : (
        <div>
          📧 Not subscribed
          <button onClick={handleOptIn} disabled={loading}>
            Subscribe
          </button>
        </div>
      )}
    </div>
  );
}
```

### Combined Settings Component

```typescript
// SettingsPage.tsx
import React from 'react';
import { DataUsageConsent } from './DataUsageConsent';
import { EmailNewsletterSubscription } from './EmailNewsletterSubscription';

export function SettingsPage() {
  return (
    <div>
      <h1>Memoro Settings</h1>

      <section>
        <h2>Privacy & Data</h2>
        <DataUsageConsent />
      </section>

      <section>
        <h2>Communication</h2>
        <EmailNewsletterSubscription />
      </section>
    </div>
+ ); +} +``` + +## Configuration + +### Environment Variables + +Ensure `MANA_SERVICE_URL` is properly configured: + +```env +# memoro-service/.env +MANA_SERVICE_URL=http://localhost:3000 # Local development +# or +MANA_SERVICE_URL=https://mana-core-middleware.run.app # Production +``` + +### Service Dependencies + +The settings endpoints depend on: +1. **Mana Core Middleware** being accessible +2. **Supabase database** connection +3. **JWT authentication** working properly + +## Monitoring + +### Health Checks + +Monitor settings service health: +```bash +# Check if Memoro service can reach Mana Core +curl -H "Authorization: Bearer $TOKEN" \ + "http://localhost:3001/settings/memoro" +``` + +### Logging + +Look for these log patterns: +``` +[SettingsClientService] Error getting user settings: Failed to connect +[SettingsController] Failed to update Memoro settings: User not found +``` + +## Future Enhancements + +1. **Settings Validation**: JSON schema validation for settings +2. **Settings Migration**: Automatic migration for schema changes +3. **Settings Sync**: Real-time sync across devices +4. **Settings Backup**: Export/import functionality +5. **Settings Analytics**: Track which settings are most used \ No newline at end of file diff --git a/apps/memoro/apps/backend/docs/simplified-space-sync-service.md b/apps/memoro/apps/backend/docs/simplified-space-sync-service.md new file mode 100644 index 000000000..fa98bcafd --- /dev/null +++ b/apps/memoro/apps/backend/docs/simplified-space-sync-service.md @@ -0,0 +1,483 @@ +# Simplified SpaceSyncService + +This document outlines a simplified version of the `SpaceSyncService` that leverages the new database-level triggers and denormalized access control approach. 
+ +## Simplified Implementation + +```typescript +import { Injectable, Logger } from '@nestjs/common'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { firstValueFrom } from 'rxjs'; +import { createClient, SupabaseClient } from '@supabase/supabase-js'; +import { v4 as uuidv4 } from 'uuid'; + +@Injectable() +export class SpaceSyncService { + private readonly logger = new Logger(SpaceSyncService.name); + private supabase: SupabaseClient; + private manaApiUrl: string; + private adminToken: string; + + constructor( + private readonly configService: ConfigService, + private readonly httpService: HttpService, + ) { + // Initialize Supabase client + this.supabase = createClient( + this.configService.get('MEMORO_SUPABASE_URL'), + this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'), + ); + this.manaApiUrl = this.configService.get('MANA_CORE_URL'); + this.adminToken = this.configService.get('ADMIN_TOKEN'); + } + + /** + * Create or update a space member record + * This is called when a user is added to a space or their role changes + */ + async syncSpaceMembership( + spaceId: string, + userId: string, + role: string, + addedBy?: string, + ): Promise<{ success: boolean; message: string }> { + try { + // Generate a UUID for the record if it doesn't exist + const id = uuidv4(); + + // Check if the membership already exists + const { data: existingMember } = await this.supabase + .from('space_members') + .select('*') + .eq('space_id', spaceId) + .eq('user_id', userId) + .single(); + + if (existingMember) { + // Update existing membership + const { error } = await this.supabase + .from('space_members') + .update({ + role, + added_by: addedBy || existingMember.added_by, + }) + .eq('space_id', spaceId) + .eq('user_id', userId); + + if (error) throw error; + this.logger.log(`Updated space membership for user ${userId} in space ${spaceId}`); + } else { + // Create new membership + const { error } = await this.supabase + 
.from('space_members') + .insert({ + id, + space_id: spaceId, + user_id: userId, + role, + added_by: addedBy || userId, + added_at: new Date(), + }); + + if (error) throw error; + this.logger.log(`Added user ${userId} to space ${spaceId}`); + } + + return { success: true, message: 'Space membership synced successfully' }; + } catch (error) { + this.logger.error(`Error syncing space membership: ${error.message}`, error.stack); + return { success: false, message: error.message }; + } + } + + /** + * Remove a user from a space + */ + async removeSpaceMembership( + spaceId: string, + userId: string, + ): Promise<{ success: boolean; message: string }> { + try { + const { error } = await this.supabase + .from('space_members') + .delete() + .eq('space_id', spaceId) + .eq('user_id', userId); + + if (error) throw error; + this.logger.log(`Removed user ${userId} from space ${spaceId}`); + + return { success: true, message: 'Space membership removed successfully' }; + } catch (error) { + this.logger.error(`Error removing space membership: ${error.message}`, error.stack); + return { success: false, message: error.message }; + } + } + + /** + * Sync all members for a specific space + * Used when initializing a space or ensuring all memberships are in sync + */ + async syncSpaceMembers( + spaceId: string, + ): Promise<{ success: boolean; message: string; count?: number }> { + try { + // Fetch space members from middleware + const response = await firstValueFrom( + this.httpService.get(`${this.manaApiUrl}/api/spaces/${spaceId}/members`, { + headers: { Authorization: `Bearer ${this.adminToken}` }, + }), + ); + + const members = response.data.members || []; + + if (members.length === 0) { + return { success: true, message: 'No members found for space', count: 0 }; + } + + // First, delete all existing members for this space to avoid stale records + await this.supabase + .from('space_members') + .delete() + .eq('space_id', spaceId); + + // Then insert all current members + const 
membersToInsert = members.map((member) => ({ + id: uuidv4(), + space_id: spaceId, + user_id: member.user_id, + role: member.role, + added_by: member.added_by || member.user_id, + added_at: new Date(), + })); + + const { error } = await this.supabase + .from('space_members') + .insert(membersToInsert); + + if (error) throw error; + + this.logger.log(`Synced ${members.length} members for space ${spaceId}`); + + return { + success: true, + message: `Synced ${members.length} members for space ${spaceId}`, + count: members.length + }; + } catch (error) { + this.logger.error(`Error syncing space members: ${error.message}`, error.stack); + return { success: false, message: error.message }; + } + } + + /** + * Sync all spaces for a user + * Used to ensure a user has access to all their spaces + */ + async syncUserSpaces( + userId: string, + ): Promise<{ success: boolean; message: string; count?: number }> { + try { + // Fetch user's spaces from middleware + const response = await firstValueFrom( + this.httpService.get(`${this.manaApiUrl}/api/users/${userId}/spaces`, { + headers: { Authorization: `Bearer ${this.adminToken}` }, + }), + ); + + const spaces = response.data.spaces || []; + + if (spaces.length === 0) { + return { success: true, message: 'No spaces found for user', count: 0 }; + } + + // Process each space the user is a member of + let successCount = 0; + for (const space of spaces) { + const result = await this.syncSpaceMembers(space.id); + if (result.success) { + successCount++; + } + } + + this.logger.log(`Synced ${successCount} spaces for user ${userId}`); + + return { + success: true, + message: `Synced ${successCount} spaces for user ${userId}`, + count: successCount + }; + } catch (error) { + this.logger.error(`Error syncing user spaces: ${error.message}`, error.stack); + return { success: false, message: error.message }; + } + } + + /** + * Run the migration to set up the space_members table and triggers + * Only needs to be run once when setting up a new 
environment + */ + async runSpaceMembersMigration(): Promise<{ success: boolean; message: string }> { + try { + const { data: tableExists } = await this.supabase.rpc('check_table_exists', { + table_name: 'space_members', + }); + + if (tableExists) { + return { success: true, message: 'Space members table already exists' }; + } + + // Create space_members table + const createTableSQL = ` + -- Create space_members table + CREATE TABLE IF NOT EXISTS public.space_members ( + id UUID PRIMARY KEY, + space_id UUID NOT NULL REFERENCES public.spaces(id) ON DELETE CASCADE, + user_id UUID NOT NULL, + role TEXT NOT NULL, + added_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + added_by UUID, + UNIQUE(space_id, user_id) + ); + + -- Add shared_with_users column to memos table + ALTER TABLE public.memos ADD COLUMN IF NOT EXISTS shared_with_users UUID[] DEFAULT '{}'::uuid[]; + + -- Create function for updating shared_with_users + CREATE OR REPLACE FUNCTION update_memo_shared_with_users() + RETURNS TRIGGER AS $$ + DECLARE + affected_rows integer; + error_message text; + BEGIN + -- Handle NULL memo_id + IF NEW.memo_id IS NULL THEN + RAISE LOG 'update_memo_shared_with_users: memo_id is NULL, skipping update'; + RETURN NEW; + END IF; + + BEGIN + -- Update the shared_with_users array for the affected memo + UPDATE memos + SET shared_with_users = ( + SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[]) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = NEW.memo_id + ) + WHERE id = NEW.memo_id; + + GET DIAGNOSTICS affected_rows = ROW_COUNT; + RAISE LOG 'update_memo_shared_with_users: Updated memo %, affected % rows', NEW.memo_id, affected_rows; + + EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT; + RAISE LOG 'update_memo_shared_with_users error: %', error_message; + -- Don't re-raise the exception to avoid breaking functionality + END; + + RETURN NEW; + END; + $$ LANGUAGE plpgsql; + + -- Create function for 
updating all memos in a space + CREATE OR REPLACE FUNCTION update_all_memos_for_space() + RETURNS TRIGGER AS $$ + DECLARE + affected_rows integer; + error_message text; + space_id_value uuid; + BEGIN + -- Handle NULL space_id in both NEW and OLD + IF (TG_OP = 'DELETE' AND OLD.space_id IS NULL) OR + (TG_OP IN ('INSERT', 'UPDATE') AND NEW.space_id IS NULL) THEN + RAISE LOG 'update_all_memos_for_space: space_id is NULL, skipping update'; + RETURN COALESCE(NEW, OLD); + END IF; + + -- Determine which space_id to use + IF TG_OP = 'DELETE' THEN + space_id_value := OLD.space_id; + ELSE + space_id_value := NEW.space_id; + END IF; + + RAISE LOG 'update_all_memos_for_space: Processing space_id %', space_id_value; + + BEGIN + -- For each memo in the space, update its shared_with_users array + UPDATE memos m + SET shared_with_users = ( + SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[]) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = m.id + AND ms.space_id = space_id_value + ) + WHERE m.id IN ( + SELECT memo_id FROM memo_spaces WHERE space_id = space_id_value + ); + + GET DIAGNOSTICS affected_rows = ROW_COUNT; + RAISE LOG 'update_all_memos_for_space: Updated memos for space %, affected % rows', + space_id_value, affected_rows; + + EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT; + RAISE LOG 'update_all_memos_for_space error: %', error_message; + -- Don't re-raise the exception to avoid breaking functionality + END; + + RETURN COALESCE(NEW, OLD); + END; + $$ LANGUAGE plpgsql; + + -- Create triggers + DROP TRIGGER IF EXISTS memo_spaces_insert_update_trigger ON memo_spaces; + CREATE TRIGGER memo_spaces_insert_update_trigger + AFTER INSERT OR UPDATE ON memo_spaces + FOR EACH ROW + EXECUTE FUNCTION update_memo_shared_with_users(); + + DROP TRIGGER IF EXISTS memo_spaces_delete_trigger ON memo_spaces; + CREATE TRIGGER memo_spaces_delete_trigger + AFTER DELETE ON memo_spaces + FOR EACH ROW + 
EXECUTE FUNCTION update_memo_shared_with_users(); + + DROP TRIGGER IF EXISTS space_members_trigger ON space_members; + CREATE TRIGGER space_members_trigger + AFTER INSERT OR UPDATE OR DELETE ON space_members + FOR EACH ROW + EXECUTE FUNCTION update_all_memos_for_space(); + + -- Create simplified RLS policies + ALTER TABLE public.memos ENABLE ROW LEVEL SECURITY; + + DO $$ + BEGIN + -- string_agg() returns NULL when no policies exist; COALESCE to a no-op so EXECUTE never receives NULL + EXECUTE COALESCE( + ( + SELECT string_agg('DROP POLICY IF EXISTS "' || policyname || '" ON memos;', ' ') + FROM pg_policies + WHERE tablename = 'memos' + ), + 'SELECT NULL;' + ); + END $$; + + -- Create simplified policies that use the denormalized column + CREATE POLICY "Users can access own memos" + ON memos FOR ALL + USING (user_id = auth.uid()::text); + + CREATE POLICY "Users can view shared memos" + ON memos FOR SELECT + USING (auth.uid()::uuid = ANY(shared_with_users)); + + -- Add policy for public memos if needed + CREATE POLICY "Users can view public memos" + ON memos FOR SELECT + USING (is_public = true); + `; + + // Run the migration SQL + const { error } = await this.supabase.rpc('run_sql', { sql: createTableSQL }); + + if (error) throw error; + + // Initialize shared_with_users arrays for existing memos + await this.supabase.rpc('run_sql', { + sql: ` + -- Populate the shared_with_users column for all existing memos + UPDATE memos m + SET shared_with_users = ( + SELECT COALESCE(array_agg(DISTINCT sm.user_id), '{}'::uuid[]) + FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = m.id + ); + ` + }); + + this.logger.log('Space members migration completed successfully'); + + return { success: true, message: 'Space members migration completed successfully' }; + } catch (error) { + this.logger.error(`Error running space members migration: ${error.message}`, error.stack); + return { success: false, message: error.message }; + } + } +} +``` + +## Key Differences from Original Implementation + +1. 
**Simplified Methods**: + - Removes the complex recursive RLS policy management + - Focuses only on CRUD operations for the `space_members` table + - Leverages database triggers to maintain the denormalized data + +2. **Reduced Complexity**: + - The service now has a clear, focused purpose: manage space membership data + - All complex access control logic is now handled at the database level + - The migration script sets up the triggers and the denormalized column + +3. **Improved Error Handling**: + - More robust error handling and logging throughout + - Better handling of edge cases like missing data + - Includes NULL checks and logging in database triggers + +## Controller Methods + +The corresponding controller methods would be simplified as well: + +```typescript +@Controller('memoro') +export class SpaceSyncController { + constructor(private readonly spaceSyncService: SpaceSyncService) {} + + @Post('spaces/:spaceId/sync-members') + async syncSpaceMembers(@Param('spaceId') spaceId: string) { + return this.spaceSyncService.syncSpaceMembers(spaceId); + } + + @Post('users/:userId/sync-spaces') + async syncUserSpaces(@Param('userId') userId: string) { + return this.spaceSyncService.syncUserSpaces(userId); + } + + @Post('run-space-members-migration') + async runSpaceMembersMigration() { + return this.spaceSyncService.runSpaceMembersMigration(); + } +} +``` + +## Integration with MemoroService + +The MemoroService would need only minimal integration with the SpaceSyncService: + +```typescript +// In MemoroService.ts +async createMemoroSpace(userId: string, spaceName: string, token: string) { + const space = await this.spacesService.createSpace(userId, spaceName, token); + // Only need to maintain the space_members table + await this.spaceSyncService.syncSpaceMembership(space.id, userId, 'owner'); + return space; +} + +async inviteUserToSpace(userId: string, spaceId: string, email: string, role: string, token: string) { + const result = await 
this.spacesService.addSpaceMember(spaceId, email, role, token); + if (result.invitee_id) { + // Only need to maintain the space_members table when a user is invited + await this.spaceSyncService.syncSpaceMembership(spaceId, result.invitee_id, role, userId); + } + return result; +} + +async removeUserFromSpace(userId: string, spaceId: string, memberId: string, token: string) { + const result = await this.spacesService.removeSpaceMember(spaceId, memberId, token); + // Remove from space_members table + await this.spaceSyncService.removeSpaceMembership(spaceId, memberId); + return result; +} +``` diff --git a/apps/memoro/apps/backend/env.example b/apps/memoro/apps/backend/env.example new file mode 100644 index 000000000..8145adbae --- /dev/null +++ b/apps/memoro/apps/backend/env.example @@ -0,0 +1,25 @@ +# Server Configuration +PORT=3001 +NODE_ENV=development + +# Service URLs +MANA_SERVICE_URL=https://mana-core-middleware-111768794939.europe-west3.run.app +AUDIO_MICROSERVICE_URL=https://audio-microservice-111768794939.europe-west3.run.app + +# App Configuration +MEMORO_APP_ID=973da0c1-b479-4dac-a1b0-ed09c72caca8 + +# JWT Configuration for Service Role Authentication +MANA_JWT_SECRET=your_mana_jwt_secret + +# Mana Core Service Key (for service-to-service credit operations) +MANA_SUPABASE_SECRET_KEY=your_mana_service_role_key + +# Memoro Supabase Configuration +MEMORO_SUPABASE_URL=https://your-memoro-project.supabase.co +MEMORO_SUPABASE_ANON_KEY=your-memoro-anon-key +MEMORO_SUPABASE_SERVICE_KEY=your-memoro-service-key + +# Test Configuration +TEST_EMAIL=your_test_email@example.com +TEST_PASSWORD=your_test_password \ No newline at end of file diff --git a/apps/memoro/apps/backend/jest.config.js b/apps/memoro/apps/backend/jest.config.js new file mode 100644 index 000000000..210cc2e5b --- /dev/null +++ b/apps/memoro/apps/backend/jest.config.js @@ -0,0 +1,21 @@ +module.exports = { + moduleFileExtensions: ['js', 'json', 'ts'], + rootDir: 'src', + testRegex: 
'.*\\.spec\\.ts$', + transform: { + '^.+\\.(t|j)s$': 'ts-jest', + }, + collectCoverageFrom: [ + '**/*.(t|j)s', + '!**/*.module.ts', + '!**/main.ts', + '!**/*.interface.ts', + '!**/*.dto.ts', + ], + coverageDirectory: '../coverage', + testEnvironment: 'node', + moduleNameMapper: { + '^src/(.*)$': '<rootDir>/$1', + }, + setupFilesAfterEach: ['<rootDir>/../test/jest-setup.ts'], +}; diff --git a/apps/memoro/apps/backend/package.json b/apps/memoro/apps/backend/package.json new file mode 100644 index 000000000..477b92888 --- /dev/null +++ b/apps/memoro/apps/backend/package.json @@ -0,0 +1,50 @@ +{ + "name": "@memoro/backend", + "version": "0.1.0", + "description": "Memoro microservice for Mana core system", + "main": "dist/main.js", + "scripts": { + "build": "nest build", + "start": "nest start", + "start:dev": "nest start --watch", + "start:debug": "nest start --debug --watch", + "start:prod": "node dist/src/main", + "lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix", + "test": "jest", + "test:watch": "jest --watch", + "test:cov": "jest --coverage", + "test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand" + }, + "dependencies": { + "@nestjs/axios": "^3.0.0", + "@nestjs/common": "^10.0.0", + "@nestjs/config": "^3.0.0", + "@nestjs/core": "^10.0.0", + "@nestjs/platform-express": "^10.0.0", + "@supabase/supabase-js": "^2.49.5", + "@types/jsonwebtoken": "^9.0.7", + "@types/multer": "^1.4.12", + "@types/uuid": "^10.0.0", + "axios": "^1.9.0", + "jsonwebtoken": "^9.0.2", + "multer": "^2.0.0", + "music-metadata": "^7.14.0", + "reflect-metadata": "^0.1.13", + "rxjs": "^7.8.0", + "uuid": "^11.1.0" + }, + "devDependencies": { + "@nestjs/cli": "^10.0.0", + "@nestjs/testing": "^10.0.0", + "@types/express": "^4.17.17", + "@types/jest": "^29.5.2", + "@types/node": "^20.3.1", + "@types/supertest": "^2.0.12", + "jest": "^29.5.0", + "supertest": "^6.3.3", + "ts-jest": "^29.1.0", + "ts-node": "^10.9.1", + "tsconfig-paths": "^4.2.0", + 
"typescript": "^5.1.3" + } +} diff --git a/apps/memoro/apps/backend/scripts/check-audio-path-field.ts b/apps/memoro/apps/backend/scripts/check-audio-path-field.ts new file mode 100644 index 000000000..c299fd40a --- /dev/null +++ b/apps/memoro/apps/backend/scripts/check-audio-path-field.ts @@ -0,0 +1,106 @@ +#!/usr/bin/env ts-node + +/** + * Script to analyze and standardize audio path field usage in Memoro production database + * + * STANDARDIZATION GOAL: + * - Standardize all backend services to use 'audio_path' field consistently + * - Handle legacy 'path' field references for backward compatibility + * - Migrate any remaining 'path' fields to 'audio_path' in database + * + * CURRENT STATUS (August 25, 2025): + * - Most memos already use 'audio_path' field (92%) + * - Small subset uses legacy 'path' field (7.3%) + * - Backend services now standardized to use 'audio_path' + * + * MIGRATION APPROACH: + * - Update backend services to prioritize 'audio_path' over 'path' + * - Migrate database records from 'path' to 'audio_path' + * - Maintain backward compatibility during transition + * + * SQL QUERIES USED: + */ + +// Query 1: Overall statistics +const overallStatsQuery = ` +SELECT + COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as memos_with_audio_path, + COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as memos_with_path, + COUNT(*) as total_memos, + COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL AND source->>'path' IS NULL) as only_audio_path, + COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL AND source->>'path' IS NOT NULL) as both_fields +FROM memos +WHERE source IS NOT NULL; +`; + +// Query 2: Monthly breakdown +const monthlyBreakdownQuery = ` +SELECT + DATE_TRUNC('month', created_at) as month, + COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as with_audio_path, + COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as with_path, + COUNT(*) as total +FROM memos +WHERE source IS NOT NULL +GROUP BY month +ORDER BY 
month DESC +LIMIT 12; +`; + +// Query 3: Daily breakdown for transition period +const dailyTransitionQuery = ` +SELECT + DATE_TRUNC('day', created_at) as day, + COUNT(*) FILTER (WHERE source->>'audio_path' IS NOT NULL) as with_audio_path, + COUNT(*) FILTER (WHERE source->>'path' IS NOT NULL) as with_path, + COUNT(*) as total +FROM memos +WHERE source IS NOT NULL + AND created_at >= '2025-05-01' + AND created_at < '2025-07-01' +GROUP BY day +ORDER BY day; +`; + +// Migration query to standardize all memos to use 'audio_path' field +const migrationQuery = ` +-- DRY RUN: Check what would be migrated from 'path' to 'audio_path' +SELECT + id, + source->>'path' as current_path, + source->>'audio_path' as current_audio_path, + created_at +FROM memos +WHERE source->>'path' IS NOT NULL + AND source->>'audio_path' IS NULL +LIMIT 10; + +-- ACTUAL MIGRATION (run with caution): +-- Migrate 'path' field to 'audio_path' field +-- UPDATE memos +-- SET source = jsonb_set( +-- source - 'path', +-- '{audio_path}', +-- source->'path' +-- ) +-- WHERE source->>'path' IS NOT NULL +-- AND source->>'audio_path' IS NULL; +`; + +console.log('Audio Path Field Analysis Script'); +console.log('================================'); +console.log(''); +console.log('This script documents the analysis of the legacy audio_path field usage'); +console.log('in the Memoro production database.'); +console.log(''); +console.log('Key Findings:'); +console.log('- 92% of memos (16,223) already use the audio_path field'); +console.log('- Only 7.3% (1,286) use the legacy path field'); +console.log('- The fields are mutually exclusive (no memo has both)'); +console.log('- Brief transition attempted in May-June 2025 but mostly reverted'); +console.log(''); +console.log('Backend Standardization Complete:'); +console.log('- All backend services now standardized to use "audio_path" field'); +console.log('- Legacy "path" field handling maintained for backward compatibility'); +console.log('- Database migration needed 
for remaining 7.3% with "path" field'); +console.log('- Edge Functions already use "audio_path" consistently'); diff --git a/apps/memoro/apps/backend/src/ai/ai-model.config.ts b/apps/memoro/apps/backend/src/ai/ai-model.config.ts new file mode 100644 index 000000000..b6f453ccd --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/ai-model.config.ts @@ -0,0 +1,54 @@ +/** + * Central AI model configuration + * + * All models, endpoints, and presets in one place. + * Switching models = change only this file. + */ + +export interface GeminiConfig { + model: string; + endpoint: string; + temperature: number; + maxOutputTokens: number; +} + +export interface AzureOpenAIConfig { + endpoint: string; + deployment: string; + apiVersion: string; + temperature: number; + maxTokens: number; +} + +export interface GenerateOptions { + temperature?: number; + maxTokens?: number; +} + +// ── Primary: Google Gemini ── +// Note: gemini-2.0-flash is deprecated in June 2026 → gemini-2.0-flash-001 is the stable choice +export const GEMINI_DEFAULT: GeminiConfig = { + model: 'gemini-2.0-flash-001', + endpoint: 'https://generativelanguage.googleapis.com/v1beta/models', + temperature: 0.7, + maxOutputTokens: 8192, +}; + +// ── Fallback: Azure OpenAI ── +export const AZURE_DEFAULT: AzureOpenAIConfig = { + endpoint: 'https://memoroseopenai.openai.azure.com', + deployment: 'gpt-4.1-mini-se', + apiVersion: '2025-01-01-preview', + temperature: 0.7, + maxTokens: 8192, +}; + +// ── Task-specific presets ── +export const AI_PRESETS = { + headline: { temperature: 0.7, maxTokens: 300 }, + memory: { temperature: 0.7, maxTokens: 8192 }, + translation: { temperature: 0.3, maxTokens: 8192 }, + selection: { temperature: 0.3, maxTokens: 2048 }, +} as const; + +export type AiPreset = keyof typeof AI_PRESETS; diff --git a/apps/memoro/apps/backend/src/ai/ai.module.ts b/apps/memoro/apps/backend/src/ai/ai.module.ts new file mode 100644 index 000000000..ad2c0987b --- /dev/null +++ 
b/apps/memoro/apps/backend/src/ai/ai.module.ts @@ -0,0 +1,12 @@ +import { Module } from '@nestjs/common'; +import { AiService } from './ai.service'; +import { HeadlineService } from './headline/headline.service'; +import { MemoryService } from './memory/memory.service'; +import { QuestionService } from './memory/question.service'; +import { UserPromptService } from './shared/user-prompt.service'; + +@Module({ + providers: [AiService, HeadlineService, MemoryService, QuestionService, UserPromptService], + exports: [AiService, HeadlineService, MemoryService, QuestionService, UserPromptService], +}) +export class AiModule {} diff --git a/apps/memoro/apps/backend/src/ai/ai.service.ts b/apps/memoro/apps/backend/src/ai/ai.service.ts new file mode 100644 index 000000000..23ae20c74 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/ai.service.ts @@ -0,0 +1,141 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { + GEMINI_DEFAULT, + AZURE_DEFAULT, + type GeminiConfig, + type AzureOpenAIConfig, + type GenerateOptions, +} from './ai-model.config'; + +@Injectable() +export class AiService { + private readonly logger = new Logger(AiService.name); + private readonly geminiApiKey: string; + private readonly azureApiKey: string; + + constructor(private configService: ConfigService) { + this.geminiApiKey = this.configService.get('GEMINI_API_KEY', ''); + this.azureApiKey = this.configService.get('AZURE_OPENAI_KEY', ''); + } + + /** + * Generates text with Gemini (primary) → Azure (fallback). + * Returns the raw text content. 
+ */ + async generateText( + prompt: string, + options?: GenerateOptions & { systemInstruction?: string } + ): Promise<string> { + // Primary: Gemini + if (this.geminiApiKey) { + const result = await this.callGemini(prompt, this.geminiApiKey, options); + if (result !== null) return result; + this.logger.warn('Gemini failed, falling back to Azure OpenAI'); + } else { + this.logger.warn('No Gemini API key, using Azure OpenAI directly'); + } + + // Fallback: Azure + if (!this.azureApiKey) { + throw new Error('No AI provider available: both Gemini and Azure keys missing'); + } + const result = await this.callAzure(prompt, options); + if (result !== null) return result; + + throw new Error('All AI providers failed'); + } + + private async callGemini( + prompt: string, + apiKey: string, + options?: GenerateOptions & { systemInstruction?: string } + ): Promise<string | null> { + const config: GeminiConfig = { + ...GEMINI_DEFAULT, + temperature: options?.temperature ?? GEMINI_DEFAULT.temperature, + maxOutputTokens: options?.maxTokens ?? GEMINI_DEFAULT.maxOutputTokens, + }; + + try { + const url = `${config.endpoint}/${config.model}:generateContent?key=${apiKey}`; + const body: any = { + contents: [{ parts: [{ text: prompt }] }], + generationConfig: { + temperature: config.temperature, + maxOutputTokens: config.maxOutputTokens, + }, + }; + + if (options?.systemInstruction) { + body.systemInstruction = { + parts: [{ text: options.systemInstruction }], + }; + } + + const start = Date.now(); + const response = await fetch(url, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(body), + }); + + if (!response.ok) { + const errorText = await response.text(); + this.logger.error(`Gemini API error (${response.status}): ${errorText}`); + return null; + } + + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + this.logger.debug( + `Gemini ${config.model} responded in ${Date.now() - start}ms (${content.length} chars)` + ); + return content || null; + } catch (error) { + this.logger.error(`Gemini call failed: ${error instanceof Error ? error.message : error}`); + return null; + } + } + + private async callAzure(prompt: string, options?: GenerateOptions): Promise<string | null> { + const config: AzureOpenAIConfig = { + ...AZURE_DEFAULT, + temperature: options?.temperature ?? AZURE_DEFAULT.temperature, + maxTokens: options?.maxTokens ?? 
AZURE_DEFAULT.maxTokens, + }; + + try { + const url = `${config.endpoint}/openai/deployments/${config.deployment}/chat/completions?api-version=${config.apiVersion}`; + const start = Date.now(); + const response = await fetch(url, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': this.azureApiKey, + }, + body: JSON.stringify({ + messages: [{ role: 'user', content: prompt }], + max_tokens: config.maxTokens, + temperature: config.temperature, + }), + }); + + if (!response.ok) { + const errorText = await response.text(); + this.logger.error(`Azure OpenAI error (${response.status}): ${errorText}`); + return null; + } + + const data = await response.json(); + const content = data.choices?.[0]?.message?.content?.trim() || ''; + this.logger.debug( + `Azure ${config.deployment} responded in ${Date.now() - start}ms (${content.length} chars)` + ); + return content || null; + } catch (error) { + this.logger.error(`Azure call failed: ${error instanceof Error ? error.message : error}`); + return null; + } + } +} diff --git a/apps/memoro/apps/backend/src/ai/headline/headline.prompts.ts b/apps/memoro/apps/backend/src/ai/headline/headline.prompts.ts new file mode 100644 index 000000000..1061c351c --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/headline/headline.prompts.ts @@ -0,0 +1,219 @@ +/** + * System prompts for headline generation in different languages + * + * The prompts are used to generate headlines and intros for memos. + * Each language has its own prompt containing the language-specific requirements and formatting. 
+ */ /** + * Interface for the prompt configuration + */ /** + * System prompts for headline generation + * + * Supported languages (62): + * - de: German + * - en: English + * - fr: French + * - es: Spanish + * - it: Italian + * - nl: Dutch + * - pt: Portuguese + * - ru: Russian + * - ja: Japanese + * - ko: Korean + * - zh: Chinese + * - ar: Arabic + * - hi: Hindi + * - tr: Turkish + * - pl: Polish + * - da: Danish + * - sv: Swedish + * - nb: Norwegian + * - fi: Finnish + * - cs: Czech + * - hu: Hungarian + * - el: Greek + * - he: Hebrew + * - id: Indonesian + * - th: Thai + * - vi: Vietnamese + * - uk: Ukrainian + * - ro: Romanian + * - bg: Bulgarian + * - ca: Catalan + * - hr: Croatian + * - sk: Slovak + * - et: Estonian + * - lv: Latvian + * - lt: Lithuanian + * - bn: Bengali + * - ms: Malay + * - ta: Tamil + * - te: Telugu + * - ur: Urdu + * - mr: Marathi + * - gu: Gujarati + * - ml: Malayalam + * - kn: Kannada + * - pa: Punjabi + * - af: Afrikaans + * - fa: Persian + * - ka: Georgian + * - is: Icelandic + * - sq: Albanian + * - az: Azerbaijani + * - eu: Basque + * - gl: Galician + * - kk: Kazakh + * - mk: Macedonian + * - sr: Serbian + * - sl: Slovenian + * - mt: Maltese + * - hy: Armenian + * - uz: Uzbek + * - ga: Irish + * - cy: Welsh + * - fil: Filipino + */ export const SYSTEM_PROMPTS = { + headline: { + // German + de: 'Du bist ein Assistent, der Texte analysiert und zusammenfasst. Deine Aufgabe ist es, für den folgenden Text zwei Dinge zu erstellen:\n1. Eine kurze, prägnante Headline (maximal 8 Wörter)\n2. Ein kurzes Intro, das den Inhalt des Textes in 2-3 Sätzen zusammenfasst und neugierig macht\n\nFormatiere deine Antwort genau so:\nHEADLINE: [Deine Headline hier]\nINTRO: [Dein Intro hier]', + // English + en: 'You are an assistant that analyzes and summarizes texts. Your task is to create two things for the following text:\n1. 
A short, concise headline (maximum 8 words)\n2. A brief intro that summarizes the content of the text in 2-3 sentences and makes the reader curious\n\nFormat your answer exactly like this:\nHEADLINE: [Your headline here]\nINTRO: [Your intro here]', + // Französisch + fr: 'Vous êtes un assistant qui analyse et résume des textes. Votre tâche est de créer deux choses pour le texte suivant :\n1. Un titre court et concis (maximum 8 mots)\n2. Une brève introduction qui résume le contenu du texte en 2-3 phrases et éveille la curiosité du lecteur\n\nFormatez votre réponse exactement comme ceci :\nHEADLINE: [Votre titre ici]\nINTRO: [Votre introduction ici]', + // Spanisch + es: 'Eres un asistente que analiza y resume textos. Tu tarea es crear dos cosas para el siguiente texto:\n1. Un título breve y conciso (máximo 8 palabras)\n2. Una breve introducción que resuma el contenido del texto en 2-3 frases y despierte la curiosidad del lector\n\nFormatea tu respuesta exactamente así:\nHEADLINE: [Tu título aquí]\nINTRO: [Tu introducción aquí]', + // Italienisch + it: 'Sei un assistente che analizza e riassume testi. Il tuo compito è creare due cose per il seguente testo:\n1. Un titolo breve e conciso (massimo 8 parole)\n2. Una breve introduzione che riassume il contenuto del testo in 2-3 frasi e suscita la curiosità del lettore\n\nFormatta la tua risposta esattamente così:\nHEADLINE: [Il tuo titolo qui]\nINTRO: [La tua introduzione qui]', + // Niederländisch + nl: 'Je bent een assistent die teksten analyseert en samenvat. Je taak is om twee dingen te maken voor de volgende tekst:\n1. Een korte, bondige kop (maximaal 8 woorden)\n2. Een korte intro die de inhoud van de tekst in 2-3 zinnen samenvat en de lezer nieuwsgierig maakt\n\nFormatteer je antwoord precies zo:\nHEADLINE: [Jouw kop hier]\nINTRO: [Jouw intro hier]', + // Portugiesisch + pt: 'Você é um assistente que analisa e resume textos. Sua tarefa é criar duas coisas para o seguinte texto:\n1. 
Uma manchete breve e concisa (máximo 8 palavras)\n2. Uma breve introdução que resume o conteúdo do texto em 2-3 frases e desperta a curiosidade do leitor\n\nFormate sua resposta exatamente assim:\nHEADLINE: [Sua manchete aqui]\nINTRO: [Sua introdução aqui]', + // Russisch + ru: 'Вы помощник, который анализирует и резюмирует тексты. Ваша задача - создать две вещи для следующего текста:\n1. Короткий, лаконичный заголовок (максимум 8 слов)\n2. Краткое введение, которое резюмирует содержание текста в 2-3 предложениях и вызывает любопытство у читателя\n\nФорматируйте ваш ответ точно так:\nHEADLINE: [Ваш заголовок здесь]\nINTRO: [Ваше введение здесь]', + // Japanisch + ja: 'あなたはテキストを分析し要約するアシスタントです。次のテキストに対して2つのことを作成するのがあなたの仕事です:\n1. 短く簡潔な見出し(最大8語)\n2. テキストの内容を2-3文で要約し、読者の興味を引く短い導入文\n\n次のように正確にフォーマットしてください:\nHEADLINE: [ここにあなたの見出し]\nINTRO: [ここにあなたの導入文]', + // Koreanisch + ko: '당신은 텍스트를 분석하고 요약하는 어시스턴트입니다. 다음 텍스트에 대해 두 가지를 만드는 것이 당신의 임무입니다:\n1. 짧고 간결한 헤드라인 (최대 8단어)\n2. 텍스트의 내용을 2-3문장으로 요약하고 독자의 호기심을 자극하는 짧은 소개\n\n다음과 같이 정확히 형식을 맞춰주세요:\nHEADLINE: [여기에 당신의 헤드라인]\nINTRO: [여기에 당신의 소개]', + // Chinesisch (vereinfacht) + zh: '你是一个分析和总结文本的助手。你的任务是为以下文本创建两样东西:\n1. 一个简短、简洁的标题(最多8个词)\n2. 一个简短的介绍,用2-3句话总结文本内容并激发读者的好奇心\n\n请严格按照以下格式回答:\nHEADLINE: [你的标题]\nINTRO: [你的介绍]', + // Arabisch + ar: 'أنت مساعد يحلل ويلخص النصوص. مهمتك هي إنشاء شيئين للنص التالي:\n1. عنوان قصير ومقتضب (8 كلمات كحد أقصى)\n2. مقدمة مختصرة تلخص محتوى النص في 2-3 جمل وتثير فضول القارئ\n\nقم بتنسيق إجابتك بالضبط هكذا:\nHEADLINE: [عنوانك هنا]\nINTRO: [مقدمتك هنا]', + // Hindi + hi: 'आप एक सहायक हैं जो ग्रंथों का विश्लेषण और सारांश करते हैं। निम्नलिखित पाठ के लिए दो चीजें बनाना आपका कार्य है:\n1. एक संक्षिप्त, सटीक शीर्षक (अधिकतम 8 शब्द)\n2. एक संक्षिप्त परिचय जो पाठ की सामग्री को 2-3 वाक्यों में सारांशित करता है और पाठक में जिज्ञासा जगाता है\n\nअपना उत्तर बिल्कुल इस तरह से प्रारूपित करें:\nHEADLINE: [यहाँ आपका शीर्षक]\nINTRO: [यहाँ आपका परिचय]', + // Türkisch + tr: 'Metinleri analiz eden ve özetleyen bir asistansınız. 
Aşağıdaki metin için iki şey oluşturmak sizin göreviniz:\n1. Kısa, özlü bir başlık (maksimum 8 kelime)\n2. Metnin içeriğini 2-3 cümlede özetleyen ve okuyucuyu meraklandıran kısa bir giriş\n\nCevabınızı tam olarak şu şekilde biçimlendirin:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]', + // Polnisch + pl: 'Jesteś asystentem, który analizuje i streszcza teksty. Twoim zadaniem jest stworzenie dwóch rzeczy dla następującego tekstu:\n1. Krótki, zwięzły nagłówek (maksymalnie 8 słów)\n2. Krótkie wprowadzenie, które streszcza treść tekstu w 2-3 zdaniach i wzbudza ciekawość czytelnika\n\nSformatuj swoją odpowiedź dokładnie tak:\nHEADLINE: [Twój nagłówek tutaj]\nINTRO: [Twoje wprowadzenie tutaj]', + // Dänisch + da: 'Du er en assistent, der analyserer og sammenfatter tekster. Din opgave er at skabe to ting for følgende tekst:\n1. En kort, præcis overskrift (maksimalt 8 ord)\n2. En kort intro, der sammenfatter tekstens indhold i 2-3 sætninger og gør læseren nysgerrig\n\nFormatter dit svar præcis sådan:\nHEADLINE: [Din overskrift her]\nINTRO: [Dit intro her]', + // Schwedisch + sv: 'Du är en assistent som analyserar och sammanfattar texter. Din uppgift är att skapa två saker för följande text:\n1. En kort, koncis rubrik (maximalt 8 ord)\n2. En kort intro som sammanfattar textens innehåll i 2-3 meningar och gör läsaren nyfiken\n\nFormatera ditt svar exakt så här:\nHEADLINE: [Din rubrik här]\nINTRO: [Ditt intro här]', + // Norwegisch + nb: 'Du er en assistent som analyserer og oppsummerer tekster. Oppgaven din er å lage to ting for følgende tekst:\n1. En kort, presis overskrift (maksimalt 8 ord)\n2. En kort intro som oppsummerer tekstens innhold i 2-3 setninger og gjør leseren nysgjerrig\n\nFormater svaret ditt nøyaktig slik:\nHEADLINE: [Din overskrift her]\nINTRO: [Ditt intro her]', + // Finnisch + fi: 'Olet avustaja, joka analysoi ja tiivistää tekstejä. Tehtäväsi on luoda kaksi asiaa seuraavalle tekstille:\n1. Lyhyt, ytimekäs otsikko (enintään 8 sanaa)\n2. 
Lyhyt johdanto, joka tiivistää tekstin sisällön 2-3 lauseessa ja herättää lukijan uteliaisuuden\n\nMuotoile vastauksesi täsmälleen näin:\nHEADLINE: [Otsikkosi tähän]\nINTRO: [Johdantosi tähän]', + // Tschechisch + cs: 'Jste asistent, který analyzuje a shrnuje texty. Vaším úkolem je vytvořit dvě věci pro následující text:\n1. Krátký, stručný nadpis (maximálně 8 slov)\n2. Krátký úvod, který shrne obsah textu ve 2-3 větách a vzbudí zvědavost čtenáře\n\nNaformátujte svou odpověď přesně takto:\nHEADLINE: [Váš nadpis zde]\nINTRO: [Váš úvod zde]', + // Ungarisch + hu: 'Ön egy asszisztens, aki szövegeket elemez és összefoglal. Az Ön feladata, hogy két dolgot hozzon létre a következő szöveghez:\n1. Egy rövid, tömör címsor (maximum 8 szó)\n2. Egy rövid bevezető, amely 2-3 mondatban összefoglalja a szöveg tartalmát és felkelti az olvasó kíváncsiságát\n\nFormázza válaszát pontosan így:\nHEADLINE: [Az Ön címsora itt]\nINTRO: [Az Ön bevezetője itt]', + // Griechisch + el: 'Είστε ένας βοηθός που αναλύει και συνοψίζει κείμενα. Το καθήκον σας είναι να δημιουργήσετε δύο πράγματα για το ακόλουθο κείμενο:\n1. Έναν σύντομο, περιεκτικό τίτλο (μέγιστο 8 λέξεις)\n2. Μια σύντομη εισαγωγή που συνοψίζει το περιεχόμενο του κειμένου σε 2-3 προτάσεις και προκαλεί την περιέργεια του αναγνώστη\n\nΜορφοποιήστε την απάντησή σας ακριβώς έτσι:\nHEADLINE: [Ο τίτλος σας εδώ]\nINTRO: [Η εισαγωγή σας εδώ]', + // Hebräisch + he: 'אתה עוזר שמנתח ומסכם טקסטים. המשימה שלך היא ליצור שני דברים לטקסט הבא:\n1. כותרת קצרה ותמציתית (מקסימום 8 מילים)\n2. הקדמה קצרה שמסכמת את תוכן הטקסט ב-2-3 משפטים ומעוררת סקרנות אצל הקורא\n\nעצב את התשובה שלך בדיוק כך:\nHEADLINE: [הכותרת שלך כאן]\nINTRO: [ההקדמה שלך כאן]', + // Indonesisch + id: 'Anda adalah asisten yang menganalisis dan merangkum teks. Tugas Anda adalah membuat dua hal untuk teks berikut:\n1. Judul yang pendek dan ringkas (maksimal 8 kata)\n2. 
Intro singkat yang merangkum isi teks dalam 2-3 kalimat dan membuat pembaca penasaran\n\nFormat jawaban Anda persis seperti ini:\nHEADLINE: [Judul Anda di sini]\nINTRO: [Intro Anda di sini]', + // Thai + th: 'คุณเป็นผู้ช่วยที่วิเคราะห์และสรุปข้อความ งานของคุณคือการสร้างสองสิ่งสำหรับข้อความต่อไปนี้:\n1. หัวข้อที่สั้นและกระชับ (ไม่เกิน 8 คำ)\n2. บทนำสั้นๆ ที่สรุปเนื้อหาของข้อความใน 2-3 ประโยคและทำให้ผู้อ่านอยากรู้\n\nจัดรูปแบบคำตอบของคุณตามนี้เป๊ะๆ:\nHEADLINE: [หัวข้อของคุณที่นี่]\nINTRO: [บทนำของคุณที่นี่]', + // Vietnamesisch + vi: 'Bạn là một trợ lý phân tích và tóm tắt văn bản. Nhiệm vụ của bạn là tạo hai thứ cho văn bản sau:\n1. Một tiêu đề ngắn gọn và súc tích (tối đa 8 từ)\n2. Một phần giới thiệu ngắn tóm tắt nội dung văn bản trong 2-3 câu và khơi gợi sự tò mò của người đọc\n\nĐịnh dạng câu trả lời của bạn chính xác như thế này:\nHEADLINE: [Tiêu đề của bạn ở đây]\nINTRO: [Phần giới thiệu của bạn ở đây]', + // Ukrainisch + uk: 'Ви помічник, який аналізує та резюмує тексти. Ваше завдання - створити дві речі для наступного тексту:\n1. Короткий, лаконічний заголовок (максимум 8 слів)\n2. Короткий вступ, який резюмує зміст тексту у 2-3 реченнях та викликає цікавість у читача\n\nФорматуйте вашу відповідь точно так:\nHEADLINE: [Ваш заголовок тут]\nINTRO: [Ваш вступ тут]', + // Rumänisch + ro: 'Sunteți un asistent care analizează și rezumă texte. Sarcina dvs. este să creați două lucruri pentru următorul text:\n1. Un titlu scurt și concis (maximum 8 cuvinte)\n2. O scurtă introducere care rezumă conținutul textului în 2-3 propoziții și trezește curiozitatea cititorului\n\nFormatați răspunsul dvs. exact astfel:\nHEADLINE: [Titlul dvs. aici]\nINTRO: [Introducerea dvs. aici]', + // Bulgarisch + bg: 'Вие сте асистент, който анализира и резюмира текстове. Вашата задача е да създадете две неща за следния текст:\n1. Кратко, сбито заглавие (максимум 8 думи)\n2. 
Кратко въведение, което резюмира съдържанието на текста в 2-3 изречения и предизвиква любопитството на читателя\n\nФорматирайте отговора си точно така:\nHEADLINE: [Вашето заглавие тук]\nINTRO: [Вашето въведение тук]', + // Katalanisch + ca: 'Ets un assistent que analitza i resumeix textos. La teva tasca és crear dues coses per al següent text:\n1. Un títol breu i concís (màxim 8 paraules)\n2. Una breu introducció que resumeixi el contingut del text en 2-3 frases i desperti la curiositat del lector\n\nFormata la teva resposta exactament així:\nHEADLINE: [El teu títol aquí]\nINTRO: [La teva introducció aquí]', + // Kroatisch + hr: 'Vi ste asistent koji analizira i sažima tekstove. Vaš zadatak je stvoriti dvije stvari za sljedeći tekst:\n1. Kratak, sažet naslov (maksimalno 8 riječi)\n2. Kratak uvod koji sažima sadržaj teksta u 2-3 rečenice i pobuđuje znatiželju čitatelja\n\nFormatirajte svoj odgovor točno ovako:\nHEADLINE: [Vaš naslov ovdje]\nINTRO: [Vaš uvod ovdje]', + // Slowakisch + sk: 'Ste asistent, ktorý analyzuje a sumarizuje texty. Vašou úlohou je vytvoriť dve veci pre nasledujúci text:\n1. Krátky, stručný nadpis (maximálne 8 slov)\n2. Krátky úvod, ktorý sumarizuje obsah textu v 2-3 vetách a vzbudí zvedavosť čitateľa\n\nNaformátujte svoju odpoveď presne takto:\nHEADLINE: [Váš nadpis tu]\nINTRO: [Váš úvod tu]', + // Estnisch + et: 'Olete assistent, kes analüüsib ja kokkuvõtab tekste. Teie ülesanne on luua kaks asja järgmise teksti jaoks:\n1. Lühike, kokkuvõtlik pealkiri (maksimaalselt 8 sõna)\n2. Lühike sissejuhatus, mis võtab teksti sisu kokku 2-3 lauses ja äratab lugeja uudishimu\n\nVormistage oma vastus täpselt nii:\nHEADLINE: [Teie pealkiri siin]\nINTRO: [Teie sissejuhatus siin]', + // Lettisch + lv: 'Jūs esat asistents, kas analizē un apkopo tekstus. Jūsu uzdevums ir izveidot divas lietas šādam tekstam:\n1. Īsu, kodolīgu virsrakstu (maksimums 8 vārdi)\n2. 
Īsu ievadu, kas apkopo teksta saturu 2-3 teikumos un modina lasītāja ziņkāri\n\nFormatējiet savu atbildi tieši tā:\nHEADLINE: [Jūsu virsraksts šeit]\nINTRO: [Jūsu ievads šeit]', + // Litauisch + lt: 'Esate asistentas, kuris analizuoja ir apibendrina tekstus. Jūsų užduotis - sukurti du dalykus šiam tekstui:\n1. Trumpą, glaustą antraštę (ne daugiau kaip 8 žodžiai)\n2. Trumpą įvadą, kuris apibendrina teksto turinį 2-3 sakiniais ir žadina skaitytojo smalsumą\n\nSuformatuokite savo atsakymą tiksliai taip:\nHEADLINE: [Jūsų antraštė čia]\nINTRO: [Jūsų įvadas čia]', + // Bengalisch + bn: 'আপনি একজন সহায়ক যিনি পাঠ্য বিশ্লেষণ এবং সারসংক্ষেপ করেন। নিম্নলিখিত পাঠ্যের জন্য দুটি জিনিস তৈরি করা আপনার কাজ:\n1. একটি সংক্ষিপ্ত, সারগর্ভ শিরোনাম (সর্বোচ্চ ৮টি শব্দ)\n2. একটি সংক্ষিপ্ত ভূমিকা যা ২-৩টি বাক্যে পাঠ্যের বিষয়বস্তু সারসংক্ষেপ করে এবং পাঠকের কৌতূহল জাগায়\n\nআপনার উত্তর ঠিক এভাবে ফরম্যাট করুন:\nHEADLINE: [এখানে আপনার শিরোনাম]\nINTRO: [এখানে আপনার ভূমিকা]', + // Malaiisch + ms: 'Anda adalah pembantu yang menganalisis dan meringkaskan teks. Tugas anda adalah untuk mencipta dua perkara untuk teks berikut:\n1. Tajuk utama yang pendek dan padat (maksimum 8 perkataan)\n2. Pengenalan ringkas yang meringkaskan kandungan teks dalam 2-3 ayat dan menimbulkan rasa ingin tahu pembaca\n\nFormatkan jawapan anda tepat seperti ini:\nHEADLINE: [Tajuk utama anda di sini]\nINTRO: [Pengenalan anda di sini]', + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து சுருக்கும் உதவியாளர். பின்வரும் உரைக்கு இரண்டு விஷயங்களை உருவாக்குவது உங்கள் பணி:\n1. ஒரு குறுகிய, சுருக்கமான தலைப்பு (அதிகபட்சம் 8 வார்த்தைகள்)\n2. உரையின் உள்ளடக்கத்தை 2-3 வாக்கியங்களில் சுருக்கி வாசகரின் ஆர்வத்தை தூண்டும் குறுகிய அறிமுகம்\n\nஉங்கள் பதிலை சரியாக இப்படி வடிவமைக்கவும்:\nHEADLINE: [இங்கே உங்கள் தலைப்பு]\nINTRO: [இங்கே உங்கள் அறிமுகம்]', + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి సంక్షిప్తీకరించే సహాయకుడు. కింది టెక్స్ట్ కోసం రెండు విషయాలు సృష్టించడం మీ పని:\n1. ఒక చిన్న, సంక్షిప్త శీర్షిక (గరిష్టంగా 8 పదాలు)\n2. 
టెక్స్ట్ యొక్క కంటెంట్‌ను 2-3 వాక్యాలలో సంక్షిప్తీకరించి పాఠకుడిలో ఆసక్తిని రేకెత్తించే చిన్న పరిచయం\n\nమీ సమాధానాన్ని సరిగ్గా ఇలా ఫార్మాట్ చేయండి:\nHEADLINE: [ఇక్కడ మీ శీర్షిక]\nINTRO: [ఇక్కడ మీ పరిచయం]', + // Urdu + ur: 'آپ ایک معاون ہیں جو متن کا تجزیہ اور خلاصہ کرتے ہیں۔ مندرجہ ذیل متن کے لیے دو چیزیں بنانا آپ کا کام ہے:\n1. ایک مختصر، جامع سرخی (زیادہ سے زیادہ 8 الفاظ)\n2. ایک مختصر تعارف جو متن کے مواد کو 2-3 جملوں میں خلاصہ کرے اور قاری میں تجسس پیدا کرے\n\nاپنے جواب کو بالکل اس طرح فارمیٹ کریں:\nHEADLINE: [یہاں آپ کی سرخی]\nINTRO: [یہاں آپ کا تعارف]', + // Marathi + mr: 'तुम्ही मजकूरांचे विश्लेषण आणि सारांश करणारे सहाय्यक आहात. पुढील मजकुरासाठी दोन गोष्टी तयार करणे हे तुमचे काम आहे:\n1. एक लहान, संक्षिप्त मथळा (जास्तीत जास्त 8 शब्द)\n2. एक छोटी प्रस्तावना जी मजकुराची सामग्री 2-3 वाक्यांमध्ये सारांशित करते आणि वाचकामध्ये कुतूहल निर्माण करते\n\nतुमचे उत्तर अगदी अशा प्रकारे स्वरूपित करा:\nHEADLINE: [इथे तुमचा मथळा]\nINTRO: [इथे तुमची प्रस्तावना]', + // Gujarati + gu: 'તમે એક સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને સારાંશ કરે છે. નીચેના ટેક્સ્ટ માટે બે વસ્તુઓ બનાવવી એ તમારું કામ છે:\n1. એક ટૂંકું, સંક્ષિપ્ત હેડલાઇન (મહત્તમ 8 શબ્દો)\n2. એક ટૂંકો પરિચય જે ટેક્સ્ટની સામગ્રીને 2-3 વાક્યોમાં સારાંશ આપે અને વાચકમાં જિજ્ઞાસા જગાડે\n\nતમારા જવાબને બરાબર આ રીતે ફોર્મેટ કરો:\nHEADLINE: [અહીં તમારું હેડલાઇન]\nINTRO: [અહીં તમારો પરિચય]', + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും സംഗ്രഹിക്കുകയും ചെയ്യുന്ന ഒരു സഹായകനാണ്. ഇനിപ്പറയുന്ന വാചകത്തിനായി രണ്ട് കാര്യങ്ങൾ സൃഷ്ടിക്കുക എന്നതാണ് നിങ്ങളുടെ ജോലി:\n1. ഒരു ചെറിയ, സംക്ഷിപ്ത തലക്കെട്ട് (പരമാവധി 8 വാക്കുകൾ)\n2. വാചകത്തിന്റെ ഉള്ളടക്കം 2-3 വാക്യങ്ങളിൽ സംഗ്രഹിക്കുകയും വായനക്കാരനിൽ ജിജ്ഞാസ ഉണർത്തുകയും ചെയ്യുന്ന ഒരു ചെറിയ ആമുഖം\n\nനിങ്ങളുടെ ഉത്തരം കൃത്യമായി ഇപ്രകാരം ഫോർമാറ്റ് ചെയ്യുക:\nHEADLINE: [ഇവിടെ നിങ്ങളുടെ തലക്കെട്ട്]\nINTRO: [ഇവിടെ നിങ്ങളുടെ ആമുഖം]', + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಸಾರಾಂಶಗೊಳಿಸುವ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ಕೆಳಗಿನ ಪಠ್ಯಕ್ಕಾಗಿ ಎರಡು ವಿಷಯಗಳನ್ನು ರಚಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ:\n1. 
ಒಂದು ಸಣ್ಣ, ಸಂಕ್ಷಿಪ್ತ ಶೀರ್ಷಿಕೆ (ಗರಿಷ್ಠ 8 ಪದಗಳು)\n2. ಪಠ್ಯದ ವಿಷಯವನ್ನು 2-3 ವಾಕ್ಯಗಳಲ್ಲಿ ಸಾರಾಂಶಗೊಳಿಸುವ ಮತ್ತು ಓದುಗರಲ್ಲಿ ಕುತೂಹಲವನ್ನು ಹುಟ್ಟಿಸುವ ಒಂದು ಸಣ್ಣ ಪರಿಚಯ\n\nನಿಮ್ಮ ಉತ್ತರವನ್ನು ನಿಖರವಾಗಿ ಈ ರೀತಿ ಫಾರ್ಮ್ಯಾಟ್ ಮಾಡಿ:\nHEADLINE: [ಇಲ್ಲಿ ನಿಮ್ಮ ಶೀರ್ಷಿಕೆ]\nINTRO: [ಇಲ್ಲಿ ನಿಮ್ಮ ಪರಿಚಯ]', + // Punjabi + pa: 'ਤੁਸੀਂ ਇੱਕ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਸੰਖੇਪ ਕਰਦੇ ਹੋ। ਹੇਠਲੇ ਟੈਕਸਟ ਲਈ ਦੋ ਚੀਜ਼ਾਂ ਬਣਾਉਣਾ ਤੁਹਾਡਾ ਕੰਮ ਹੈ:\n1. ਇੱਕ ਛੋਟੀ, ਸੰਖੇਪ ਸਿਰਲੇਖ (ਵੱਧ ਤੋਂ ਵੱਧ 8 ਸ਼ਬਦ)\n2. ਇੱਕ ਛੋਟੀ ਜਾਣ-ਪਛਾਣ ਜੋ ਟੈਕਸਟ ਦੀ ਸਮੱਗਰੀ ਨੂੰ 2-3 ਵਾਕਾਂ ਵਿੱਚ ਸੰਖੇਪ ਕਰੇ ਅਤੇ ਪਾਠਕ ਵਿੱਚ ਉਤਸੁਕਤਾ ਪੈਦਾ ਕਰੇ\n\nਆਪਣੇ ਜਵਾਬ ਨੂੰ ਬਿਲਕੁਲ ਇਸ ਤਰ੍ਹਾਂ ਫਾਰਮੈਟ ਕਰੋ:\nHEADLINE: [ਇੱਥੇ ਤੁਹਾਡੀ ਸਿਰਲੇਖ]\nINTRO: [ਇੱਥੇ ਤੁਹਾਡੀ ਜਾਣ-ਪਛਾਣ]', + // Afrikaans + af: "Jy is 'n assistent wat tekste ontleed en opsom. Jou taak is om twee dinge vir die volgende teks te skep:\n1. 'n Kort, bondige opskrif (maksimum 8 woorde)\n2. 'n Kort inleiding wat die inhoud van die teks in 2-3 sinne opsom en die leser nuuskierig maak\n\nFormateer jou antwoord presies so:\nHEADLINE: [Jou opskrif hier]\nINTRO: [Jou inleiding hier]", + // Persisch/Farsi + fa: 'شما دستیاری هستید که متون را تجزیه و تحلیل و خلاصه می‌کند. وظیفه شما ایجاد دو چیز برای متن زیر است:\n1. یک عنوان کوتاه و مختصر (حداکثر 8 کلمه)\n2. یک مقدمه کوتاه که محتوای متن را در 2-3 جمله خلاصه کند و کنجکاوی خواننده را برانگیزد\n\nپاسخ خود را دقیقاً به این شکل قالب‌بندی کنید:\nHEADLINE: [عنوان شما اینجا]\nINTRO: [مقدمه شما اینجا]', + // Georgisch + ka: 'თქვენ ხართ ასისტენტი, რომელიც აანალიზებს და აჯამებს ტექსტებს. თქვენი ამოცანაა შემდეგი ტექსტისთვის ორი რამ შექმნათ:\n1. მოკლე, ლაკონური სათაური (მაქსიმუმ 8 სიტყვა)\n2. მოკლე შესავალი, რომელიც აჯამებს ტექსტის შინაარსს 2-3 წინადადებაში და აღძრავს მკითხველის ცნობისმოყვარეობას\n\nგააფორმეთ თქვენი პასუხი ზუსტად ასე:\nHEADLINE: [თქვენი სათაური აქ]\nINTRO: [თქვენი შესავალი აქ]', + // Isländisch + is: 'Þú ert aðstoðarmaður sem greinir og dregur saman texta. Verkefni þitt er að búa til tvö hluti fyrir eftirfarandi texta:\n1. Stuttan, hnitmiðaðan fyrirsögn (að hámarki 8 orð)\n2. 
Stutta inngang sem dregur saman efni textans í 2-3 setningum og vekur forvitni lesandans\n\nSníðdu svarið þitt nákvæmlega svona:\nHEADLINE: [Fyrirsögnin þín hér]\nINTRO: [Inngangurinn þinn hér]', + // Albanisch + sq: 'Ju jeni një asistent që analizon dhe përmbledh tekste. Detyra juaj është të krijoni dy gjëra për tekstin e mëposhtëm:\n1. Një titull të shkurtër dhe të përqendruar (maksimumi 8 fjalë)\n2. Një hyrje të shkurtër që përmbledh përmbajtjen e tekstit në 2-3 fjali dhe ngjall kuriozitetin e lexuesit\n\nFormatoni përgjigjen tuaj saktësisht kështu:\nHEADLINE: [Titulli juaj këtu]\nINTRO: [Hyrja juaj këtu]', + // Aserbaidschanisch + az: 'Siz mətnləri təhlil edən və xülasə çıxaran köməkçisiniz. Sizin vəzifəniz aşağıdakı mətn üçün iki şey yaratmaqdır:\n1. Qısa, dəqiq başlıq (maksimum 8 söz)\n2. Mətnin məzmununu 2-3 cümlədə xülasə edən və oxucunun marağını oyadan qısa giriş\n\nCavabınızı dəqiq belə formatlaşdırın:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]', + // Baskisch + eu: 'Testuak aztertzen eta laburbildu egiten dituen laguntzaile bat zara. Zure zeregina honako testuarentzat bi gauza sortzea da:\n1. Izenburua labur eta zehatza (gehienez 8 hitz)\n2. Testuaren edukia 2-3 esalditan laburbiltzen duen eta irakurlearen jakin-mina piztuko duen sarrera laburra\n\nErantzuna zehatz-mehatz honela formateatu:\nHEADLINE: [Zure izenburua hemen]\nINTRO: [Zure sarrera hemen]', + // Galizisch + gl: 'Es un asistente que analiza e resume textos. A túa tarefa é crear dúas cousas para o seguinte texto:\n1. Un título breve e conciso (máximo 8 palabras)\n2. Unha breve introdución que resuma o contido do texto en 2-3 frases e esperte a curiosidade do lector\n\nFormatea a túa resposta exactamente así:\nHEADLINE: [O teu título aquí]\nINTRO: [A túa introdución aquí]', + // Kasachisch + kk: 'Сіз мәтіндерді талдайтын және қорытындылайтын көмекшісіз. Сіздің міндетіңіз келесі мәтін үшін екі нәрсе жасау:\n1. Қысқа, нақты тақырып (ең көбі 8 сөз)\n2. 
Мәтін мазмұнын 2-3 сөйлемде қорытындылайтын және оқырманның қызығушылығын туғызатын қысқа кіріспе\n\nЖауабыңызды дәл осылай пішімдеңіз:\nHEADLINE: [Мұнда сіздің тақырыбыңыз]\nINTRO: [Мұнда сіздің кіріспеңіз]', + // Mazedonisch + mk: 'Вие сте асистент кој анализира и резимира текстови. Вашата задача е да создадете две работи за следниот текст:\n1. Краток, јасен наслов (максимум 8 зборови)\n2. Краток вовед кој ја резимира содржината на текстот во 2-3 реченици и ја буди љубопитноста на читателот\n\nФорматирајте го вашиот одговор точно вака:\nHEADLINE: [Вашиот наслов тука]\nINTRO: [Вашиот вовед тука]', + // Serbisch + sr: 'Ви сте асистент који анализира и резимира текстове. Ваш задатак је да направите две ствари за следећи текст:\n1. Кратак, јасан наслов (максимум 8 речи)\n2. Кратак увод који резимира садржај текста у 2-3 реченице и буди радозналост читаоца\n\nФорматирајте ваш одговор тачно овако:\nHEADLINE: [Ваш наслов овде]\nINTRO: [Ваш увод овде]', + // Slowenisch + sl: 'Ste pomočnik, ki analizira in povzema besedila. Vaša naloga je ustvariti dve stvari za naslednje besedilo:\n1. Kratek, jedrnat naslov (največ 8 besed)\n2. Kratek uvod, ki povzema vsebino besedila v 2-3 stavkih in prebudi radovednost bralca\n\nOblikujte svoj odgovor natanko tako:\nHEADLINE: [Vaš naslov tukaj]\nINTRO: [Vaš uvod tukaj]', + // Maltesisch + mt: "Inti assistent li janalizza u jissommarja testi. Il-kompitu tiegħek huwa li toħloq żewġ affarijiet għat-test li ġej:\n1. Intestatura qasira u konċiza (massimu 8 kliem)\n2. Introduzzjoni qasira li tissommarja l-kontenut tat-test f'2-3 sentenzi u tqajjem il-kurżità tal-qarrej\n\nFormatja t-tweġiba tiegħek eżattament hekk:\nHEADLINE: [L-intestatura tiegħek hawn]\nINTRO: [L-introduzzjoni tiegħek hawn]", + // Armenisch + hy: 'Դուք օգնական եք, որը վերլուծում և ամփոփում է տեքստեր: Ձեր խնդիրն է ստեղծել երկու բան հետևյալ տեքստի համար:\n1. Կարճ, հակիրճ վերնագիր (առավելագույնը 8 բառ)\n2. 
Կարճ ներածություն, որը ամփոփում է տեքստի բովանդակությունը 2-3 նախադասությամբ և արթնացնում ընթերցողի հետաքրքրությունը\n\nՁևակերպեք ձեր պատասխանը հենց այսպես:\nHEADLINE: [Ձեր վերնագիրը այստեղ]\nINTRO: [Ձեր ներածությունը այստեղ]', + // Usbekisch + uz: "Siz matnlarni tahlil qiluvchi va xulosa chiqaruvchi yordamchisiz. Sizning vazifangiz quyidagi matn uchun ikki narsa yaratishdir:\n1. Qisqa, aniq sarlavha (maksimal 8 so'z)\n2. Matn mazmunini 2-3 jumlada xulosa qiladigan va o'quvchining qiziqishini uyg'otadigan qisqa kirish\n\nJavobingizni aynan shunday formatlang:\nHEADLINE: [Bu yerda sizning sarlavhangiz]\nINTRO: [Bu yerda sizning kirishingiz]", + // Irisch + ga: 'Is cúntóir thú a dhéanann anailís agus achoimre ar théacsanna. Is é do thasc dhá rud a chruthú don téacs seo a leanas:\n1. Ceannlíne ghearr, ghonta (8 bhfocal ar a mhéad)\n2. Réamhrá gearr a dhéanann achoimre ar ábhar an téacs i 2-3 abairt agus a spreagann fiosracht an léitheora\n\nFormáidigh do fhreagra díreach mar seo:\nHEADLINE: [Do cheannlíne anseo]\nINTRO: [Do réamhrá anseo]', + // Walisisch + cy: "Rydych chi'n gynorthwyydd sy'n dadansoddi ac yn crynhoi testunau. Eich tasg yw creu dau beth ar gyfer y testun canlynol:\n1. Pennawd byr, cryno (uchafswm o 8 gair)\n2. Cyflwyniad byr sy'n crynhoi cynnwys y testun mewn 2-3 brawddeg ac yn ennyn chwilfrydedd y darllenydd\n\nFformatiwch eich ateb yn union fel hyn:\nHEADLINE: [Eich pennawd yma]\nINTRO: [Eich cyflwyniad yma]", + // Filipino + fil: 'Ikaw ay isang katulong na nag-aanalisa at bumubuod ng mga teksto. Ang iyong gawain ay lumikha ng dalawang bagay para sa sumusunod na teksto:\n1. Maikling, malinaw na pamagat (hindi hihigit sa 8 salita)\n2. 
Maikling panimula na bumubuod sa nilalaman ng teksto sa 2-3 pangungusap at nakakagising ng kuryosidad ng mambabasa\n\nI-format ang iyong sagot nang eksakto tulad nito:\nHEADLINE: [Ang iyong pamagat dito]\nINTRO: [Ang iyong panimula dito]', + }, +}; +/** + * Hilfsfunktion zum Abrufen des Headline-Prompts für eine bestimmte Sprache + * @param language Sprache (z.B. 'de', 'en', 'fr') + * @returns Headline-Prompt für die angegebene Sprache oder Fallback + */ export function getHeadlinePrompt(language: string): string { + const lang = language.toLowerCase().split('-')[0]; // z.B. 'de-DE' -> 'de' + // Versuche spezifische Sprache, dann Deutsch, dann Englisch, dann erste verfügbare + return ( + SYSTEM_PROMPTS.headline[lang] || + SYSTEM_PROMPTS.headline['de'] || + SYSTEM_PROMPTS.headline['en'] || + Object.values(SYSTEM_PROMPTS.headline)[0] || + 'You are an assistant that analyzes and summarizes texts.' + ); +} diff --git a/apps/memoro/apps/backend/src/ai/headline/headline.service.ts b/apps/memoro/apps/backend/src/ai/headline/headline.service.ts new file mode 100644 index 000000000..bf9fd28fc --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/headline/headline.service.ts @@ -0,0 +1,239 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import { AiService } from '../ai.service'; +import { AI_PRESETS } from '../ai-model.config'; +import { SYSTEM_PROMPTS } from './headline.prompts'; + +@Injectable() +export class HeadlineService { + private readonly logger = new Logger(HeadlineService.name); + private readonly supabaseUrl: string; + private readonly supabaseServiceKey: string; + + constructor( + private aiService: AiService, + private configService: ConfigService + ) { + this.supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL', ''); + this.supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY', ''); + } + + /** + * Generiert Headline + Intro aus einem 
Transkript. + */ + async generateHeadlineAndIntro( + transcript: string, + language = 'de' + ): Promise<{ headline: string; intro: string }> { + const prompt = this.buildPrompt(transcript, language); + + try { + const content = await this.aiService.generateText(prompt, AI_PRESETS.headline); + + const result = this.parseResponse(content); + this.logger.debug(`Headline generated: "${result.headline}" (lang=${language})`); + return result; + } catch (error) { + this.logger.error( + `Headline generation failed: ${error instanceof Error ? error.message : error}` + ); + return { headline: 'Neue Aufnahme', intro: 'Keine Zusammenfassung verfügbar.' }; + } + } + + /** + * Vollständige Pipeline: Memo laden → Headline generieren → Memo updaten → Broadcast senden. + */ + async processHeadlineForMemo(memoId: string): Promise<{ headline: string; intro: string }> { + const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey); + + // Set processing status + await this.setProcessingStatus(supabase, memoId, 'processing'); + + try { + // Memo laden + const { data: memo, error: memoError } = await supabase + .from('memos') + .select('*') + .eq('id', memoId) + .single(); + + if (memoError || !memo) { + throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`); + } + + // Transkript extrahieren + const transcript = this.extractTranscript(memo); + if (!transcript) { + await this.setErrorStatus(supabase, memoId, 'Kein Transkript im Memo gefunden'); + throw new Error('No transcript found in memo'); + } + + // Sprache ermitteln + const language = this.detectLanguage(memo); + + // Headline generieren + const { headline, intro } = await this.generateHeadlineAndIntro(transcript, language); + + // Memo updaten + const { error: updateError } = await supabase + .from('memos') + .update({ + title: headline, + intro, + updated_at: new Date().toISOString(), + }) + .eq('id', memoId); + + if (updateError) { + throw new Error(`Memo update failed: ${updateError.message}`); + } 
+ + // Broadcast senden (fire & forget) + this.sendBroadcast(supabase, memoId, headline, intro).catch((err) => + this.logger.warn(`Broadcast failed for memo ${memoId}: ${err}`) + ); + + // Status auf completed setzen + await this.setCompletedStatus(supabase, memoId, { headline, intro, language }); + + this.logger.log(`Headline processed for memo ${memoId}: "${headline}"`); + return { headline, intro }; + } catch (error) { + const msg = error instanceof Error ? error.message : String(error); + await this.setErrorStatus(supabase, memoId, msg); + throw error; + } + } + + // ── Private Helpers ── + + private buildPrompt(transcript: string, language: string): string { + const baseLanguage = language.split('-')[0].toLowerCase(); + const systemPrompt = + SYSTEM_PROMPTS.headline[baseLanguage] || + SYSTEM_PROMPTS.headline['de'] || + SYSTEM_PROMPTS.headline['en']; + return `${systemPrompt}\n\n${transcript}`; + } + + private parseResponse(content: string): { headline: string; intro: string } { + const headlineMatch = content.match(/HEADLINE:\s*(.+?)(?=\nINTRO:|$)/s); + const introMatch = content.match(/INTRO:\s*(.+?)$/s); + return { + headline: headlineMatch?.[1]?.trim() || 'Neue Aufnahme', + intro: introMatch?.[1]?.trim() || 'Keine Zusammenfassung verfügbar.', + }; + } + + private extractTranscript(memo: any): string { + // Utterances (bevorzugt) + if (memo.source?.utterances?.length > 0) { + return [...memo.source.utterances] + .sort((a: any, b: any) => (a.offset || 0) - (b.offset || 0)) + .map((u: any) => u.text) + .filter(Boolean) + .join(' '); + } + + // Direkte Transkript-Felder + if (memo.transcript) return memo.transcript; + if (memo.source?.transcript) return memo.source.transcript; + if (memo.source?.content) return memo.source.content; + + // Kombinierte Aufnahmen + if (memo.source?.type === 'combined' && memo.source?.additional_recordings) { + return memo.source.additional_recordings + .map((rec: any) => { + if (rec.utterances?.length > 0) { + return 
[...rec.utterances] + .sort((a: any, b: any) => (a.offset || 0) - (b.offset || 0)) + .map((u: any) => u.text) + .filter(Boolean) + .join(' '); + } + return rec.transcript || ''; + }) + .filter(Boolean) + .join('\n\n'); + } + + return ''; + } + + private detectLanguage(memo: any): string { + if (memo.source?.primary_language) return memo.source.primary_language; + if (memo.source?.languages?.[0]) return memo.source.languages[0]; + if (memo.metadata?.primary_language) return memo.metadata.primary_language; + return 'de'; + } + + private async setProcessingStatus(supabase: any, memoId: string, status: string): Promise<void> { + try { + await supabase.rpc('set_memo_process_status', { + p_memo_id: memoId, + p_process_name: 'headline_and_intro', + p_status: status, + p_timestamp: new Date().toISOString(), + }); + } catch (err) { + this.logger.error(`Failed to set processing status for ${memoId}: ${err}`); + } + } + + private async setCompletedStatus(supabase: any, memoId: string, details: any): Promise<void> { + try { + await supabase.rpc('set_memo_process_status_with_details', { + p_memo_id: memoId, + p_process_name: 'headline_and_intro', + p_status: 'completed', + p_timestamp: new Date().toISOString(), + p_details: details, + }); + } catch (err) { + this.logger.error(`Failed to set completed status for ${memoId}: ${err}`); + } + } + + private async setErrorStatus(supabase: any, memoId: string, errorMsg: string): Promise<void> { + try { + await supabase.rpc('set_memo_process_error', { + p_memo_id: memoId, + p_process_name: 'headline_and_intro', + p_timestamp: new Date().toISOString(), + p_reason: errorMsg, + p_details: null, + }); + } catch (err) { + this.logger.error(`Failed to set error status for ${memoId}: ${err}`); + } + } + + private async sendBroadcast( + supabase: any, + memoId: string, + headline: string, + intro: string + ): Promise<void> { + const channel = supabase.channel(`memo-updates-${memoId}`); + await new Promise<void>((resolve) => { + channel.subscribe(async (status: string) => { 
+ if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + type: 'memo-updated', + memoId, + changes: { title: headline, intro, updated_at: new Date().toISOString() }, + source: 'headline-ai-service', + }, + }); + supabase.removeChannel(channel); + resolve(); + } + }); + }); + } +} diff --git a/apps/memoro/apps/backend/src/ai/memory/memory.prompts.ts b/apps/memoro/apps/backend/src/ai/memory/memory.prompts.ts new file mode 100644 index 000000000..097016282 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/memory/memory.prompts.ts @@ -0,0 +1,75 @@ +/** + * System-Prompts für die Memory-Erstellung in verschiedenen Sprachen + * + * Die Prompts werden als System-Prompt für die AI-Nachrichten verwendet, + * um konsistente und hilfreiche Antworten zu generieren. + */ /** + * Interface für die Prompt-Konfiguration + */ /** + * System-Prompts für die Memory-Erstellung + * + * Unterstützte Sprachen: + * - de: Deutsch + * - en: Englisch + * - fr: Französisch + * - es: Spanisch + * - it: Italienisch + * - nl: Niederländisch + * - pt: Portugiesisch + * - ru: Russisch + * - ja: Japanisch + * - ko: Koreanisch + * - zh: Chinesisch + * - ar: Arabisch + * - hi: Hindi + * - tr: Türkisch + * - pl: Polnisch + */ export const SYSTEM_PROMPTS = { + system: { + // Deutsch + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte präzise, strukturiert und hilfreich. Antworte in plain text.', + // Englisch + en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process transcripts of conversations according to the given instructions. Respond precisely, structured, and helpfully. Respond in plain text.', + // Französisch + fr: 'Vous êtes un assistant utile qui analyse et traite les textes. 
Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez de manière précise, structurée et utile. Répondez en texte brut.', + // Spanisch + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde de forma precisa, estructurada y útil. Responde en texto plano.', + // Italienisch + it: 'Sei un assistente utile che analizza e elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni date. Rispondi in modo preciso, strutturato e utile. Rispondi in testo semplice.', + // Niederländisch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord precies, gestructureerd en behulpzaam. Antwoord in platte tekst.', + // Portugiesisch + pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções dadas. Responda de forma precisa, estruturada e útil. Responda em texto simples.', + // Russisch + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров согласно данным инструкциям. Отвечайте точно, структурированно и полезно. Отвечайте простым текстом.', + // Japanisch + ja: 'あなたはテキストを分析・処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の転写を処理することです。正確で構造化された有用な回答をしてください。プレーンテキストで回答してください。', + // Koreanisch + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화의 전사본을 처리하는 것입니다. 정확하고 구조화되며 도움이 되는 방식으로 응답하세요. 일반 텍스트로 응답하세요.', + // Chinesisch (vereinfacht) + zh: '你是一个有用的助手,负责分析和处理文本。你的任务是根据给定的指令处理对话的转录。请准确、结构化、有帮助地回答。请用纯文本回答。', + // Arabisch + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نسخ المحادثات وفقاً للتعليمات المقدمة. أجب بدقة وبطريقة منظمة ومفيدة. 
أجب بنص عادي.', + // Hindi + hi: 'आप एक उपयोगी सहायक हैं जो पाठों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार बातचीत के प्रतिलेख को संसाधित करना है। सटीक, संरचित और सहायक तरीके से उत्तर दें। सादे पाठ में उत्तर दें।', + // Türkisch + tr: 'Metinleri analiz eden ve işleyen yararlı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Kesin, yapılandırılmış ve yararlı şekilde yanıt verin. Düz metin olarak yanıt verin.', + // Polnisch + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj precyzyjnie, uporządkowanie i pomocnie. Odpowiadaj zwykłym tekstem.', + }, +}; +/** + * Hilfsfunktion zum Abrufen des System-Prompts für eine bestimmte Sprache + * @param language Sprache (z.B. 'de', 'en', 'fr') + * @returns System-Prompt für die angegebene Sprache oder Fallback + */ export function getSystemPrompt(language: string): string { + const lang = language.toLowerCase().split('-')[0]; // z.B. 'de-DE' -> 'de' + // Versuche spezifische Sprache, dann Deutsch, dann Englisch, dann erste verfügbare + return ( + SYSTEM_PROMPTS.system[lang] || + SYSTEM_PROMPTS.system['de'] || + SYSTEM_PROMPTS.system['en'] || + Object.values(SYSTEM_PROMPTS.system)[0] || + 'You are a helpful AI assistant.' 
+ ); +} diff --git a/apps/memoro/apps/backend/src/ai/memory/memory.service.ts b/apps/memoro/apps/backend/src/ai/memory/memory.service.ts new file mode 100644 index 000000000..1fb574a58 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/memory/memory.service.ts @@ -0,0 +1,138 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import { AiService } from '../ai.service'; +import { AI_PRESETS } from '../ai-model.config'; +import { getTranscriptText } from '../shared/transcript-utils'; +import { UserPromptService } from '../shared/user-prompt.service'; + +@Injectable() +export class MemoryService { + private readonly logger = new Logger(MemoryService.name); + private readonly supabaseUrl: string; + private readonly supabaseServiceKey: string; + + constructor( + private aiService: AiService, + private userPromptService: UserPromptService, + private configService: ConfigService + ) { + this.supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL', ''); + this.supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY', ''); + } + + /** + * Erstellt eine Memory für ein Memo mit einem spezifischen Prompt. + * Repliziert die create-memory Edge Function. 
+ */ + async createMemory( + memoId: string, + promptId: string + ): Promise<{ memoryId: string; title: string; content: string }> { + const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey); + + // Memo laden + const { data: memo, error: memoError } = await supabase + .from('memos') + .select('*') + .eq('id', memoId) + .single(); + + if (memoError || !memo) { + throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`); + } + + // Prompt laden + const { data: prompt, error: promptError } = await supabase + .from('prompts') + .select('*') + .eq('id', promptId) + .single(); + + if (promptError || !prompt) { + throw new Error(`Prompt not found: ${promptError?.message || 'unknown'}`); + } + + // Transkript extrahieren + const transcript = getTranscriptText(memo); + if (!transcript) { + throw new Error('No transcript found in memo'); + } + + // Sprache ermitteln + const primaryLanguage = memo.source?.primary_language || memo.source?.languages?.[0]; + const baseLang = primaryLanguage ? primaryLanguage.split('-')[0].toLowerCase() : 'de'; + + // Prompt-Text extrahieren (mehrsprachig) + let promptText = this.getLocalizedText(prompt.prompt_text, baseLang); + if (!promptText) { + throw new Error(`No prompt text found for prompt ${promptId}`); + } + + // System Pre-Prompt voranstellen (User-spezifisch oder Default) + const prePrompt = await this.userPromptService.getSystemPromptForMemo(memo.user_id, baseLang); + if (prePrompt) { + promptText = `${prePrompt}\n\n${promptText}`; + } + + // Memory-Titel extrahieren + const memoryTitle = this.getLocalizedText(prompt.memory_title, baseLang) || 'Memory'; + + // Prompt mit Transkript zusammenbauen + const fullPrompt = promptText.includes('{transcript}') + ? 
promptText.replace('{transcript}', transcript) + : `${promptText}\n\nText: ${transcript}`; + + // AI-Antwort generieren + const answer = await this.aiService.generateText(fullPrompt, AI_PRESETS.memory); + + if (!answer) { + throw new Error('No response from AI'); + } + + // Sort-Order ermitteln + const { data: maxSortData } = await supabase + .from('memories') + .select('sort_order') + .eq('memo_id', memoId) + .order('sort_order', { ascending: false }) + .limit(1) + .single(); + + const nextSortOrder = maxSortData?.sort_order + ? maxSortData.sort_order + 1 + : Math.floor(Math.random() * 5000) + 5000; + + // Memory speichern + const { data: newMemory, error: insertError } = await supabase + .from('memories') + .insert({ + memo_id: memoId, + title: memoryTitle, + content: answer, + media: null, + sort_order: nextSortOrder, + metadata: { + type: 'manual_prompt', + prompt_id: promptId, + created_by: 'ai_memory_service', + }, + }) + .select() + .single(); + + if (insertError) { + throw new Error(`Failed to create memory: ${insertError.message}`); + } + + this.logger.log(`Memory created: ${newMemory.id} for memo ${memoId} (prompt: ${promptId})`); + return { memoryId: newMemory.id, title: memoryTitle, content: answer }; + } + + private getLocalizedText(textObj: any, lang: string): string { + if (!textObj || typeof textObj !== 'object') return ''; + return ( + textObj[lang] || textObj['de'] || textObj['en'] || (Object.values(textObj)[0] as string) || '' + ); + } +} diff --git a/apps/memoro/apps/backend/src/ai/memory/question.service.ts b/apps/memoro/apps/backend/src/ai/memory/question.service.ts new file mode 100644 index 000000000..8890c8183 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/memory/question.service.ts @@ -0,0 +1,195 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import { AiService } from '../ai.service'; +import { AI_PRESETS } from 
'../ai-model.config'; +import { UserPromptService } from '../shared/user-prompt.service'; + +@Injectable() +export class QuestionService { + private readonly logger = new Logger(QuestionService.name); + private readonly supabaseUrl: string; + private readonly supabaseServiceKey: string; + + constructor( + private aiService: AiService, + private userPromptService: UserPromptService, + private configService: ConfigService + ) { + this.supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL', ''); + this.supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY', ''); + } + + /** + * Beantwortet eine Frage zu einem Memo und speichert die Antwort als Memory. + * Repliziert die question-memo Edge Function. + */ + async askQuestion( + memoId: string, + question: string + ): Promise<{ memoryId: string; question: string; answer: string }> { + const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey); + + // Memo laden + const { data: memo, error: memoError } = await supabase + .from('memos') + .select('*') + .eq('id', memoId) + .single(); + + if (memoError || !memo) { + throw new Error(`Memo not found: ${memoError?.message || 'unknown'}`); + } + + // Kontext-Informationen extrahieren + const contextInfo = this.extractContextInfo(memo.source, memo.metadata); + if (!contextInfo.transcript) { + throw new Error('No transcript found in memo'); + } + + // Sprache ermitteln + const primaryLanguage = memo.source?.primary_language || memo.source?.languages?.[0]; + const baseLang = primaryLanguage ? 
primaryLanguage.split('-')[0].toLowerCase() : 'de'; + + // System-Prompt laden (User-spezifisch oder Default) + const prePrompt = await this.userPromptService.getSystemPromptForMemo(memo.user_id, baseLang); + + // Prompt zusammenbauen + const prompt = this.buildQuestionPrompt(question, contextInfo, prePrompt); + + // AI-Antwort generieren + const answer = await this.aiService.generateText(prompt, AI_PRESETS.memory); + + if (!answer) { + throw new Error('No response from AI'); + } + + // Sort-Order ermitteln (Q&A range: 200-299) + const { data: maxSortData } = await supabase + .from('memories') + .select('sort_order') + .eq('memo_id', memoId) + .order('sort_order', { ascending: false }) + .limit(1) + .single(); + + const nextSortOrder = maxSortData?.sort_order ? maxSortData.sort_order + 1 : 200; + + // Memory speichern + const { data: newMemory, error: insertError } = await supabase + .from('memories') + .insert({ + memo_id: memoId, + title: question, + content: answer, + media: null, + sort_order: nextSortOrder, + metadata: { + type: 'question', + question, + created_by: 'ai_question_service', + }, + }) + .select() + .single(); + + if (insertError) { + throw new Error(`Failed to create memory: ${insertError.message}`); + } + + this.logger.log(`Question answered for memo ${memoId}: "${question.substring(0, 50)}..."`); + return { memoryId: newMemory.id, question, answer }; + } + + private buildQuestionPrompt(question: string, contextInfo: any, prePrompt: string): string { + const contextParts: string[] = []; + + if (contextInfo.locationName) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationName}`); + } else if (contextInfo.locationAddress) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationAddress}`); + } + + const statsInfo: string[] = []; + if (contextInfo.hasMultipleSpeakers) { + statsInfo.push(`${contextInfo.speakerCount} Sprecher`); + } + statsInfo.push(`${Math.round(contextInfo.duration)}s Dauer`); + if (contextInfo.wordCount) { + 
statsInfo.push(`${contextInfo.wordCount} Wörter`); + } + contextParts.push(`Audio-Info: ${statsInfo.join(', ')}`); + + const contextFooter = + contextParts.length > 0 + ? `\n\nZusätzliche Kontext-Informationen:\n${contextParts.join('\n')}` + : ''; + + const userPrompt = `Frage: ${question}\n\nTranskript:\n${contextInfo.transcript}${contextFooter}\n\n${contextInfo.hasMultipleSpeakers ? 'Du kannst bei Bedarf auf spezifische Sprecher verweisen.' : ''}`; + + return prePrompt ? `${prePrompt}\n\n${userPrompt}` : userPrompt; + } + + private extractContextInfo(source: any, metadata: any = {}): any { + const transcript = this.formatTranscriptWithSpeakers(source); + + let speakerCount = 0; + let totalDuration = 0; + const language = source?.primary_language || source?.languages?.[0] || 'unbekannt'; + + if (source?.type === 'combined' && source?.additional_recordings) { + const allSpeakers = new Set(); + for (const rec of source.additional_recordings) { + if (rec.speakers) { + Object.keys(rec.speakers).forEach((id) => allSpeakers.add(id)); + } + if (rec.duration) totalDuration += rec.duration; + } + speakerCount = allSpeakers.size; + totalDuration = source.duration || totalDuration; + } else { + speakerCount = source?.speakers ? 
Object.keys(source.speakers).length : 0; + totalDuration = source?.duration || 0; + } + + return { + transcript, + duration: metadata?.stats?.audioDuration || totalDuration, + speakerCount, + wordCount: metadata?.stats?.wordCount || null, + language, + locationName: metadata?.location?.address?.name || null, + locationAddress: metadata?.location?.address?.formattedAddress || null, + hasMultipleSpeakers: speakerCount > 1, + hasLocation: !!( + metadata?.location?.address?.name || metadata?.location?.address?.formattedAddress + ), + }; + } + + private formatTranscriptWithSpeakers(source: any): string { + if (source?.type === 'combined' && source?.additional_recordings?.length > 0) { + const transcripts = source.additional_recordings + .map((rec: any) => { + if (rec.utterances?.length > 0) { + return rec.speakers + ? rec.utterances + .map((u: any) => `${rec.speakers[u.speakerId] || u.speakerId}: ${u.text}`) + .join('\n') + : rec.utterances.map((u: any) => u.text).join(' '); + } + return rec.transcript || rec.content || rec.transcription || ''; + }) + .filter(Boolean); + if (transcripts.length > 0) return transcripts.join('\n\n--- Nächstes Memo ---\n\n'); + } + + if (source?.utterances?.length > 0 && source?.speakers) { + return source.utterances + .map((u: any) => `${source.speakers[u.speakerId] || u.speakerId}: ${u.text}`) + .join('\n'); + } + + return source?.transcript || source?.content || source?.transcription || ''; + } +} diff --git a/apps/memoro/apps/backend/src/ai/shared/system-prompts.ts b/apps/memoro/apps/backend/src/ai/shared/system-prompts.ts new file mode 100644 index 000000000..2f5eb38b1 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/shared/system-prompts.ts @@ -0,0 +1,199 @@ +/** + * Root system prompts for all Edge Functions + * + * These prompts are used as the basis for all text analysis and processing functions. + * Each language has its own prompt that accounts for its specific requirements.
+ */ + +export const ROOT_SYSTEM_PROMPTS = { + PRE_PROMPT: { + // Deutsch + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte in Markdown mit einem schönen Format. Nutze keine Tabellen und keinen Code in Markdown. Antworte präzise, strukturiert und hilfreich.', + + // Englisch + en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process conversation transcripts according to the given instructions. Respond in Markdown with a nice format. Do not use tables or code in Markdown. Respond precisely, structured, and helpfully.', + + // Französisch + fr: "Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez en Markdown avec un beau format. N'utilisez pas de tableaux ou de code en Markdown. Répondez de manière précise, structurée et utile.", + + // Spanisch + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde en Markdown con un formato atractivo. No uses tablas o código en Markdown. Responde de manera precisa, estructurada y útil.', + + // Italienisch + it: 'Sei un assistente utile che analizza ed elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni fornite. Rispondi in Markdown con un bel formato. Non usare tabelle o codice in Markdown. Rispondi in modo preciso, strutturato e utile.', + + // Niederländisch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord in Markdown met een mooi formaat. Gebruik geen tabellen of code in Markdown. 
Antwoord precies, gestructureerd en behulpzaam.', + + // Portugiesisch + pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções fornecidas. Responda em Markdown com um formato bonito. Não use tabelas ou código em Markdown. Responda de forma precisa, estruturada e útil.', + + // Russisch + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров в соответствии с данными инструкциями. Отвечайте в Markdown с красивым форматированием. Не используйте таблицы или код в Markdown. Отвечайте точно, структурированно и полезно.', + + // Japanisch + ja: 'あなたはテキストを分析し処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の文字起こしを処理することです。Markdownで美しいフォーマットで回答してください。Markdownでテーブルやコードを使用しないでください。正確で、構造化され、役立つように回答してください。', + + // Koreanisch + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화 녹취록을 처리하는 것입니다. 멋진 형식의 Markdown으로 응답하세요. Markdown에서 표나 코드를 사용하지 마세요. 정확하고 구조화되며 도움이 되도록 응답하세요.', + + // Chinesisch + zh: '你是一个有用的助手,分析和处理文本。你的任务是根据给定的指示处理对话记录。以优美的Markdown格式回复。不要在Markdown中使用表格或代码。回复要准确、有条理、有帮助。', + + // Arabisch + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نصوص المحادثات وفقًا للتعليمات المعطاة. أجب بتنسيق Markdown جميل. لا تستخدم الجداول أو الكود في Markdown. أجب بدقة وبشكل منظم ومفيد.', + + // Hindi + hi: 'आप एक सहायक सहायक हैं जो ग्रंथों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार वार्तालाप प्रतिलेखों को संसाधित करना है। एक अच्छे प्रारूप के साथ Markdown में उत्तर दें। Markdown में तालिकाओं या कोड का उपयोग न करें। सटीक, संरचित और सहायक रूप से उत्तर दें।', + + // Türkisch + tr: "Metinleri analiz eden ve işleyen yardımcı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Güzel bir formatla Markdown'da yanıt verin. Markdown'da tablo veya kod kullanmayın. 
Kesin, yapılandırılmış ve yararlı bir şekilde yanıt verin.", + + // Polnisch + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj w Markdown z ładnym formatowaniem. Nie używaj tabel ani kodu w Markdown. Odpowiadaj precyzyjnie, strukturalnie i pomocnie.', + + // Dänisch + da: 'Du er en hjælpsom assistent, der analyserer og behandler tekster. Din opgave er at behandle samtaleudskrifter i henhold til de givne instruktioner. Svar i Markdown med et pænt format. Brug ikke tabeller eller kode i Markdown. Svar præcist, struktureret og hjælpsomt.', + + // Schwedisch + sv: 'Du är en hjälpsam assistent som analyserar och bearbetar texter. Din uppgift är att bearbeta samtalstranskriptioner enligt givna instruktioner. Svara i Markdown med ett snyggt format. Använd inte tabeller eller kod i Markdown. Svara exakt, strukturerat och hjälpsamt.', + + // Norwegisch + nb: 'Du er en hjelpsom assistent som analyserer og behandler tekster. Din oppgave er å behandle samtaletranskripsjoner i henhold til gitte instruksjoner. Svar i Markdown med et pent format. Ikke bruk tabeller eller kode i Markdown. Svar presist, strukturert og hjelpsomt.', + + // Finnisch + fi: 'Olet hyödyllinen avustaja, joka analysoi ja käsittelee tekstejä. Tehtäväsi on käsitellä keskustelulitterointeja annettujen ohjeiden mukaisesti. Vastaa Markdownissa kauniilla muotoilulla. Älä käytä taulukoita tai koodia Markdownissa. Vastaa tarkasti, jäsennellysti ja avuliaasti.', + + // Tschechisch + cs: 'Jste užitečný asistent, který analyzuje a zpracovává texty. Vaším úkolem je zpracovávat přepisy konverzací podle daných pokynů. Odpovězte v Markdownu s pěkným formátováním. Nepoužívejte tabulky nebo kód v Markdownu. Odpovězte přesně, strukturovaně a užitečně.', + + // Ungarisch + hu: 'Ön egy hasznos asszisztens, aki szövegeket elemez és dolgoz fel. 
Az Ön feladata a beszélgetések átiratainak feldolgozása a megadott utasítások szerint. Válaszoljon Markdownban szép formázással. Ne használjon táblázatokat vagy kódot a Markdownban. Válaszoljon pontosan, strukturáltan és hasznossan.', + + // Griechisch + el: 'Είστε ένας χρήσιμος βοηθός που αναλύει και επεξεργάζεται κείμενα. Το καθήκον σας είναι να επεξεργάζεστε μεταγραφές συνομιλιών σύμφωνα με τις δοθείσες οδηγίες. Απαντήστε σε Markdown με όμορφη μορφοποίηση. Μην χρησιμοποιείτε πίνακες ή κώδικα στο Markdown. Απαντήστε με ακρίβεια, δομημένα και χρήσιμα.', + + // Hebräisch + he: 'אתה עוזר מועיל שמנתח ומעבד טקסטים. המשימה שלך היא לעבד תמלילי שיחות בהתאם להוראות שניתנו. הגב ב-Markdown עם עיצוב יפה. אל תשתמש בטבלאות או קוד ב-Markdown. הגב בצורה מדויקת, מובנית ומועילה.', + + // Indonesisch + id: 'Anda adalah asisten yang membantu menganalisis dan memproses teks. Tugas Anda adalah memproses transkrip percakapan sesuai dengan instruksi yang diberikan. Tanggapi dalam Markdown dengan format yang bagus. Jangan gunakan tabel atau kode dalam Markdown. Tanggapi dengan tepat, terstruktur, dan bermanfaat.', + + // Thai + th: 'คุณเป็นผู้ช่วยที่มีประโยชน์ที่วิเคราะห์และประมวลผลข้อความ งานของคุณคือประมวลผลบทสนทนาตามคำแนะนำที่กำหนด ตอบกลับใน Markdown ด้วยรูปแบบที่สวยงาม อย่าใช้ตารางหรือโค้ดใน Markdown ตอบกลับอย่างแม่นยำ มีโครงสร้าง และเป็นประโยชน์', + + // Vietnamesisch + vi: 'Bạn là một trợ lý hữu ích phân tích và xử lý văn bản. Nhiệm vụ của bạn là xử lý bản ghi cuộc trò chuyện theo hướng dẫn đã cho. Trả lời bằng Markdown với định dạng đẹp. Không sử dụng bảng hoặc mã trong Markdown. Trả lời chính xác, có cấu trúc và hữu ích.', + + // Ukrainisch + uk: 'Ви корисний помічник, який аналізує та обробляє тексти. Ваше завдання - обробляти розшифровки розмов відповідно до наданих інструкцій. Відповідайте в Markdown з гарним форматуванням. Не використовуйте таблиці або код у Markdown. 
Відповідайте точно, структуровано та корисно.', + + // Rumänisch + ro: 'Sunteți un asistent util care analizează și procesează texte. Sarcina dvs. este să procesați transcrierile conversațiilor conform instrucțiunilor date. Răspundeți în Markdown cu un format frumos. Nu utilizați tabele sau cod în Markdown. Răspundeți precis, structurat și util.', + + // Bulgarisch + bg: 'Вие сте полезен асистент, който анализира и обработва текстове. Вашата задача е да обработвате транскрипции на разговори според дадените инструкции. Отговорете в Markdown с красив формат. Не използвайте таблици или код в Markdown. Отговорете точно, структурирано и полезно.', + + // Katalanisch + ca: 'Ets un assistent útil que analitza i processa textos. La teva tasca és processar transcripcions de converses segons les instruccions donades. Respon en Markdown amb un format bonic. No utilitzis taules o codi en Markdown. Respon de manera precisa, estructurada i útil.', + + // Kroatisch + hr: 'Vi ste korisni asistent koji analizira i obrađuje tekstove. Vaš zadatak je obraditi transkripcije razgovora prema danim uputama. Odgovorite u Markdownu s lijepim formatom. Ne koristite tablice ili kod u Markdownu. Odgovorite precizno, strukturirano i korisno.', + + // Slowakisch + sk: 'Ste užitočný asistent, ktorý analyzuje a spracováva texty. Vašou úlohou je spracovávať prepisy konverzácií podľa daných pokynov. Odpovedzte v Markdowne s pekným formátovaním. Nepoužívajte tabuľky alebo kód v Markdowne. Odpovedzte presne, štruktúrovane a užitočne.', + + // Estnisch + et: 'Olete kasulik assistent, kes analüüsib ja töötleb tekste. Teie ülesanne on töödelda vestluste ärakirju vastavalt antud juhistele. Vastake Markdownis ilusa vorminguga. Ärge kasutage Markdownis tabeleid ega koodi. Vastake täpselt, struktureeritult ja kasulikult.', + + // Lettisch + lv: 'Jūs esat noderīgs asistents, kas analizē un apstrādā tekstus. Jūsu uzdevums ir apstrādāt sarunu atšifrējumus saskaņā ar dotajiem norādījumiem. 
Atbildiet Markdown ar skaistu formatējumu. Neizmantojiet tabulas vai kodu Markdown. Atbildiet precīzi, strukturēti un noderīgi.', + + // Litauisch + lt: 'Esate naudingas asistentas, kuris analizuoja ir apdoroja tekstus. Jūsų užduotis yra apdoroti pokalbių stenogramas pagal pateiktas instrukcijas. Atsakykite Markdown su gražiu formatavimu. Nenaudokite lentelių ar kodo Markdown. Atsakykite tiksliai, struktūrizuotai ir naudingai.', + + // Bengalisch + bn: 'আপনি একজন সহায়ক সহকারী যিনি পাঠ্য বিশ্লেষণ এবং প্রক্রিয়া করেন। আপনার কাজ হল প্রদত্ত নির্দেশাবলী অনুসারে কথোপকথনের প্রতিলিপি প্রক্রিয়া করা। সুন্দর বিন্যাসের সাথে Markdown-এ উত্তর দিন। Markdown-এ টেবিল বা কোড ব্যবহার করবেন না। সুনির্দিষ্ট, কাঠামোগত এবং সহায়কভাবে উত্তর দিন।', + + // Malaiisch + ms: 'Anda adalah pembantu berguna yang menganalisis dan memproses teks. Tugas anda adalah memproses transkrip perbualan mengikut arahan yang diberikan. Balas dalam Markdown dengan format yang cantik. Jangan gunakan jadual atau kod dalam Markdown. Balas dengan tepat, berstruktur dan berguna.', + + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து செயலாக்கும் பயனுள்ள உதவியாளர். கொடுக்கப்பட்ட அறிவுறுத்தல்களின்படி உரையாடல் படியெடுப்புகளை செயலாக்குவது உங்கள் பணி. அழகான வடிவத்துடன் Markdown இல் பதிலளிக்கவும். Markdown இல் அட்டவணைகள் அல்லது குறியீட்டைப் பயன்படுத்த வேண்டாம். துல்லியமாக, கட்டமைக்கப்பட்ட மற்றும் பயனுள்ள வகையில் பதிலளிக்கவும்.', + + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి ప్రాసెస్ చేసే సహాయక అసిస్టెంట్. ఇచ్చిన సూచనల ప్రకారం సంభాషణ ట్రాన్స్‌క్రిప్ట్‌లను ప్రాసెస్ చేయడం మీ పని. అందమైన ఫార్మాట్‌తో Markdown లో స్పందించండి. Markdown లో పట్టికలు లేదా కోడ్ ఉపయోగించవద్దు. 
ఖచ్చితంగా, నిర్మాణాత్మకంగా మరియు సహాయకరంగా స్పందించండి.', + + // Urdu + ur: 'آپ ایک مددگار معاون ہیں جو متن کا تجزیہ اور عمل کرتے ہیں۔ آپ کا کام دی گئی ہدایات کے مطابق گفتگو کی نقلیں پروسیس کرنا ہے۔ خوبصورت فارمیٹ کے ساتھ Markdown میں جواب دیں۔ Markdown میں ٹیبلز یا کوڈ استعمال نہ کریں۔ درست، منظم اور مددگار طریقے سے جواب دیں۔', + + // Marathi + mr: 'तुम्ही एक उपयुक्त सहाय्यक आहात जो मजकूरांचे विश्लेषण आणि प्रक्रिया करतो. दिलेल्या सूचनांनुसार संभाषण प्रतिलेखनांवर प्रक्रिया करणे हे तुमचे कार्य आहे. सुंदर स्वरूपासह Markdown मध्ये उत्तर द्या. Markdown मध्ये सारण्या किंवा कोड वापरू नका. अचूक, संरचित आणि उपयुक्त पद्धतीने उत्तर द्या.', + + // Gujarati + gu: 'તમે એક મદદરૂપ સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને પ્રક્રિયા કરે છે. આપેલી સૂચનાઓ અનુસાર વાતચીતની ટ્રાન્સક્રિપ્ટ્સ પર પ્રક્રિયા કરવી એ તમારું કામ છે. સુંદર ફોર્મેટ સાથે Markdown માં જવાબ આપો. Markdown માં કોષ્ટકો અથવા કોડનો ઉપયોગ કરશો નહીં. ચોક્કસ, સંરચિત અને મદદરૂપ રીતે જવાબ આપો.', + + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും പ്രോസസ്സ് ചെയ്യുകയും ചെയ്യുന്ന സഹായകരമായ സഹായിയാണ്. നൽകിയിരിക്കുന്ന നിർദ്ദേശങ്ങൾ അനുസരിച്ച് സംഭാഷണ ട്രാൻസ്ക്രിപ്റ്റുകൾ പ്രോസസ്സ് ചെയ്യുക എന്നതാണ് നിങ്ങളുടെ ജോലി. മനോഹരമായ ഫോർമാറ്റിൽ Markdown ൽ പ്രതികരിക്കുക. Markdown ൽ ടേബിളുകളോ കോഡോ ഉപയോഗിക്കരുത്. കൃത്യമായും ഘടനാപരമായും സഹായകരമായും പ്രതികരിക്കുക.', + + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವ ಸಹಾಯಕ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ನೀಡಿದ ಸೂಚನೆಗಳ ಪ್ರಕಾರ ಸಂಭಾಷಣೆ ಪ್ರತಿಲಿಪಿಗಳನ್ನು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ. ಸುಂದರ ಸ್ವರೂಪದೊಂದಿಗೆ Markdown ನಲ್ಲಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ. Markdown ನಲ್ಲಿ ಕೋಷ್ಟಕಗಳು ಅಥವಾ ಕೋಡ್ ಬಳಸಬೇಡಿ. 
ನಿಖರವಾಗಿ, ರಚನಾತ್ಮಕವಾಗಿ ಮತ್ತು ಸಹಾಯಕವಾಗಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ.', + + // Punjabi + pa: 'ਤੁਸੀਂ ਇੱਕ ਮਦਦਗਾਰ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਪ੍ਰਕਿਰਿਆ ਕਰਦੇ ਹੋ। ਤੁਹਾਡਾ ਕੰਮ ਦਿੱਤੀਆਂ ਹਦਾਇਤਾਂ ਅਨੁਸਾਰ ਗੱਲਬਾਤ ਦੀਆਂ ਨਕਲਾਂ ਨੂੰ ਪ੍ਰਕਿਰਿਆ ਕਰਨਾ ਹੈ। ਸੁੰਦਰ ਫਾਰਮੈਟ ਨਾਲ Markdown ਵਿੱਚ ਜਵਾਬ ਦਿਓ। Markdown ਵਿੱਚ ਸਾਰਣੀਆਂ ਜਾਂ ਕੋਡ ਦੀ ਵਰਤੋਂ ਨਾ ਕਰੋ। ਸਟੀਕ, ਢਾਂਚਾਗਤ ਅਤੇ ਮਦਦਗਾਰ ਢੰਗ ਨਾਲ ਜਵਾਬ ਦਿਓ।', + + // Afrikaans + af: "Jy is 'n nuttige assistent wat tekste ontleed en verwerk. Jou taak is om gespreksafskrifte te verwerk volgens die gegewe instruksies. Antwoord in Markdown met 'n mooi formaat. Moenie tabelle of kode in Markdown gebruik nie. Antwoord presies, gestruktureerd en nuttig.", + + // Persisch + fa: 'شما یک دستیار مفید هستید که متون را تحلیل و پردازش می‌کند. وظیفه شما پردازش رونوشت‌های مکالمات طبق دستورالعمل‌های داده شده است. با فرمت زیبا در Markdown پاسخ دهید. از جداول یا کد در Markdown استفاده نکنید. به طور دقیق، ساختاریافته و مفید پاسخ دهید.', + + // Georgisch + ka: 'თქვენ ხართ სასარგებლო ასისტენტი, რომელიც აანალიზებს და ამუშავებს ტექსტებს. თქვენი ამოცანაა საუბრების ჩანაწერების დამუშავება მოცემული ინსტრუქციების შესაბამისად. უპასუხეთ Markdown-ში ლამაზი ფორმატით. არ გამოიყენოთ ცხრილები ან კოდი Markdown-ში. უპასუხეთ ზუსტად, სტრუქტურირებულად და სასარგებლოდ.', + + // Isländisch + is: 'Þú ert gagnlegur aðstoðarmaður sem greinir og vinnur úr textum. Verkefni þitt er að vinna úr samtalsskrám samkvæmt gefnum leiðbeiningum. Svaraðu í Markdown með fallegu sniði. Notaðu ekki töflur eða kóða í Markdown. Svaraðu nákvæmlega, skipulega og gagnlega.', + + // Albanisch + sq: 'Ju jeni një asistent i dobishëm që analizon dhe përpunon tekste. Detyra juaj është të përpunoni transkriptet e bisedave sipas udhëzimeve të dhëna. Përgjigjuni në Markdown me një format të bukur. Mos përdorni tabela ose kod në Markdown. Përgjigjuni saktësisht, të strukturuar dhe të dobishëm.', + + // Aserbaidschanisch + az: 'Siz mətnləri təhlil edən və emal edən faydalı köməkçisiniz. 
Sizin vəzifəniz verilmiş təlimatlara uyğun olaraq söhbət transkriptlərini emal etməkdir. Gözəl formatla Markdown-da cavab verin. Markdown-da cədvəllər və ya kod istifadə etməyin. Dəqiq, strukturlaşdırılmış və faydalı şəkildə cavab verin.', + + // Baskisch + eu: 'Testuak aztertzen eta prozesatzen dituen laguntzaile erabilgarria zara. Zure zeregina elkarrizketen transkripzioak prozesatzea da emandako argibideen arabera. Erantzun Markdownean formatu ederrarekin. Ez erabili taulak edo kodea Markdownean. Erantzun zehatz, egituratuta eta lagungarri.', + + // Galizisch + gl: 'Es un asistente útil que analiza e procesa textos. A túa tarefa é procesar transcricións de conversas segundo as instrucións dadas. Responde en Markdown cun formato bonito. Non uses táboas ou código en Markdown. Responde de forma precisa, estruturada e útil.', + + // Kasachisch + kk: 'Сіз мәтіндерді талдайтын және өңдейтін пайдалы көмекшісіз. Сіздің міндетіңіз берілген нұсқауларға сәйкес сөйлесу транскрипттерін өңдеу. Әдемі пішіммен Markdown-да жауап беріңіз. Markdown-да кестелер немесе код қолданбаңыз. Дәл, құрылымдалған және пайдалы түрде жауап беріңіз.', + + // Mazedonisch + mk: 'Вие сте корисен асистент кој анализира и обработува текстови. Вашата задача е да обработувате транскрипти на разговори според дадените упатства. Одговорете во Markdown со убав формат. Не користете табели или код во Markdown. Одговорете прецизно, структурирано и корисно.', + + // Serbisch + sr: 'Ви сте корисни асистент који анализира и обрађује текстове. Ваш задатак је да обрађујете транскрипте разговора према датим упутствима. Одговорите у Markdown-у са лепим форматом. Не користите табеле или код у Markdown-у. Одговорите прецизно, структурисано и корисно.', + + // Slowenisch + sl: 'Ste koristen pomočnik, ki analizira in obdeluje besedila. Vaša naloga je obdelati prepise pogovorov v skladu z danimi navodili. Odgovorite v Markdownu z lepim formatom. Ne uporabljajte tabel ali kode v Markdownu. 
Odgovorite natančno, strukturirano in koristno.', + + // Maltesisch + mt: "Inti assistent utli li janalizza u jipproċessa testi. Il-kompitu tiegħek huwa li tipproċessa traskrizzjonijiet ta' konversazzjonijiet skont l-istruzzjonijiet mogħtija. Wieġeb f'Markdown b'format sabiħ. Tużax tabelli jew kodiċi f'Markdown. Wieġeb b'mod preċiż, strutturat u utli.", + + // Armenisch + hy: 'Դուք օգտակար օգնական եք, որը վերլուծում և մշակում է տեքստեր: Ձեր խնդիրն է մշակել զրույցների արձանագրությունները տրված հրահանգների համաձայն: Պատասխանեք Markdown-ում գեղեցիկ ձևաչափով: Մի օգտագործեք աղյուսակներ կամ կոդ Markdown-ում: Պատասխանեք ճշգրիտ, կառուցվածքային և օգտակար:', + + // Usbekisch + uz: "Siz matnlarni tahlil qiluvchi va qayta ishlovchi foydali yordamchisiz. Sizning vazifangiz berilgan ko'rsatmalarga muvofiq suhbat transkriptlarini qayta ishlashdir. Chiroyli formatda Markdown-da javob bering. Markdown-da jadvallar yoki koddan foydalanmang. Aniq, tuzilgan va foydali tarzda javob bering.", + + // Irisch + ga: 'Is cúntóir cabhrach thú a dhéanann anailís agus próiseáil ar théacsanna. Is é do thasc tras-scríbhinní comhrá a phróiseáil de réir na dtreoracha a thugtar. Freagair i Markdown le formáid álainn. Ná húsáid táblaí ná cód i Markdown. Freagair go beacht, struchtúrtha agus cabhrach.', + + // Walisisch + cy: "Rydych chi'n gynorthwyydd defnyddiol sy'n dadansoddi ac yn prosesu testunau. Eich tasg yw prosesu trawsgrifiadau sgwrs yn ôl y cyfarwyddiadau a roddir. Atebwch yn Markdown gyda fformat hardd. Peidiwch â defnyddio tablau na chod yn Markdown. Atebwch yn fanwl gywir, wedi'i strwythuro ac yn ddefnyddiol.", + + // Filipino + fil: 'Ikaw ay isang kapaki-pakinabang na katulong na nag-aanalisa at nagpoproseso ng mga teksto. Ang iyong gawain ay iproseso ang mga transkripsyon ng pag-uusap ayon sa mga ibinigay na tagubilin. Tumugon sa Markdown na may magandang format. Huwag gumamit ng mga talahanayan o code sa Markdown. 
Tumugon nang tumpak, nakaayos, at nakakatulong.', + }, +}; diff --git a/apps/memoro/apps/backend/src/ai/shared/transcript-utils.ts b/apps/memoro/apps/backend/src/ai/shared/transcript-utils.ts new file mode 100644 index 000000000..381516322 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/shared/transcript-utils.ts @@ -0,0 +1,81 @@ +/** + * Shared utility functions for handling transcript generation from utterances + * Used across multiple edge functions + */ + +/** + * Generate a plain text transcript from utterances array + * @param utterances - Array of utterance objects with text property + * @returns Plain text transcript string + */ +export function generateTranscriptFromUtterances( + utterances?: Array<{ + text: string; + speakerId?: string; + offset?: number; + duration?: number; + }> | null +): string { + if (!utterances || !Array.isArray(utterances) || utterances.length === 0) { + return ''; + } + + // Sort utterances by offset if available + const sortedUtterances = [...utterances].sort((a, b) => { + const offsetA = a.offset || 0; + const offsetB = b.offset || 0; + return offsetA - offsetB; + }); + + // Concatenate all utterance texts with spaces + return sortedUtterances + .map((utterance) => utterance.text) + .filter((text) => text && text.trim() !== '') + .join(' '); +} + +/** + * Get transcript text from memo (generates from utterances or returns legacy transcript) + * @param memo - The memo object + * @returns The transcript text + */ +export function getTranscriptText(memo: any): string { + // If utterances exist, generate transcript from them + if ( + memo?.source?.utterances && + Array.isArray(memo.source.utterances) && + memo.source.utterances.length > 0 + ) { + return generateTranscriptFromUtterances(memo.source.utterances); + } + + // Fall back to legacy transcript fields for backward compatibility + return ( + memo?.transcript || + memo?.source?.transcript || + memo?.source?.content || + memo?.source?.transcription || + memo?.source?.text 
|| + memo?.metadata?.transcript || + '' + ); +} + +/** + * Get transcript from additional recording + * @param recording - The additional recording object + * @returns The transcript text + */ +export function getRecordingTranscript(recording: any): string { + // If utterances exist, generate transcript from them + if ( + recording?.utterances && + Array.isArray(recording.utterances) && + recording.utterances.length > 0 + ) { + return generateTranscriptFromUtterances(recording.utterances); + } + + // Fall back to transcript field + return recording?.transcript || ''; +} diff --git a/apps/memoro/apps/backend/src/ai/shared/user-prompt.service.ts b/apps/memoro/apps/backend/src/ai/shared/user-prompt.service.ts new file mode 100644 index 000000000..921c95143 --- /dev/null +++ b/apps/memoro/apps/backend/src/ai/shared/user-prompt.service.ts @@ -0,0 +1,49 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import { ROOT_SYSTEM_PROMPTS } from './system-prompts'; + +@Injectable() +export class UserPromptService { + private readonly logger = new Logger(UserPromptService.name); + private readonly supabaseUrl: string; + private readonly supabaseServiceKey: string; + + constructor(private configService: ConfigService) { + this.supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL', ''); + this.supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY', ''); + } + + /** + * Returns the system prompt for a user. + * If the user has defined a custom prompt, that one is used; + * otherwise the default PRE_PROMPT for the given language.
+ */ + async getSystemPrompt(userId: string, language = 'de'): Promise<string> { + try { + const supabase = createClient(this.supabaseUrl, this.supabaseServiceKey); + const { data: user, error } = await supabase + .from('users') + .select('app_settings') + .eq('id', userId) + .single(); + + if (!error && user?.app_settings?.memoro?.systemPrompt) { + this.logger.debug(`Using custom system prompt for user ${userId}`); + return user.app_settings.memoro.systemPrompt; + } + } catch (err) { + this.logger.warn(`Failed to load user system prompt, using default: ${err}`); + } + + const baseLang = language.split('-')[0].toLowerCase(); + return ROOT_SYSTEM_PROMPTS.PRE_PROMPT[baseLang] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de']; + } + + /** + * Returns the system prompt for the owner of a memo. + */ + async getSystemPromptForMemo(memoUserId: string, language = 'de'): Promise<string> { + return this.getSystemPrompt(memoUserId, language); + } +} diff --git a/apps/memoro/apps/backend/src/app.module.ts b/apps/memoro/apps/backend/src/app.module.ts new file mode 100644 index 000000000..cc8c3b6b8 --- /dev/null +++ b/apps/memoro/apps/backend/src/app.module.ts @@ -0,0 +1,32 @@ +import { Module } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { AuthModule } from './auth/auth.module'; +import { AuthProxyModule } from './auth-proxy/auth-proxy.module'; +import { SpacesModule } from './spaces/spaces.module'; +import { MemoroModule } from './memoro/memoro.module'; +import { MeetingsModule } from './meetings/meetings.module'; +import { HealthModule } from './health/health.module'; +import { CreditsModule } from './credits/credits.module'; +import { SettingsModule } from './settings/settings.module'; +import { CleanupModule } from './cleanup/cleanup.module'; +import { AiModule } from './ai/ai.module'; + +@Module({ + imports: [ + ConfigModule.forRoot({ + isGlobal: true, + ignoreEnvFile: process.env.NODE_ENV === 'production', + }), + AuthModule, + AuthProxyModule, +
SpacesModule, + MemoroModule, + MeetingsModule, + HealthModule, + CreditsModule, + SettingsModule, + CleanupModule, + AiModule, + ], +}) +export class AppModule {} diff --git a/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.spec.ts b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.spec.ts new file mode 100644 index 000000000..2f9c4ecc5 --- /dev/null +++ b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.spec.ts @@ -0,0 +1,222 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { AuthProxyController } from './auth-proxy.controller'; +import { AuthProxyService } from './auth-proxy.service'; +import { HttpException, HttpStatus } from '@nestjs/common'; + +describe('AuthProxyController', () => { + let controller: AuthProxyController; + let service: jest.Mocked<AuthProxyService>; + + const mockAuthProxyService = { + signin: jest.fn(), + signup: jest.fn(), + googleSignin: jest.fn(), + appleSignin: jest.fn(), + refresh: jest.fn(), + logout: jest.fn(), + forgotPassword: jest.fn(), + validate: jest.fn(), + getCredits: jest.fn(), + getDevices: jest.fn(), + }; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + controllers: [AuthProxyController], + providers: [ + { + provide: AuthProxyService, + useValue: mockAuthProxyService, + }, + ], + }).compile(); + + controller = module.get(AuthProxyController); + service = module.get(AuthProxyService); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + describe('signin', () => { + it('should call authProxyService.signin with payload', async () => { + const payload = { email: 'test@test.com', password: 'password' }; + const expectedResult = { token: 'token', user: { id: '123' } }; + + mockAuthProxyService.signin.mockResolvedValue(expectedResult); + + const result = await controller.signin(payload); + + expect(service.signin).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + + it('should handle service errors',
async () => { + const payload = { email: 'test@test.com', password: 'password' }; + const error = new Error('Service error'); + + mockAuthProxyService.signin.mockRejectedValue(error); + + await expect(controller.signin(payload)).rejects.toThrow(error); + }); + }); + + describe('signup', () => { + it('should call authProxyService.signup with payload', async () => { + const payload = { email: 'test@test.com', password: 'password' }; + const expectedResult = { user: { id: '123' } }; + + mockAuthProxyService.signup.mockResolvedValue(expectedResult); + + const result = await controller.signup(payload); + + expect(service.signup).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('googleSignin', () => { + it('should call authProxyService.googleSignin with payload', async () => { + const payload = { idToken: 'google-token' }; + const expectedResult = { token: 'token', user: { id: '123' } }; + + mockAuthProxyService.googleSignin.mockResolvedValue(expectedResult); + + const result = await controller.googleSignin(payload); + + expect(service.googleSignin).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('appleSignin', () => { + it('should call authProxyService.appleSignin with payload', async () => { + const payload = { idToken: 'apple-token' }; + const expectedResult = { token: 'token', user: { id: '123' } }; + + mockAuthProxyService.appleSignin.mockResolvedValue(expectedResult); + + const result = await controller.appleSignin(payload); + + expect(service.appleSignin).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('refresh', () => { + it('should call authProxyService.refresh with payload', async () => { + const payload = { refreshToken: 'refresh-token' }; + const expectedResult = { token: 'new-token', refreshToken: 'new-refresh' }; + + mockAuthProxyService.refresh.mockResolvedValue(expectedResult); + + const result = await 
controller.refresh(payload); + + expect(service.refresh).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('logout', () => { + it('should call authProxyService.logout with payload', async () => { + const payload = { token: 'token' }; + + mockAuthProxyService.logout.mockResolvedValue(undefined); + + const result = await controller.logout(payload); + + expect(service.logout).toHaveBeenCalledWith(payload); + expect(result).toBeUndefined(); + }); + + it('should have HttpCode 204', async () => { + const metadata = Reflect.getMetadata('__httpCode__', controller.logout); + expect(metadata).toBe(204); + }); + }); + + describe('forgotPassword', () => { + it('should call authProxyService.forgotPassword with payload', async () => { + const payload = { email: 'test@test.com' }; + const expectedResult = { message: 'Password reset email sent' }; + + mockAuthProxyService.forgotPassword.mockResolvedValue(expectedResult); + + const result = await controller.forgotPassword(payload); + + expect(service.forgotPassword).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('validate', () => { + it('should call authProxyService.validate with payload', async () => { + const payload = { token: 'token' }; + const expectedResult = { valid: true, user: { id: '123' } }; + + mockAuthProxyService.validate.mockResolvedValue(expectedResult); + + const result = await controller.validate(payload); + + expect(service.validate).toHaveBeenCalledWith(payload); + expect(result).toEqual(expectedResult); + }); + }); + + describe('getCredits', () => { + it('should call authProxyService.getCredits with authorization header', async () => { + const authorization = 'Bearer token'; + const expectedResult = { credits: 100 }; + + mockAuthProxyService.getCredits.mockResolvedValue(expectedResult); + + const result = await controller.getCredits(authorization); + + 
expect(service.getCredits).toHaveBeenCalledWith(authorization); + expect(result).toEqual(expectedResult); + }); + + it('should throw UnauthorizedException when no authorization header', async () => { + await expect(controller.getCredits(undefined)).rejects.toThrow( + new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED) + ); + expect(service.getCredits).not.toHaveBeenCalled(); + }); + + it('should throw UnauthorizedException when empty authorization header', async () => { + await expect(controller.getCredits('')).rejects.toThrow( + new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED) + ); + expect(service.getCredits).not.toHaveBeenCalled(); + }); + }); + + describe('getDevices', () => { + it('should call authProxyService.getDevices with authorization header', async () => { + const authorization = 'Bearer token'; + const expectedResult = { devices: [{ id: 'device-1' }] }; + + mockAuthProxyService.getDevices.mockResolvedValue(expectedResult); + + const result = await controller.getDevices(authorization); + + expect(service.getDevices).toHaveBeenCalledWith(authorization); + expect(result).toEqual(expectedResult); + }); + + it('should throw UnauthorizedException when no authorization header', async () => { + await expect(controller.getDevices(undefined)).rejects.toThrow( + new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED) + ); + expect(service.getDevices).not.toHaveBeenCalled(); + }); + + it('should throw UnauthorizedException when empty authorization header', async () => { + await expect(controller.getDevices('')).rejects.toThrow( + new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED) + ); + expect(service.getDevices).not.toHaveBeenCalled(); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.ts b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.ts new file mode 100644 index 000000000..544dd35aa --- /dev/null +++ 
b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.controller.ts @@ -0,0 +1,93 @@ +import { + Controller, + Post, + Get, + Body, + Headers, + HttpCode, + HttpException, + HttpStatus, +} from '@nestjs/common'; +import { AuthProxyService } from './auth-proxy.service'; + +@Controller('auth') +export class AuthProxyController { + constructor(private readonly authProxyService: AuthProxyService) {} + + @Post('signin') + async signin(@Body() payload: any) { + return this.authProxyService.signin(payload); + } + + /** + * Signup endpoint + * + * Optional: Include metadata.branding to customize signup email + * If not provided, mana-core uses default branding for the app + * + * Example with custom branding: + * { + * "email": "user@example.com", + * "password": "pass123", + * "deviceInfo": {...}, + * "metadata": { + * "branding": { + * "logoUrl": "custom-logo.svg", + * "primaryColor": "#FF5733" + * } + * } + * } + */ + @Post('signup') + async signup(@Body() payload: any) { + return this.authProxyService.signup(payload); + } + + @Post('google-signin') + async googleSignin(@Body() payload: any) { + return this.authProxyService.googleSignin(payload); + } + + @Post('apple-signin') + async appleSignin(@Body() payload: any) { + return this.authProxyService.appleSignin(payload); + } + + @Post('refresh') + async refresh(@Body() payload: any) { + return this.authProxyService.refresh(payload); + } + + @Post('logout') + @HttpCode(204) + async logout(@Body() payload: any) { + return this.authProxyService.logout(payload); + } + + @Post('forgot-password') + async forgotPassword(@Body() payload: any) { + return this.authProxyService.forgotPassword(payload); + } + + @Post('validate') + async validate(@Body() payload: any) { + return this.authProxyService.validate(payload); + } + + @Get('credits') + async getCredits(@Headers('authorization') authorization: string) { + if (!authorization) { + throw new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED); + } + return 
this.authProxyService.getCredits(authorization); + } + + // Device management endpoints + @Get('devices') + async getDevices(@Headers('authorization') authorization: string) { + if (!authorization) { + throw new HttpException('Authorization header required', HttpStatus.UNAUTHORIZED); + } + return this.authProxyService.getDevices(authorization); + } +} diff --git a/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.module.ts b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.module.ts new file mode 100644 index 000000000..afabd763a --- /dev/null +++ b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.module.ts @@ -0,0 +1,13 @@ +import { Module } from '@nestjs/common'; +import { HttpModule } from '@nestjs/axios'; +import { ConfigModule } from '@nestjs/config'; +import { AuthProxyController } from './auth-proxy.controller'; +import { AuthProxyService } from './auth-proxy.service'; + +@Module({ + imports: [HttpModule, ConfigModule], + controllers: [AuthProxyController], + providers: [AuthProxyService], + exports: [AuthProxyService], +}) +export class AuthProxyModule {} diff --git a/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.spec.ts b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.spec.ts new file mode 100644 index 000000000..be080a6c2 --- /dev/null +++ b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.spec.ts @@ -0,0 +1,400 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { AuthProxyService } from './auth-proxy.service'; +import { of, throwError } from 'rxjs'; +import { AxiosResponse, AxiosError } from 'axios'; + +describe('AuthProxyService', () => { + let service: AuthProxyService; + let httpService: jest.Mocked<HttpService>; + let configService: jest.Mocked<ConfigService>; + + const mockHttpService = { + post: jest.fn(), + get: jest.fn(), + }; + + const mockConfigService = { + get: jest.fn(), + }; + + const authServiceUrl = 
'http://localhost:3000'; + const memoroAppId = 'test-app-id'; + + beforeEach(async () => { + // Reset mocks + mockConfigService.get.mockReset(); + mockHttpService.post.mockReset(); + mockHttpService.get.mockReset(); + + // Setup config mock + mockConfigService.get.mockImplementation((key: string, defaultValue?: any) => { + switch (key) { + case 'MANA_SERVICE_URL': + return authServiceUrl; + case 'MEMORO_APP_ID': + return memoroAppId; + default: + return defaultValue; + } + }); + + const module: TestingModule = await Test.createTestingModule({ + providers: [ + AuthProxyService, + { + provide: HttpService, + useValue: mockHttpService, + }, + { + provide: ConfigService, + useValue: mockConfigService, + }, + ], + }).compile(); + + service = module.get(AuthProxyService); + httpService = module.get(HttpService); + configService = module.get(ConfigService); + + // Mock console methods to avoid test output noise + jest.spyOn(console, 'log').mockImplementation(() => {}); + jest.spyOn(console, 'error').mockImplementation(() => {}); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + describe('signin', () => { + it('should forward signin request to auth service', async () => { + const payload = { email: 'test@test.com', password: 'password' }; + const expectedResponse = { token: 'token', user: { id: '123' } }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.signin(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/signin?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + + it('should handle signin errors', async () => { + const payload = { email: 'test@test.com', password: 'wrong' }; + const error: AxiosError = { + response: { + data: { message: 'Invalid credentials' }, + status: 401, 
+ statusText: 'Unauthorized', + headers: {}, + config: {} as any, + }, + config: {} as any, + isAxiosError: true, + toJSON: () => ({}), + name: 'AxiosError', + message: 'Request failed', + }; + + mockHttpService.post.mockReturnValue(throwError(() => error)); + + await expect(service.signin(payload)).rejects.toThrow(); + }); + }); + + describe('signup', () => { + it('should forward signup request to auth service', async () => { + const payload = { email: 'test@test.com', password: 'password' }; + const expectedResponse = { user: { id: '123' } }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 201, + statusText: 'Created', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.signup(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/signup?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + }); + + describe('googleSignin', () => { + it('should forward google signin request to auth service', async () => { + const payload = { idToken: 'google-token' }; + const expectedResponse = { token: 'token', user: { id: '123' } }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.googleSignin(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/google-signin?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + }); + + describe('appleSignin', () => { + it('should forward apple signin request to auth service', async () => { + const payload = { idToken: 'apple-token' }; + const expectedResponse = { token: 'token', user: { id: '123' } }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + 
status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.appleSignin(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/apple-signin?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + }); + + describe('refresh', () => { + it('should forward refresh request to auth service with deviceInfo', async () => { + const payload = { + refreshToken: 'refresh-token', + deviceInfo: { + platform: 'ios', + deviceId: 'device-123', + appVersion: '1.0.0', + }, + }; + const expectedResponse = { token: 'new-token', refreshToken: 'new-refresh' }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.refresh(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/refresh?appId=${memoroAppId}`, + { + refreshToken: 'refresh-token', + appId: memoroAppId, + deviceInfo: payload.deviceInfo, + }, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + + it('should throw BadRequestException when deviceInfo is missing', async () => { + const payload = { refreshToken: 'refresh-token' }; + + await expect(service.refresh(payload)).rejects.toThrow( + 'Device info is required for token refresh' + ); + expect(httpService.post).not.toHaveBeenCalled(); + }); + + it('should throw BadRequestException when refreshToken is missing', async () => { + const payload = { + deviceInfo: { + platform: 'ios', + deviceId: 'device-123', + }, + }; + + await expect(service.refresh(payload)).rejects.toThrow('Refresh token is required'); + expect(httpService.post).not.toHaveBeenCalled(); + }); + }); + + describe('logout', () => { + it('should forward logout request to auth service', 
async () => { + const payload = { token: 'token' }; + const axiosResponse: AxiosResponse = { + data: null, + status: 204, + statusText: 'No Content', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.logout(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/logout?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toBeNull(); + }); + }); + + describe('forgotPassword', () => { + it('should forward forgot password request to auth service', async () => { + const payload = { email: 'test@test.com' }; + const expectedResponse = { message: 'Password reset email sent' }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.forgotPassword(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/forgot-password?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + }); + + describe('validate', () => { + it('should forward validate request to auth service', async () => { + const payload = { token: 'token' }; + const expectedResponse = { valid: true, user: { id: '123' } }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.validate(payload); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/validate?appId=${memoroAppId}`, + payload, + expect.any(Object) + ); + expect(result).toEqual(expectedResponse); + }); + }); + + describe('getCredits', () => { + it('should forward get credits request to auth service', async () => { + const authorization = 'Bearer 
token'; + const expectedResponse = { credits: 100 }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.get.mockReturnValue(of(axiosResponse)); + + const result = await service.getCredits(authorization); + + expect(httpService.get).toHaveBeenCalledWith( + `${authServiceUrl}/auth/credits?appId=${memoroAppId}`, + { + headers: { + Authorization: authorization, + }, + } + ); + expect(result).toEqual(expectedResponse); + }); + + it('should handle get credits errors', async () => { + const authorization = 'Bearer invalid'; + const error: AxiosError = { + response: { + data: { message: 'Unauthorized' }, + status: 401, + statusText: 'Unauthorized', + headers: {}, + config: {} as any, + }, + config: {} as any, + isAxiosError: true, + toJSON: () => ({}), + name: 'AxiosError', + message: 'Request failed', + }; + + mockHttpService.get.mockReturnValue(throwError(() => error)); + + await expect(service.getCredits(authorization)).rejects.toThrow(); + }); + }); + + describe('getDevices', () => { + it('should forward get devices request to auth service', async () => { + const authorization = 'Bearer token'; + const expectedResponse = { devices: [{ id: 'device-1' }] }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.get.mockReturnValue(of(axiosResponse)); + + const result = await service.getDevices(authorization); + + expect(httpService.get).toHaveBeenCalledWith( + `${authServiceUrl}/auth/devices?appId=${memoroAppId}`, + { + headers: { + Authorization: authorization, + }, + } + ); + expect(result).toEqual(expectedResponse); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.ts b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.ts new file mode 100644 index 000000000..32a086a6a --- /dev/null +++ 
b/apps/memoro/apps/backend/src/auth-proxy/auth-proxy.service.ts @@ -0,0 +1,228 @@ +import { Injectable, HttpException, HttpStatus } from '@nestjs/common'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { firstValueFrom, map, catchError } from 'rxjs'; +import { AxiosError } from 'axios'; +import { BrandingConfig, SignupMetadata } from './interfaces/branding.interface'; + +@Injectable() +export class AuthProxyService { + private manaServiceUrl: string; + private memoroAppId: string; + + constructor( + private httpService: HttpService, + private configService: ConfigService + ) { + this.manaServiceUrl = this.configService.get( + 'MANA_SERVICE_URL', + 'http://localhost:3000' + ); + this.memoroAppId = this.configService.get( + 'MEMORO_APP_ID', + '973da0c1-b479-4dac-a1b0-ed09c72caca8' + ); + } + + /** + * Generic proxy method for POST requests + */ + private async proxyPost(endpoint: string, payload: any, headers: any = {}) { + const url = `${this.manaServiceUrl}${endpoint}?appId=${this.memoroAppId}`; + + console.log(`[AuthProxy] Proxying POST request to: ${endpoint}`); + + try { + const response = await firstValueFrom( + this.httpService + .post(url, payload, { + headers: { + 'Content-Type': 'application/json', + ...headers, + }, + }) + .pipe( + map((res) => res.data), + catchError((error: AxiosError) => { + console.error(`[AuthProxy] Error from mana-core-middleware:`, error.response?.data); + + // Preserve the original error response + if (error.response) { + throw new HttpException( + error.response.data || 'Request failed', + error.response.status + ); + } + + throw new HttpException('Service unavailable', HttpStatus.SERVICE_UNAVAILABLE); + }) + ) + ); + + return response; + } catch (error) { + console.error(`[AuthProxy] Error proxying ${endpoint}:`, error); + throw error; + } + } + + /** + * Generic proxy method for GET requests + */ + private async proxyGet(endpoint: string, headers: any = {}) { + const url = 
`${this.manaServiceUrl}${endpoint}?appId=${this.memoroAppId}`; + + console.log(`[AuthProxy] Proxying GET request to: ${endpoint}`); + + try { + const response = await firstValueFrom( + this.httpService + .get(url, { + headers: { + ...headers, + }, + }) + .pipe( + map((res) => res.data), + catchError((error: AxiosError) => { + console.error(`[AuthProxy] Error from mana-core-middleware:`, error.response?.data); + + // Preserve the original error response + if (error.response) { + throw new HttpException( + error.response.data || 'Request failed', + error.response.status + ); + } + + throw new HttpException('Service unavailable', HttpStatus.SERVICE_UNAVAILABLE); + }) + ) + ); + + return response; + } catch (error) { + console.error(`[AuthProxy] Error proxying ${endpoint}:`, error); + throw error; + } + } + + // Auth endpoints + async signin(payload: any) { + // Log signin payload to understand device info flow + console.log('[AuthProxy] Signin request payload:', JSON.stringify(payload, null, 2)); + + if (payload.deviceInfo || payload.device_info) { + console.log('[AuthProxy] Device info present in signin request'); + } + + return this.proxyPost('/auth/signin', payload); + } + + async signup(payload: any) { + // Hardcoded Memoro branding configuration + const memoroBranding: BrandingConfig = { + appName: 'Memoro', + logoUrl: 'memoro-logo.png', + primaryColor: '#F8D62B', + secondaryColor: '#f5c500', + websiteUrl: 'https://memoro.ai', + taglineDe: 'Sprechen statt Tippen', + taglineEn: 'Speak Instead of Type', + copyright: '© 2025 Memoro · Made with 💛 in Germany', + }; + + // Build payload with Memoro branding + const enhancedPayload: any = { + ...payload, + redirectUrl: 'https://app.manacore.ai/welcome?appName=memoro', + }; + + // Add Memoro branding if not already provided in payload + if (!enhancedPayload.metadata) { + enhancedPayload.metadata = {}; + } + + // Merge: payload branding overrides default Memoro branding if provided + if 
(!enhancedPayload.metadata.branding) { + enhancedPayload.metadata.branding = memoroBranding; + } else { + // Merge: payload overrides default + enhancedPayload.metadata.branding = { + ...memoroBranding, + ...enhancedPayload.metadata.branding, + }; + } + + return this.proxyPost('/auth/signup', enhancedPayload); + } + + async googleSignin(payload: any) { + return this.proxyPost('/auth/google-signin', payload); + } + + async appleSignin(payload: any) { + return this.proxyPost('/auth/apple-signin', payload); + } + + async refresh(payload: any) { + // Log the refresh payload to debug device info issues + console.log('[AuthProxy] Refresh request payload:', JSON.stringify(payload, null, 2)); + + // Check if device info is present - it's required for refresh + if (!payload.deviceInfo) { + console.error('[AuthProxy] Error: No device info in refresh request'); + throw new HttpException( + { + error: 'Bad Request', + message: 'Device info is required for token refresh', + statusCode: 400, + }, + HttpStatus.BAD_REQUEST + ); + } + + // Ensure the payload has the correct structure + const refreshPayload = { + refreshToken: payload.refreshToken, + appId: payload.appId || this.memoroAppId, + deviceInfo: payload.deviceInfo, + }; + + // Validate required fields + if (!refreshPayload.refreshToken) { + throw new HttpException( + { error: 'Bad Request', message: 'Refresh token is required', statusCode: 400 }, + HttpStatus.BAD_REQUEST + ); + } + + console.log('[AuthProxy] Device info included in refresh request'); + + return this.proxyPost('/auth/refresh', refreshPayload); + } + + async logout(payload: any) { + return this.proxyPost('/auth/logout', payload); + } + + async forgotPassword(payload: any) { + return this.proxyPost('/auth/forgot-password', payload); + } + + async validate(payload: any) { + return this.proxyPost('/auth/validate', payload); + } + + async getCredits(authHeader: string) { + return this.proxyGet('/auth/credits', { + Authorization: authHeader, + }); + } + + async 
getDevices(authHeader: string) { + return this.proxyGet('/auth/devices', { + Authorization: authHeader, + }); + } +} diff --git a/apps/memoro/apps/backend/src/auth-proxy/interfaces/branding.interface.ts b/apps/memoro/apps/backend/src/auth-proxy/interfaces/branding.interface.ts new file mode 100644 index 000000000..2cf331f60 --- /dev/null +++ b/apps/memoro/apps/backend/src/auth-proxy/interfaces/branding.interface.ts @@ -0,0 +1,34 @@ +/** + * Feature object structure for branding emails + */ +export interface BrandingFeature { + icon: string; // Emoji icon + titleDe: string; // German title + titleEn: string; // English title + descriptionDe: string; // German description + descriptionEn: string; // English description +} + +/** + * Email branding configuration for signup confirmation emails + * All fields are optional and will fall back to app-branding.config.ts defaults + */ +export interface BrandingConfig { + appName?: string; // App display name + logoUrl?: string; // Logo filename or URL + primaryColor?: string; // Primary brand color (hex) + secondaryColor?: string; // Secondary color (hex) + websiteUrl?: string; // Website URL + taglineDe?: string; // German tagline + taglineEn?: string; // English tagline + features?: BrandingFeature[]; // Feature list + copyright?: string; // Footer copyright text +} + +/** + * Metadata object that can be passed in signup requests + */ +export interface SignupMetadata { + branding?: BrandingConfig; + [key: string]: any; // Allow custom fields for email personalization +} diff --git a/apps/memoro/apps/backend/src/auth/auth-client.service.spec.ts b/apps/memoro/apps/backend/src/auth/auth-client.service.spec.ts new file mode 100644 index 000000000..1a74157eb --- /dev/null +++ b/apps/memoro/apps/backend/src/auth/auth-client.service.spec.ts @@ -0,0 +1,324 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { 
AuthClientService } from './auth-client.service'; +import { UnauthorizedException } from '@nestjs/common'; +import { of, throwError } from 'rxjs'; +import { AxiosResponse, AxiosError } from 'axios'; + +describe('AuthClientService', () => { + let service: AuthClientService; + let httpService: jest.Mocked<HttpService>; + let configService: jest.Mocked<ConfigService>; + + const mockHttpService = { + post: jest.fn(), + }; + + const mockConfigService = { + get: jest.fn(), + }; + + const authServiceUrl = 'http://localhost:3000'; + const memoroAppId = 'test-app-id'; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + providers: [ + AuthClientService, + { + provide: HttpService, + useValue: mockHttpService, + }, + { + provide: ConfigService, + useValue: mockConfigService, + }, + ], + }).compile(); + + service = module.get(AuthClientService); + httpService = module.get(HttpService); + configService = module.get(ConfigService); + + // Clear and reset all mocks first + mockConfigService.get.mockClear(); + mockHttpService.post.mockClear(); + + // Setup default config values + mockConfigService.get.mockImplementation((key: string, defaultValue?: any) => { + switch (key) { + case 'MANA_SERVICE_URL': + return authServiceUrl; + case 'MEMORO_APP_ID': + return memoroAppId; + default: + return defaultValue; + } + }); + + // Reset console.log mock + jest.spyOn(console, 'log').mockImplementation(() => {}); + }); + + afterEach(() => { + jest.clearAllMocks(); + jest.restoreAllMocks(); + }); + + describe('constructor', () => { + it('should initialize with config values', () => { + expect(service).toBeDefined(); + expect(configService.get).toHaveBeenCalledWith('MANA_SERVICE_URL', 'http://localhost:3000'); + expect(configService.get).toHaveBeenCalledWith( + 'MEMORO_APP_ID', + '973da0c1-b479-4dac-a1b0-ed09c72caca8' + ); + }); + + it('should use default values when config not provided', async () => { + mockConfigService.get.mockReturnValue(undefined); + + const module: 
TestingModule = await Test.createTestingModule({ + providers: [ + AuthClientService, + { + provide: HttpService, + useValue: mockHttpService, + }, + { + provide: ConfigService, + useValue: mockConfigService, + }, + ], + }).compile(); + + const serviceWithDefaults = module.get(AuthClientService); + expect(serviceWithDefaults).toBeDefined(); + }); + }); + + describe('validateToken', () => { + it('should validate token successfully', async () => { + const token = 'valid-token'; + const expectedUser = { + id: 'user-123', + email: 'test@test.com', + role: 'user', + }; + const axiosResponse: AxiosResponse = { + data: { + valid: true, + user: expectedUser, + }, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.validateToken(token); + + expect(console.log).toHaveBeenCalledWith( + 'Calling: ', + `${authServiceUrl}/auth/validate?appId=${memoroAppId}` + ); + expect(console.log).toHaveBeenCalledWith('Memoro App ID: ', memoroAppId); + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/validate?appId=${memoroAppId}`, + { appToken: token }, + { + headers: { + 'Content-Type': 'application/json', + }, + } + ); + expect(result).toEqual(expectedUser); + }); + + it('should throw UnauthorizedException for invalid token', async () => { + const token = 'invalid-token'; + const axiosError: AxiosError = { + response: { + data: { message: 'Invalid token' }, + status: 401, + statusText: 'Unauthorized', + headers: {}, + config: {} as any, + }, + config: {} as any, + isAxiosError: true, + toJSON: () => ({}), + name: 'AxiosError', + message: 'Request failed', + }; + + mockHttpService.post.mockReturnValue(throwError(() => axiosError)); + + await expect(service.validateToken(token)).rejects.toThrow( + new UnauthorizedException('Invalid token') + ); + }); + + it('should throw UnauthorizedException when response is not valid', async () => { + const token = 
'token'; + const axiosResponse: AxiosResponse = { + data: { + valid: false, + }, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + await expect(service.validateToken(token)).rejects.toThrow( + new UnauthorizedException('Invalid token') + ); + }); + + it('should throw UnauthorizedException when user is missing', async () => { + const token = 'token'; + const axiosResponse: AxiosResponse = { + data: { + valid: true, + user: null, + }, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + await expect(service.validateToken(token)).rejects.toThrow( + new UnauthorizedException('Invalid token') + ); + }); + + it('should handle network errors', async () => { + const token = 'token'; + const error = new Error('Network error'); + + mockHttpService.post.mockReturnValue(throwError(() => error)); + + await expect(service.validateToken(token)).rejects.toThrow( + new UnauthorizedException('Invalid token') + ); + }); + + it('should handle unexpected errors', async () => { + const token = 'token'; + + mockHttpService.post.mockImplementation(() => { + throw new Error('Unexpected error'); + }); + + await expect(service.validateToken(token)).rejects.toThrow( + new UnauthorizedException('Invalid token') + ); + }); + }); + + describe('refreshToken', () => { + it('should refresh token successfully', async () => { + const refreshToken = 'valid-refresh-token'; + const expectedResponse = { + appToken: 'new-app-token', + refreshToken: 'new-refresh-token', + }; + const axiosResponse: AxiosResponse = { + data: expectedResponse, + status: 200, + statusText: 'OK', + headers: {}, + config: {} as any, + }; + + mockHttpService.post.mockReturnValue(of(axiosResponse)); + + const result = await service.refreshToken(refreshToken); + + expect(httpService.post).toHaveBeenCalledWith( + `${authServiceUrl}/auth/refresh`, + { 
refreshToken, appId: memoroAppId }, + { + headers: { + 'Content-Type': 'application/json', + }, + } + ); + expect(result).toEqual(expectedResponse); + }); + + it('should throw UnauthorizedException for invalid refresh token', async () => { + const refreshToken = 'invalid-refresh-token'; + const axiosError: AxiosError = { + response: { + data: { message: 'Invalid refresh token' }, + status: 401, + statusText: 'Unauthorized', + headers: {}, + config: {} as any, + }, + config: {} as any, + isAxiosError: true, + toJSON: () => ({}), + name: 'AxiosError', + message: 'Request failed', + }; + + mockHttpService.post.mockReturnValue(throwError(() => axiosError)); + + await expect(service.refreshToken(refreshToken)).rejects.toThrow( + new UnauthorizedException('Invalid refresh token') + ); + }); + + it('should handle network errors during refresh', async () => { + const refreshToken = 'refresh-token'; + const error = new Error('Network error'); + + mockHttpService.post.mockReturnValue(throwError(() => error)); + + await expect(service.refreshToken(refreshToken)).rejects.toThrow( + new UnauthorizedException('Invalid refresh token') + ); + }); + + it('should handle unexpected errors during refresh', async () => { + const refreshToken = 'refresh-token'; + + mockHttpService.post.mockImplementation(() => { + throw new Error('Unexpected error'); + }); + + await expect(service.refreshToken(refreshToken)).rejects.toThrow( + new UnauthorizedException('Invalid refresh token') + ); + }); + + it('should handle timeout errors', async () => { + const refreshToken = 'refresh-token'; + const axiosError: AxiosError = { + code: 'ECONNABORTED', + config: {} as any, + isAxiosError: true, + toJSON: () => ({}), + name: 'AxiosError', + message: 'Timeout', + }; + + mockHttpService.post.mockReturnValue(throwError(() => axiosError)); + + await expect(service.refreshToken(refreshToken)).rejects.toThrow( + new UnauthorizedException('Invalid refresh token') + ); + }); + }); +}); diff --git 
a/apps/memoro/apps/backend/src/auth/auth-client.service.ts b/apps/memoro/apps/backend/src/auth/auth-client.service.ts new file mode 100644 index 000000000..e91b41170 --- /dev/null +++ b/apps/memoro/apps/backend/src/auth/auth-client.service.ts @@ -0,0 +1,92 @@ +import { Injectable, UnauthorizedException } from '@nestjs/common'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { Observable, catchError, firstValueFrom, map } from 'rxjs'; +import { AxiosError } from 'axios'; +import { JwtPayload } from '../types/jwt-payload.interface'; + +@Injectable() +export class AuthClientService { + private authServiceUrl: string; + private memoroAppId: string; + + constructor( + private httpService: HttpService, + private configService: ConfigService + ) { + this.authServiceUrl = this.configService.get( + 'MANA_SERVICE_URL', + 'http://localhost:3000' + ); + this.memoroAppId = this.configService.get( + 'MEMORO_APP_ID', + '973da0c1-b479-4dac-a1b0-ed09c72caca8' + ); + } + + /** + * Validates a JWT token by calling the Auth service + */ + async validateToken(token: string): Promise<JwtPayload> { + try { + console.log('Calling: ', `${this.authServiceUrl}/auth/validate?appId=${this.memoroAppId}`); + console.log('Memoro App ID: ', this.memoroAppId); + const response = await firstValueFrom( + this.httpService + .post( + `${this.authServiceUrl}/auth/validate?appId=${this.memoroAppId}`, + { appToken: token }, + { + headers: { + 'Content-Type': 'application/json', + }, + } + ) + .pipe( + map((response) => response.data), + catchError((error: AxiosError) => { + throw new UnauthorizedException('Invalid token'); + }) + ) + ); + + if (response.valid && response.user) { + return response.user; + } else { + throw new UnauthorizedException('Invalid token response format'); + } + } catch (error) { + throw new UnauthorizedException('Invalid token'); + } + } + + /** + * Refreshes a token by calling the Auth service + */ + async refreshToken(refreshToken: 
string): Promise<{ appToken: string; refreshToken: string }> { + try { + const response = await firstValueFrom( + this.httpService + .post( + `${this.authServiceUrl}/auth/refresh`, + { refreshToken, appId: this.memoroAppId }, + { + headers: { + 'Content-Type': 'application/json', + }, + } + ) + .pipe( + map((response) => response.data), + catchError((error: AxiosError) => { + throw new UnauthorizedException('Invalid refresh token'); + }) + ) + ); + + return response; + } catch (error) { + throw new UnauthorizedException('Invalid refresh token'); + } + } +} diff --git a/apps/memoro/apps/backend/src/auth/auth.module.ts b/apps/memoro/apps/backend/src/auth/auth.module.ts new file mode 100644 index 000000000..8c9e8c96f --- /dev/null +++ b/apps/memoro/apps/backend/src/auth/auth.module.ts @@ -0,0 +1,11 @@ +import { Module } from '@nestjs/common'; +import { HttpModule } from '@nestjs/axios'; +import { ConfigModule } from '@nestjs/config'; +import { AuthClientService } from './auth-client.service'; + +@Module({ + imports: [HttpModule, ConfigModule], + providers: [AuthClientService], + exports: [AuthClientService], +}) +export class AuthModule {} diff --git a/apps/memoro/apps/backend/src/cleanup/audio-cleanup.controller.ts b/apps/memoro/apps/backend/src/cleanup/audio-cleanup.controller.ts new file mode 100644 index 000000000..6b4b1c35e --- /dev/null +++ b/apps/memoro/apps/backend/src/cleanup/audio-cleanup.controller.ts @@ -0,0 +1,65 @@ +import { Controller, Post, Body, UseGuards, Logger, HttpCode, HttpStatus } from '@nestjs/common'; +import { AudioCleanupService } from './audio-cleanup.service'; +import { InternalServiceGuard } from '../guards/internal-service.guard'; +import { CleanupResult } from './interfaces/cleanup.interfaces'; + +/** + * Controller for audio cleanup operations. + * Protected by InternalServiceGuard - only accessible via internal API key. 
+ */ +@Controller('cleanup') +export class AudioCleanupController { + private readonly logger = new Logger(AudioCleanupController.name); + + constructor(private readonly audioCleanupService: AudioCleanupService) {} + + /** + * Trigger the full cleanup job. + * Called by pg_cron or manually for testing. + * Fetches users with cleanup enabled and processes their old audio files. + */ + @Post('trigger-from-cron') + @UseGuards(InternalServiceGuard) + @HttpCode(HttpStatus.OK) + async triggerFromCron(): Promise<CleanupResult> { + this.logger.log('Cleanup triggered from cron job'); + return this.audioCleanupService.runCleanup(); + } + + /** + * Process cleanup for specific user IDs. + * Used when the caller already knows which users to process. + */ + @Post('process-old-audios') + @UseGuards(InternalServiceGuard) + @HttpCode(HttpStatus.OK) + async processOldAudios(@Body() body: { userIds: string[] }): Promise<CleanupResult> { + this.logger.log(`Processing cleanup for ${body.userIds?.length || 0} users`); + + if (!body.userIds || body.userIds.length === 0) { + return { + success: true, + usersProcessed: 0, + filesDeleted: 0, + filesFailed: 0, + errors: [], + startedAt: new Date().toISOString(), + completedAt: new Date().toISOString(), + }; + } + + return this.audioCleanupService.deleteOldAudiosForUsers(body.userIds); + } + + /** + * Manual trigger for testing/admin purposes. + * Same as trigger-from-cron but with a different endpoint name for clarity.
+ */ + @Post('trigger-manual') + @UseGuards(InternalServiceGuard) + @HttpCode(HttpStatus.OK) + async triggerManual(): Promise<CleanupResult> { + this.logger.log('Cleanup triggered manually'); + return this.audioCleanupService.runCleanup(); + } +} diff --git a/apps/memoro/apps/backend/src/cleanup/audio-cleanup.service.ts b/apps/memoro/apps/backend/src/cleanup/audio-cleanup.service.ts new file mode 100644 index 000000000..08e53d9e3 --- /dev/null +++ b/apps/memoro/apps/backend/src/cleanup/audio-cleanup.service.ts @@ -0,0 +1,395 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient, SupabaseClient } from '@supabase/supabase-js'; +import { + CleanupResult, + CleanupError, + UserCleanupEnabledResponse, +} from './interfaces/cleanup.interfaces'; + +interface StorageObject { + id: string; + name: string; + created_at: string; + bucket_id: string; +} + +@Injectable() +export class AudioCleanupService { + private readonly logger = new Logger(AudioCleanupService.name); + private readonly memoroServiceClient: SupabaseClient; + private readonly memoroUrl: string; + private readonly manaCoreMiddlewareUrl: string; + private readonly internalApiKey: string; + private readonly STORAGE_BUCKET = 'user-uploads'; + private readonly RETENTION_DAYS = 30; + private readonly BATCH_SIZE = 100; // Files per deletion batch + private readonly BATCH_DELAY_MS = 200; // Delay between batches + + constructor(private configService: ConfigService) { + this.memoroUrl = this.configService.get('MEMORO_SUPABASE_URL'); + const memoroServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + this.manaCoreMiddlewareUrl = this.configService.get('MANA_SERVICE_URL'); + this.internalApiKey = this.configService.get('INTERNAL_API_KEY'); + + if (!this.memoroUrl || !memoroServiceKey) { + throw new Error('MEMORO_SUPABASE_URL or MEMORO_SUPABASE_SERVICE_KEY not provided'); + } + + this.memoroServiceClient = createClient(this.memoroUrl,
memoroServiceKey); + } + + /** + * Main entry point for the cleanup job. + * Uses direct SQL on storage.objects table for efficient file discovery. + */ + async runCleanup(): Promise<CleanupResult> { + const startedAt = new Date().toISOString(); + const errors: CleanupError[] = []; + let usersProcessed = 0; + let totalFilesDeleted = 0; + let totalFilesFailed = 0; + + this.logger.log('Starting audio cleanup job (SQL-based)'); + + try { + // Step 1: Get users with auto-delete enabled from mana-core-middleware + const userIds = await this.getUsersWithCleanupEnabled(); + this.logger.log(`Found ${userIds.length} users with audio cleanup enabled`); + + if (userIds.length === 0) { + return { + success: true, + usersProcessed: 0, + filesDeleted: 0, + filesFailed: 0, + errors: [], + startedAt, + completedAt: new Date().toISOString(), + }; + } + + // Step 2: Process each user using SQL-based cleanup + for (const userId of userIds) { + try { + const result = await this.processUserCleanupSQL(userId); + usersProcessed++; + totalFilesDeleted += result.filesDeleted; + totalFilesFailed += result.filesFailed; + errors.push(...result.errors); + } catch (error) { + this.logger.error(`Failed to process cleanup for user ${userId}:`, error); + errors.push({ + userId, + error: error.message || 'Unknown error processing user cleanup', + }); + } + } + + // Step 3: Log the cleanup run + await this.logCleanupRun({ + usersProcessed, + filesDeleted: totalFilesDeleted, + filesFailed: totalFilesFailed, + errors, + startedAt, + }); + + return { + success: true, + usersProcessed, + filesDeleted: totalFilesDeleted, + filesFailed: totalFilesFailed, + errors, + startedAt, + completedAt: new Date().toISOString(), + }; + } catch (error) { + this.logger.error('Audio cleanup job failed:', error); + return { + success: false, + usersProcessed, + filesDeleted: totalFilesDeleted, + filesFailed: totalFilesFailed, + errors: [...errors, { error: error.message || 'Unknown error' }], + startedAt, + completedAt: new
Date().toISOString(), + }; + } + } + + /** + * Process cleanup for a specific list of user IDs. + */ + async deleteOldAudiosForUsers(userIds: string[]): Promise<CleanupResult> { + const startedAt = new Date().toISOString(); + const errors: CleanupError[] = []; + let usersProcessed = 0; + let totalFilesDeleted = 0; + let totalFilesFailed = 0; + + this.logger.log(`Processing cleanup for ${userIds.length} users`); + + for (const userId of userIds) { + try { + const result = await this.processUserCleanupSQL(userId); + usersProcessed++; + totalFilesDeleted += result.filesDeleted; + totalFilesFailed += result.filesFailed; + errors.push(...result.errors); + } catch (error) { + this.logger.error(`Failed to process cleanup for user ${userId}:`, error); + errors.push({ + userId, + error: error.message || 'Unknown error processing user cleanup', + }); + } + } + + return { + success: errors.length === 0, + usersProcessed, + filesDeleted: totalFilesDeleted, + filesFailed: totalFilesFailed, + errors, + startedAt, + completedAt: new Date().toISOString(), + }; + } + + /** + * Process cleanup for a single user using direct SQL on storage.objects table. + * Queries files older than retention period and deletes them in batches.
+ */ + private async processUserCleanupSQL(userId: string): Promise<{ + filesDeleted: number; + filesFailed: number; + errors: CleanupError[]; + }> { + const errors: CleanupError[] = []; + let filesDeleted = 0; + let filesFailed = 0; + + // Query storage.objects directly via the get_old_storage_files function + const { data: oldFiles, error: queryError } = await this.memoroServiceClient.rpc( + 'get_old_storage_files', + { + p_bucket_id: this.STORAGE_BUCKET, + p_user_id: userId, + p_retention_days: this.RETENTION_DAYS, + } + ); + + if (queryError) { + this.logger.error(`Failed to query old files for user ${userId}:`, queryError); + throw new Error(`Query error: ${queryError.message}`); + } + + if (!oldFiles || oldFiles.length === 0) { + this.logger.log(`No old files found for user ${userId}`); + return { filesDeleted: 0, filesFailed: 0, errors: [] }; + } + + this.logger.log(`Found ${oldFiles.length} old files for user ${userId}`); + + // Extract unique memoIds from file paths (format: userId/memoId/filename) + // Only include valid UUIDs (skip folders like "migration-reports") + const UUID_REGEX = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i; + const memoIds = new Set<string>(); + for (const file of oldFiles) { + const parts = file.name.split('/'); + if (parts.length >= 2 && UUID_REGEX.test(parts[1])) { + memoIds.add(parts[1]); // memoId is the second part + } + } + + // Delete files in batches + const filePaths = oldFiles.map((f: StorageObject) => f.name); + const result = await this.deleteFilesInBatches(filePaths, userId); + + filesDeleted = result.deleted; + filesFailed = result.failed; + errors.push(...result.errors); + + // Mark memos as audio deleted (only if files were actually deleted) + if (filesDeleted > 0 && memoIds.size > 0) { + await this.markMemosAsAudioDeleted(Array.from(memoIds), userId); + } + + this.logger.log(`User ${userId}: deleted ${filesDeleted} files, failed ${filesFailed}`); + return { filesDeleted, filesFailed, errors }; + }
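The batched deletion that `processUserCleanupSQL` delegates to follows a common throttled-batch pattern: walk the list in fixed-size chunks and pause between chunks to stay under storage-API rate limits. A minimal standalone sketch of that pattern, using the service's values of 100 items per batch and a 200 ms pause (the generic `processInBatches` helper and its callback signature are illustrative, not part of this patch):

```typescript
// Illustrative values mirroring AudioCleanupService.BATCH_SIZE / BATCH_DELAY_MS.
const BATCH_SIZE = 100;
const BATCH_DELAY_MS = 200;

// Same delay helper shape as the service uses.
const delay = (ms: number): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Process `items` in fixed-size chunks, pausing between chunks
// (but not after the last one). Returns the number of items handled.
async function processInBatches<T>(
  items: T[],
  handler: (batch: T[]) => Promise<void>
): Promise<number> {
  let processed = 0;
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);
    await handler(batch); // e.g. storage.from(bucket).remove(batch)
    processed += batch.length;
    if (i + BATCH_SIZE < items.length) {
      await delay(BATCH_DELAY_MS);
    }
  }
  return processed;
}
```

For 250 file paths this yields batches of 100, 100, and 50, with two pauses in between; skipping the pause after the final batch avoids adding dead time to every run.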
+ /** + * Mark memos as having their audio deleted. + * Updates source.audio_deleted and source.audio_deleted_at fields. + */ + private async markMemosAsAudioDeleted(memoIds: string[], userId: string): Promise<void> { + const deletedAt = new Date().toISOString(); + + for (const memoId of memoIds) { + try { + // First get the current source to merge with + const { data: memo, error: fetchError } = await this.memoroServiceClient + .from('memos') + .select('source') + .eq('id', memoId) + .eq('user_id', userId) + .maybeSingle(); + + if (fetchError) { + this.logger.warn(`Error fetching memo ${memoId}:`, fetchError); + continue; + } + + if (!memo) { + // Memo doesn't exist - this is fine, just skip it + this.logger.log(`Memo ${memoId} not found, skipping source update`); + continue; + } + + // Update source with audio_deleted flag and clear the path + const updatedSource = { + ...memo.source, + audio_path: null, + audio_deleted: true, + audio_deleted_at: deletedAt, + }; + + const { error: updateError } = await this.memoroServiceClient + .from('memos') + .update({ source: updatedSource }) + .eq('id', memoId) + .eq('user_id', userId); + + if (updateError) { + this.logger.warn(`Failed to mark memo ${memoId} as audio deleted:`, updateError); + } else { + this.logger.log(`Marked memo ${memoId} as audio deleted`); + } + } catch (error) { + this.logger.warn(`Error marking memo ${memoId} as audio deleted:`, error); + } + } + } + + /** + * Delete files in batches to avoid rate limits and timeout issues.
+ */ + private async deleteFilesInBatches( + filePaths: string[], + userId: string + ): Promise<{ deleted: number; failed: number; errors: CleanupError[] }> { + const errors: CleanupError[] = []; + let deleted = 0; + let failed = 0; + + // Process in batches + for (let i = 0; i < filePaths.length; i += this.BATCH_SIZE) { + const batch = filePaths.slice(i, i + this.BATCH_SIZE); + + try { + const { error: deleteError } = await this.memoroServiceClient.storage + .from(this.STORAGE_BUCKET) + .remove(batch); + + if (deleteError) { + this.logger.error(`Batch delete failed:`, deleteError); + failed += batch.length; + errors.push({ + userId, + error: `Batch delete failed: ${deleteError.message}`, + }); + } else { + deleted += batch.length; + this.logger.log( + `Deleted batch of ${batch.length} files (${i + batch.length}/${filePaths.length})` + ); + } + } catch (error) { + this.logger.error(`Batch delete error:`, error); + failed += batch.length; + errors.push({ + userId, + error: error.message || 'Unknown batch delete error', + }); + } + + // Delay between batches + if (i + this.BATCH_SIZE < filePaths.length) { + await this.delay(this.BATCH_DELAY_MS); + } + } + + return { deleted, failed, errors }; + } + + /** + * Get users with audio auto-delete enabled from mana-core-middleware. 
+ */ + private async getUsersWithCleanupEnabled(): Promise<string[]> { + if (!this.manaCoreMiddlewareUrl || !this.internalApiKey) { + this.logger.warn('MANA_SERVICE_URL or INTERNAL_API_KEY not configured'); + return []; + } + + try { + const response = await fetch( + `${this.manaCoreMiddlewareUrl}/internal/users/audio-cleanup-enabled`, + { + method: 'GET', + headers: { + 'X-Internal-API-Key': this.internalApiKey, + 'Content-Type': 'application/json', + }, + } + ); + + if (!response.ok) { + throw new Error(`Failed to fetch users: ${response.status} ${response.statusText}`); + } + + const data: UserCleanupEnabledResponse = await response.json(); + return data.userIds || []; + } catch (error) { + this.logger.error('Failed to get users with cleanup enabled:', error); + throw error; + } + } + + /** + * Delay helper to avoid rate limits. + */ + private delay(ms: number): Promise<void> { + return new Promise<void>((resolve) => setTimeout(resolve, ms)); + } + + /** + * Log cleanup run to the database for monitoring. + */ + private async logCleanupRun(data: { + usersProcessed: number; + filesDeleted: number; + filesFailed: number; + errors: CleanupError[]; + startedAt: string; + }): Promise<void> { + try { + const { error } = await this.memoroServiceClient.from('audio_cleanup_logs').insert({ + started_at: data.startedAt, + completed_at: new Date().toISOString(), + status: data.errors.length === 0 ? 'completed' : 'completed_with_errors', + users_processed: data.usersProcessed, + files_deleted: data.filesDeleted, + files_failed: data.filesFailed, + error_details: data.errors.length > 0 ?
data.errors : null, + }); + + if (error) { + this.logger.warn('Failed to log cleanup run:', error); + } + } catch (error) { + this.logger.warn('Failed to log cleanup run:', error); + } + } +} diff --git a/apps/memoro/apps/backend/src/cleanup/cleanup.module.ts b/apps/memoro/apps/backend/src/cleanup/cleanup.module.ts new file mode 100644 index 000000000..b46ab6443 --- /dev/null +++ b/apps/memoro/apps/backend/src/cleanup/cleanup.module.ts @@ -0,0 +1,12 @@ +import { Module } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { AudioCleanupService } from './audio-cleanup.service'; +import { AudioCleanupController } from './audio-cleanup.controller'; + +@Module({ + imports: [ConfigModule], + controllers: [AudioCleanupController], + providers: [AudioCleanupService], + exports: [AudioCleanupService], +}) +export class CleanupModule {} diff --git a/apps/memoro/apps/backend/src/cleanup/interfaces/cleanup.interfaces.ts b/apps/memoro/apps/backend/src/cleanup/interfaces/cleanup.interfaces.ts new file mode 100644 index 000000000..ae8ea62de --- /dev/null +++ b/apps/memoro/apps/backend/src/cleanup/interfaces/cleanup.interfaces.ts @@ -0,0 +1,20 @@ +export interface CleanupResult { + success: boolean; + usersProcessed: number; + filesDeleted: number; + filesFailed: number; + errors: CleanupError[]; + startedAt: string; + completedAt: string; +} + +export interface CleanupError { + userId?: string; + memoId?: string; + filePath?: string; + error: string; +} + +export interface UserCleanupEnabledResponse { + userIds: string[]; +} diff --git a/apps/memoro/apps/backend/src/credits/credit-client.service.ts b/apps/memoro/apps/backend/src/credits/credit-client.service.ts new file mode 100644 index 000000000..9c4dbf64e --- /dev/null +++ b/apps/memoro/apps/backend/src/credits/credit-client.service.ts @@ -0,0 +1,279 @@ +import { Injectable, BadRequestException, ForbiddenException } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { 
InsufficientCreditsException } from '../errors/insufficient-credits.error'; + +export interface CreditCheckResponse { + hasEnoughCredits: boolean; + currentCredits: number; + requiredCredits: number; + creditType: 'user' | 'space'; +} + +export interface CreditConsumptionResponse { + success: boolean; + message: string; + remainingCredits?: number; +} + +@Injectable() +export class CreditClientService { + private readonly manaServiceUrl: string; + + constructor(private configService: ConfigService) { + this.manaServiceUrl = this.configService.get( + 'MANA_SERVICE_URL', + 'http://localhost:3000' + ); + } + + /** + * Check if user has enough personal credits + */ + async checkUserCredits( + userId: string, + requiredCredits: number, + token: string + ): Promise<CreditCheckResponse> { + try { + const response = await fetch(`${this.manaServiceUrl}/users/credits`, { + method: 'GET', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + throw new BadRequestException( + `Failed to check user credits: ${errorData.message || response.statusText}` + ); + } + + const data = await response.json(); + const currentCredits = data.credits || 0; + + return { + hasEnoughCredits: currentCredits >= requiredCredits, + currentCredits, + requiredCredits, + creditType: 'user', + }; + } catch (error) { + console.error('Error checking user credits:', error); + throw error; + } + } + + /** + * Check if space has enough credits + */ + async checkSpaceCredits( + spaceId: string, + requiredCredits: number, + token: string + ): Promise<CreditCheckResponse> { + try { + const response = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits`, { + method: 'GET', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + throw new BadRequestException( + `Failed to check space
credits: ${errorData.message || response.statusText}` + ); + } + + const data = await response.json(); + const currentCredits = data.space?.credits || data.creditSummary?.current_balance || 0; + + return { + hasEnoughCredits: currentCredits >= requiredCredits, + currentCredits, + requiredCredits, + creditType: 'space', + }; + } catch (error) { + console.error('Error checking space credits:', error); + throw error; + } + } + + /** + * Consume credits from user's personal balance + */ + async consumeUserCredits( + userId: string, + amount: number, + token: string, + description?: string + ): Promise<CreditConsumptionResponse> { + try { + const response = await fetch(`${this.manaServiceUrl}/users/credits/consume`, { + method: 'POST', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + amount, + description: description || `Credit consumption for operation`, + }), + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + + if (response.status === 400 && errorData.message?.includes('insufficient')) { + throw new InsufficientCreditsException({ + requiredCredits: amount, + availableCredits: 0, // We don't know the exact amount from this error + creditType: 'user', + operation: 'credit_consumption', + }); + } + + throw new BadRequestException( + `Failed to consume user credits: ${errorData.message || response.statusText}` + ); + } + + const data = await response.json(); + return { + success: true, + message: data.message || 'Credits consumed successfully', + }; + } catch (error) { + console.error('Error consuming user credits:', error); + throw error; + } + } + + /** + * Consume credits from space balance + */ + async consumeSpaceCredits( + spaceId: string, + amount: number, + token: string, + description?: string + ): Promise<CreditConsumptionResponse> { + try { + const response = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits/consume`, { + method: 'POST', + headers: { + Authorization: `Bearer ${token}`, +
'Content-Type': 'application/json', + }, + body: JSON.stringify({ + amount, + description: description || `Credit consumption for operation`, + }), + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + + if (response.status === 400 && errorData.message?.includes('insufficient')) { + throw new InsufficientCreditsException({ + requiredCredits: amount, + availableCredits: 0, // We don't know the exact amount from this error + creditType: 'space', + operation: 'credit_consumption', + }); + } + + throw new BadRequestException( + `Failed to consume space credits: ${errorData.message || response.statusText}` + ); + } + + const data = await response.json(); + return { + success: true, + message: data.message || 'Credits consumed successfully', + }; + } catch (error) { + console.error('Error consuming space credits:', error); + throw error; + } + } + + /** + * Check and consume credits based on operation context + * If spaceId is provided, check space credits first, fall back to user credits + * If no spaceId, use user credits only + */ + async checkAndConsumeCredits( + userId: string, + requiredCredits: number, + token: string, + options: { + spaceId?: string; + description?: string; + operation: string; + } + ): Promise<{ consumed: boolean; creditType: 'user' | 'space'; message: string }> { + const { spaceId, description, operation } = options; + + try { + // If spaceId provided, try space credits first + if (spaceId) { + try { + const spaceCheck = await this.checkSpaceCredits(spaceId, requiredCredits, token); + + if (spaceCheck.hasEnoughCredits) { + await this.consumeSpaceCredits( + spaceId, + requiredCredits, + token, + description || `${operation} operation` + ); + return { + consumed: true, + creditType: 'space', + message: `Consumed ${requiredCredits} credits from space balance`, + }; + } + } catch (spaceError) { + console.warn( + `Space credit check failed, falling back to user credits: ${spaceError.message}` + ); + } + } + + 
// Use user credits (either as fallback or primary) + const userCheck = await this.checkUserCredits(userId, requiredCredits, token); + + if (!userCheck.hasEnoughCredits) { + throw new InsufficientCreditsException({ + requiredCredits, + availableCredits: userCheck.currentCredits, + creditType: userCheck.creditType, + operation: options.operation, + spaceId: options.spaceId, + }); + } + + await this.consumeUserCredits( + userId, + requiredCredits, + token, + description || `${operation} operation` + ); + return { + consumed: true, + creditType: 'user', + message: `Consumed ${requiredCredits} credits from user balance`, + }; + } catch (error) { + console.error(`Credit check and consumption failed for ${operation}:`, error); + throw error; + } + } +} diff --git a/apps/memoro/apps/backend/src/credits/credit-consumption.service.spec.ts b/apps/memoro/apps/backend/src/credits/credit-consumption.service.spec.ts new file mode 100644 index 000000000..087d0e269 --- /dev/null +++ b/apps/memoro/apps/backend/src/credits/credit-consumption.service.spec.ts @@ -0,0 +1,532 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { ConfigService } from '@nestjs/config'; +import { BadRequestException, ForbiddenException } from '@nestjs/common'; +import { CreditConsumptionService, CreditConsumptionResult } from './credit-consumption.service'; +import * as jwt from 'jsonwebtoken'; + +jest.mock('jsonwebtoken'); +global.fetch = jest.fn(); + +describe('CreditConsumptionService', () => { + let service: CreditConsumptionService; + let configService: jest.Mocked<ConfigService>; + + const mockUserId = 'user-123'; + const mockSpaceId = 'space-123'; + const mockUserToken = 'user-jwt-token'; + const mockServiceToken = 'service-jwt-token'; + const mockJwtSecret = 'test-secret'; + const mockManaServiceUrl = 'https://mana-service.example.com'; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + providers: [ + CreditConsumptionService, + { + provide:
ConfigService, + useValue: { + get: jest.fn((key: string) => { + const config: Record<string, string> = { + MANA_SERVICE_URL: mockManaServiceUrl, + MANA_JWT_SECRET: mockJwtSecret, + MEMORO_APP_ID: 'test-app-id', + }; + return config[key]; + }), + }, + }, + ], + }).compile(); + + service = module.get(CreditConsumptionService); + configService = module.get(ConfigService); + + // Clear mocks + (global.fetch as jest.Mock).mockClear(); + (jwt.sign as jest.Mock).mockClear(); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + it('should be defined', () => { + expect(service).toBeDefined(); + }); + + describe('getServiceRoleToken', () => { + it('should generate and cache a service role token', async () => { + const mockToken = 'generated-service-token'; + (jwt.sign as jest.Mock).mockReturnValue(mockToken); + + // Access private method through any type casting + const token = await (service as any).getServiceRoleToken(); + + expect(token).toBe(mockToken); + expect(jwt.sign).toHaveBeenCalledWith( + expect.objectContaining({ + sub: 'memoro-service', + role: 'platform_admin', + app_id: 'test-app-id', + service: 'memoro-service', + }), + mockJwtSecret + ); + }); + + it('should reuse cached token if still valid', async () => { + const mockToken = 'cached-service-token'; + (jwt.sign as jest.Mock).mockReturnValue(mockToken); + + // First call - generates new token + const token1 = await (service as any).getServiceRoleToken(); + expect(jwt.sign).toHaveBeenCalledTimes(1); + + // Second call - should use cached token + const token2 = await (service as any).getServiceRoleToken(); + expect(token1).toBe(token2); + expect(jwt.sign).toHaveBeenCalledTimes(1); // Still only called once + }); + + it('should throw error if JWT secret is not configured', async () => { + configService.get.mockImplementation((key: string) => { + if (key === 'MANA_JWT_SECRET') return undefined; + return 'value'; + }); + + await expect((service as any).getServiceRoleToken()).rejects.toThrow( + 'Service role token
generation failed: MANA_JWT_SECRET not configured' + ); + }); + }); + + describe('consumeCreditsForOperation', () => { + beforeEach(() => { + (jwt.sign as jest.Mock).mockReturnValue(mockServiceToken); + }); + + it('should successfully consume credits for an operation', async () => { + const mockResponse: CreditConsumptionResult = { + success: true, + creditsConsumed: 10, + creditType: 'user', + remainingCredits: 90, + message: 'Credits consumed successfully', + }; + + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: true, + json: async () => mockResponse, + }); + + const result = await service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 10, + 'Test transcription', + { memoId: 'memo-123' }, + undefined, + mockUserToken + ); + + expect(result).toEqual({ + success: true, + creditsConsumed: 10, + creditType: 'user', + remainingCredits: 90, + message: 'Credits consumed successfully', + }); + + expect(global.fetch).toHaveBeenCalledWith( + `${mockManaServiceUrl}/credits/consume`, + expect.objectContaining({ + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${mockUserToken}`, + 'X-Service-Auth': 'memoro-service', + }, + }) + ); + + // Check body separately + const fetchCall = (global.fetch as jest.Mock).mock.calls[0]; + const bodyData = JSON.parse(fetchCall[1].body); + expect(bodyData).toEqual({ + userId: mockUserId, + amount: 10, + operation: 'transcription', + description: 'Test transcription', + metadata: expect.objectContaining({ + memoId: 'memo-123', + service: 'memoro-service', + timestamp: expect.any(String), + }), + spaceId: undefined, + }); + }); + + it('should consume space credits when spaceId is provided', async () => { + const mockResponse: CreditConsumptionResult = { + success: true, + creditsConsumed: 10, + creditType: 'space', + remainingCredits: 190, + message: 'Credits consumed successfully', + }; + + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: true, + json: async () => 
mockResponse, + }); + + const result = await service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 10, + 'Test transcription', + {}, + mockSpaceId, + mockUserToken + ); + + expect(result.creditType).toBe('space'); + expect(global.fetch).toHaveBeenCalledWith( + expect.any(String), + expect.objectContaining({ + body: expect.stringContaining(`"spaceId":"${mockSpaceId}"`), + }) + ); + }); + + it('should throw BadRequestException for invalid inputs', async () => { + await expect( + service.consumeCreditsForOperation( + '', + 'transcription', + 10, + 'Test', + {}, + undefined, + mockUserToken + ) + ).rejects.toThrow(BadRequestException); + + await expect( + service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 0, + 'Test', + {}, + undefined, + mockUserToken + ) + ).rejects.toThrow(BadRequestException); + + await expect( + service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 10, + 'Test', + {}, + undefined, + '' + ) + ).rejects.toThrow(BadRequestException); + }); + + it('should handle insufficient credits gracefully', async () => { + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: false, + status: 400, + statusText: 'Bad Request', + json: async () => ({ message: 'insufficient credits' }), + }); + + const result = await service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 100, + 'Test', + {}, + undefined, + mockUserToken + ); + + expect(result).toEqual({ + success: false, + creditsConsumed: 0, + creditType: 'user', + message: 'Insufficient credits. 
Required: 100', + error: 'insufficient credits', + }); + }); + + it('should handle server errors', async () => { + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: false, + status: 500, + statusText: 'Internal Server Error', + json: async () => ({ message: 'Server error' }), + }); + + const result = await service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 10, + 'Test', + {}, + undefined, + mockUserToken + ); + + expect(result).toEqual({ + success: false, + creditsConsumed: 0, + creditType: 'user', + message: 'Credit consumption failed', + error: 'Credit consumption failed: Server error', + }); + }); + + it('should handle network errors', async () => { + (global.fetch as jest.Mock).mockRejectedValueOnce(new Error('Network error')); + + const result = await service.consumeCreditsForOperation( + mockUserId, + 'transcription', + 10, + 'Test', + {}, + undefined, + mockUserToken + ); + + expect(result).toEqual({ + success: false, + creditsConsumed: 0, + creditType: 'user', + message: 'Credit consumption failed', + error: 'Network error', + }); + }); + }); + + describe('convenience methods', () => { + beforeEach(() => { + jest.spyOn(service, 'consumeCreditsForOperation').mockResolvedValue({ + success: true, + creditsConsumed: 10, + creditType: 'user', + message: 'Success', + }); + }); + + it('should consume transcription credits', async () => { + await service.consumeTranscriptionCredits( + mockUserId, + 5, + 10, + 'memo-123', + 'fast', + mockSpaceId, + mockUserToken + ); + + expect(service.consumeCreditsForOperation).toHaveBeenCalledWith( + mockUserId, + 'transcription', + 10, + 'Transcription completed via fast route for memo memo-123', + { + memoId: 'memo-123', + route: 'fast', + durationMinutes: 5, + actualCost: 10, + }, + mockSpaceId, + mockUserToken + ); + }); + + it('should consume question credits', async () => { + const questionText = 'What is the main topic discussed?'; + + await service.consumeQuestionCredits( + mockUserId, + 
'memo-123', + questionText, + mockSpaceId, + mockUserToken + ); + + expect(service.consumeCreditsForOperation).toHaveBeenCalledWith( + mockUserId, + 'question', + 5, + 'Question asked on memo memo-123', + { + memoId: 'memo-123', + questionLength: questionText.length, + questionPreview: questionText, + }, + mockSpaceId, + mockUserToken + ); + }); + + it('should consume combination credits', async () => { + const memoIds = ['memo-1', 'memo-2', 'memo-3']; + + await service.consumeCombinationCredits(mockUserId, memoIds, mockSpaceId, mockUserToken); + + expect(service.consumeCreditsForOperation).toHaveBeenCalledWith( + mockUserId, + 'combination', + 15, // 5 credits per memo + 'Combined 3 memos', + { + memoCount: 3, + memoIds, + }, + mockSpaceId, + mockUserToken + ); + }); + + it('should consume blueprint credits', async () => { + await service.consumeBlueprintCredits( + mockUserId, + 'blueprint-123', + 'memo-123', + mockSpaceId, + mockUserToken + ); + + expect(service.consumeCreditsForOperation).toHaveBeenCalledWith( + mockUserId, + 'blueprint', + 5, + 'Blueprint blueprint-123 applied to memo memo-123', + { + blueprintId: 'blueprint-123', + memoId: 'memo-123', + }, + mockSpaceId, + mockUserToken + ); + }); + + it('should consume headline credits', async () => { + await service.consumeHeadlineCredits(mockUserId, 'memo-123', mockSpaceId, mockUserToken); + + expect(service.consumeCreditsForOperation).toHaveBeenCalledWith( + mockUserId, + 'headline', + 10, + 'Headline generation for memo memo-123', + { + memoId: 'memo-123', + }, + mockSpaceId, + mockUserToken + ); + }); + }); + + describe('validateCreditsForOperation', () => { + beforeEach(() => { + (jwt.sign as jest.Mock).mockReturnValue(mockServiceToken); + }); + + it('should validate credits successfully', async () => { + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: true, + json: async () => ({ + valid: true, + availableCredits: 100, + }), + }); + + const result = await service.validateCreditsForOperation( 
+ mockUserId, + 'transcription', + 10, + mockSpaceId + ); + + expect(result).toEqual({ + hasEnoughCredits: true, + availableCredits: 100, + requiredCredits: 10, + }); + }); + + it('should handle validation failure', async () => { + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: false, + json: async () => ({ message: 'Insufficient credits' }), + }); + + const result = await service.validateCreditsForOperation(mockUserId, 'transcription', 100); + + expect(result).toEqual({ + hasEnoughCredits: false, + availableCredits: 0, + requiredCredits: 100, + }); + }); + }); + + describe('getCurrentCredits', () => { + beforeEach(() => { + (jwt.sign as jest.Mock).mockReturnValue(mockServiceToken); + }); + + it('should get current credits for user', async () => { + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: true, + json: async () => ({ credits: 100 }), + }); + + const result = await service.getCurrentCredits(mockUserId); + + expect(result).toEqual({ + userCredits: 100, + spaceCredits: undefined, + }); + }); + + it('should get both user and space credits', async () => { + (global.fetch as jest.Mock) + .mockResolvedValueOnce({ + ok: true, + json: async () => ({ credits: 100 }), + }) + .mockResolvedValueOnce({ + ok: true, + json: async () => ({ creditSummary: { current_balance: 200 } }), + }); + + const result = await service.getCurrentCredits(mockUserId, mockSpaceId); + + expect(result).toEqual({ + userCredits: 100, + spaceCredits: 200, + }); + }); + + it('should handle errors gracefully', async () => { + (global.fetch as jest.Mock).mockRejectedValue(new Error('Network error')); + + const result = await service.getCurrentCredits(mockUserId, mockSpaceId); + + expect(result).toEqual({ + userCredits: 0, + spaceCredits: undefined, + }); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/credits/credit-consumption.service.ts b/apps/memoro/apps/backend/src/credits/credit-consumption.service.ts new file mode 100644 index 000000000..bc02b5dc0 --- /dev/null 
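The insufficient-credits tests above depend on the service parsing the backend's error message to recover the available balance. As a standalone illustration, that parsing can be sketched as follows — the `extractAvailableCredits` helper name is hypothetical (the service inlines the same `Available:\s*(\d+)` regex rather than factoring it out):

```typescript
// Hypothetical helper mirroring the inline regex in consumeCreditsForOperation:
// pull the available balance out of an "insufficient credits" error message,
// defaulting to 0 when the backend response carries no number.
function extractAvailableCredits(errorMessage: string): number {
  const availableMatch = errorMessage.match(/Available:\s*(\d+)/);
  return availableMatch ? parseInt(availableMatch[1], 10) : 0;
}

// Example inputs: the first matches the message format produced by
// InsufficientCreditsException; the second has no parsable balance.
console.log(extractAvailableCredits('Insufficient user credits. Required: 100, Available: 50'));
console.log(extractAvailableCredits('insufficient credits'));
```

Defaulting to 0 matches the service's behavior: when the upstream error carries no balance, the thrown exception reports `availableCredits: 0`, which is exactly what the test asserts.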
+++ b/apps/memoro/apps/backend/src/credits/credit-consumption.service.ts
@@ -0,0 +1,452 @@
+import { Injectable, Logger, BadRequestException, ForbiddenException } from '@nestjs/common';
+import { ConfigService } from '@nestjs/config';
+import { InsufficientCreditsException } from '../errors/insufficient-credits.error';
+
+export interface CreditConsumptionResult {
+  success: boolean;
+  creditsConsumed: number;
+  creditType: 'user' | 'space';
+  remainingCredits?: number;
+  message: string;
+  error?: string;
+}
+
+export interface CreditOperationMetadata {
+  memoId?: string;
+  route?: string;
+  durationMinutes?: number;
+  actualCost?: number;
+  operationId?: string;
+  [key: string]: any;
+}
+
+export type CreditOperation =
+  | 'transcription'
+  | 'question'
+  | 'combination'
+  | 'blueprint'
+  | 'headline'
+  | 'memory_creation'
+  | 'memo_sharing'
+  | 'space_operation'
+  | 'meeting_recording';
+
+@Injectable()
+export class CreditConsumptionService {
+  private readonly logger = new Logger(CreditConsumptionService.name);
+  private readonly manaServiceUrl: string;
+  private readonly manaServiceKey: string;
+  private readonly appId: string;
+
+  constructor(private configService: ConfigService) {
+    this.manaServiceUrl =
+      this.configService.get('MANA_SERVICE_URL') ||
+      'https://mana-core-middleware-111768794939.europe-west3.run.app';
+    this.manaServiceUrl = this.manaServiceUrl.replace(/\/$/, '');
+    this.manaServiceKey = this.configService.get('MANA_SUPABASE_SECRET_KEY');
+    this.appId = this.configService.get('MEMORO_APP_ID');
+
+    if (!this.appId) {
+      throw new Error('MEMORO_APP_ID environment variable is required');
+    }
+  }
+
+  /**
+   * Centralized credit consumption for all operations
+   * Uses the existing user JWT token to work with RLS
+   */
+  async consumeCreditsForOperation(
+    userId: string,
+    operation: CreditOperation,
+    amount: number,
+    description: string,
+    metadata: CreditOperationMetadata = {},
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    try {
+      this.logger.log(
+        `[consumeCreditsForOperation] ${operation}: ${amount} credits for user ${userId}${spaceId ? ` in space ${spaceId}` : ''}`
+      );
+
+      // Input validation
+      if (!userId) {
+        throw new BadRequestException('User ID is required');
+      }
+      if (amount <= 0) {
+        throw new BadRequestException('Credit amount must be positive');
+      }
+      // Determine if we're using service auth or user auth
+      const isServiceAuth = !userToken;
+
+      // Prepare request body for mana-core-middleware
+      const consumeBody = {
+        userId,
+        appId: this.appId,
+        amount,
+        operation,
+        description,
+        metadata: {
+          ...metadata,
+          service: 'memoro-service',
+          timestamp: new Date().toISOString(),
+        },
+        spaceId,
+      };
+
+      let response;
+
+      if (isServiceAuth) {
+        // Use service authentication endpoint
+        this.logger.log(`[consumeCreditsForOperation] Using service auth for user ${userId}`);
+
+        if (!this.manaServiceKey) {
+          throw new Error('MANA_SUPABASE_SECRET_KEY not configured');
+        }
+
+        // Use service endpoint with different body structure
+        const serviceBody = {
+          userId,
+          appId: this.appId,
+          amount,
+          operationType: operation,
+          description,
+          operationDetails: metadata,
+          spaceId,
+        };
+
+        response = await fetch(`${this.manaServiceUrl}/credits/service/consume`, {
+          method: 'POST',
+          headers: {
+            'Content-Type': 'application/json',
+            Authorization: `Bearer ${this.manaServiceKey}`,
+            'X-Service-Auth': 'memoro-service',
+          },
+          body: JSON.stringify(serviceBody),
+        });
+      } else {
+        // Use regular user token auth
+        this.logger.log(
+          `[consumeCreditsForOperation] Using user token: ${userToken.substring(0, 50)}...`
+        );
+
+        // Try to decode token payload for debugging (without verification)
+        try {
+          const parts = userToken.split('.');
+          if (parts.length === 3) {
+            const payload = parts[1];
+            const paddedPayload = payload + '='.repeat((4 - (payload.length % 4)) % 4);
+            const decodedPayload = Buffer.from(paddedPayload, 'base64').toString();
+            const tokenData = JSON.parse(decodedPayload);
+            this.logger.log(
+              `[consumeCreditsForOperation] Token payload:`,
+              JSON.stringify(tokenData, null, 2)
+            );
+            this.logger.log(
+              `[consumeCreditsForOperation] Token has app_id: ${tokenData.app_id}, sub: ${tokenData.sub}, aud: ${tokenData.aud}`
+            );
+          }
+        } catch (decodeError) {
+          this.logger.warn(
+            `[consumeCreditsForOperation] Could not decode token for debugging:`,
+            decodeError.message
+          );
+        }
+
+        response = await fetch(`${this.manaServiceUrl}/credits/consume`, {
+          method: 'POST',
+          headers: {
+            'Content-Type': 'application/json',
+            Authorization: `Bearer ${userToken}`,
+            'X-Service-Auth': 'memoro-service',
+          },
+          body: JSON.stringify(consumeBody),
+        });
+      }
+
+      if (!response.ok) {
+        const errorData = await response.json().catch(() => ({}));
+        const errorMessage = errorData.message || `HTTP ${response.status}: ${response.statusText}`;
+
+        this.logger.error(
+          `[consumeCreditsForOperation] Credit consumption failed: ${response.status} - ${errorMessage}`
+        );
+
+        if (response.status === 400 && errorMessage.toLowerCase().includes('insufficient')) {
+          // Try to extract available credits from error message if possible
+          const availableMatch = errorMessage.match(/Available:\s*(\d+)/);
+          const availableCredits = availableMatch ? parseInt(availableMatch[1]) : 0;
+
+          throw new InsufficientCreditsException({
+            requiredCredits: amount,
+            availableCredits,
+            creditType: spaceId ? 'space' : 'user',
+            operation,
+            spaceId,
+          });
+        }
+
+        throw new Error(`Credit consumption failed: ${errorMessage}`);
+      }
+
+      const result = await response.json();
+
+      this.logger.log(
+        `[consumeCreditsForOperation] Successfully consumed ${amount} credits for ${operation}`
+      );
+
+      // Note: Frontend will refresh credits periodically or after operations
+
+      return {
+        success: true,
+        creditsConsumed: amount,
+        creditType: result.creditType || (spaceId ? 'space' : 'user'),
+        remainingCredits: result.remainingCredits,
+        message: result.message || 'Credits consumed successfully',
+      };
+    } catch (error) {
+      this.logger.error(
+        `[consumeCreditsForOperation] Error consuming credits for ${operation}:`,
+        error
+      );
+
+      if (
+        error instanceof BadRequestException ||
+        error instanceof ForbiddenException ||
+        error instanceof InsufficientCreditsException
+      ) {
+        throw error;
+      }
+
+      return {
+        success: false,
+        creditsConsumed: 0,
+        creditType: spaceId ? 'space' : 'user',
+        message: 'Credit consumption failed',
+        error: error.message,
+      };
+    }
+  }
+
+  /**
+   * Convenience methods for specific operations
+   */
+  async consumeTranscriptionCredits(
+    userId: string,
+    durationMinutes: number,
+    actualCost: number,
+    memoId: string,
+    route: 'fast' | 'batch',
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    return this.consumeCreditsForOperation(
+      userId,
+      'transcription',
+      actualCost,
+      `Transcription completed via ${route} route for memo ${memoId}`,
+      {
+        memoId,
+        route,
+        durationMinutes,
+        actualCost,
+      },
+      spaceId,
+      userToken
+    );
+  }
+
+  async consumeQuestionCredits(
+    userId: string,
+    memoId: string,
+    questionText: string,
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    const questionCost = 5; // Standard question cost
+    return this.consumeCreditsForOperation(
+      userId,
+      'question',
+      questionCost,
+      `Question asked on memo ${memoId}`,
+      {
+        memoId,
+        questionLength: questionText.length,
+        questionPreview: questionText.substring(0, 100),
+      },
+      spaceId,
+      userToken
+    );
+  }
+
+  async consumeCombinationCredits(
+    userId: string,
+    memoIds: string[],
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    const combinationCost = memoIds.length * 5; // 5 credits per memo
+    return this.consumeCreditsForOperation(
+      userId,
+      'combination',
+      combinationCost,
+      `Combined ${memoIds.length} memos`,
+      {
+        memoCount: memoIds.length,
+        memoIds,
+      },
+      spaceId,
+      userToken
+    );
+  }
+
+  async consumeBlueprintCredits(
+    userId: string,
+    blueprintId: string,
+    memoId: string,
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    const blueprintCost = 5; // Standard blueprint cost
+    return this.consumeCreditsForOperation(
+      userId,
+      'blueprint',
+      blueprintCost,
+      `Blueprint ${blueprintId} applied to memo ${memoId}`,
+      {
+        blueprintId,
+        memoId,
+      },
+      spaceId,
+      userToken
+    );
+  }
+
+  async consumeHeadlineCredits(
+    userId: string,
+    memoId: string,
+    spaceId?: string,
+    userToken?: string
+  ): Promise<CreditConsumptionResult> {
+    const headlineCost = 10; // Standard headline cost
+    return this.consumeCreditsForOperation(
+      userId,
+      'headline',
+      headlineCost,
+      `Headline generation for memo ${memoId}`,
+      {
+        memoId,
+      },
+      spaceId,
+      userToken
+    );
+  }
+
+  /**
+   * Validate credits before operation (pre-flight check)
+   */
+  async validateCreditsForOperation(
+    userId: string,
+    operation: CreditOperation,
+    amount: number,
+    spaceId?: string
+  ): Promise<{ hasEnoughCredits: boolean; availableCredits: number; requiredCredits: number }> {
+    try {
+      if (!this.manaServiceKey) {
+        throw new Error('MANA_SUPABASE_SECRET_KEY not configured');
+      }
+
+      const response = await fetch(`${this.manaServiceUrl}/credits/service/validate`, {
+        method: 'POST',
+        headers: {
+          'Content-Type': 'application/json',
+          Authorization: `Bearer ${this.manaServiceKey}`,
+          'X-Service-Auth': 'memoro-service',
+        },
+        body: JSON.stringify({
+          userId,
+          amount,
+          spaceId,
+          operation,
+        }),
+      });
+
+      if (!response.ok) {
+        const errorData = await response.json().catch(() => ({}));
+        this.logger.warn(`Credit validation failed: ${errorData.message}`);
+        return {
+          hasEnoughCredits: false,
+          availableCredits: 0,
+          requiredCredits: amount,
+        };
+      }
+
+      const result = await response.json();
+      return {
+        // mana-core returns { hasCredits, balance }
+        hasEnoughCredits: result.hasCredits || result.valid || false,
+        availableCredits: result.balance || result.availableCredits || 0,
+        requiredCredits: amount,
+      };
+    }
catch (error) { + this.logger.error('Error validating credits:', error); + return { + hasEnoughCredits: false, + availableCredits: 0, + requiredCredits: amount, + }; + } + } + + /** + * Get current credit balance for user + */ + async getCurrentCredits( + userId: string, + spaceId?: string + ): Promise<{ userCredits: number; spaceCredits?: number }> { + try { + if (!this.manaServiceKey) { + throw new Error('MANA_SUPABASE_SECRET_KEY not configured'); + } + + // Get user credits + const userResponse = await fetch(`${this.manaServiceUrl}/users/credits`, { + method: 'GET', + headers: { + Authorization: `Bearer ${this.manaServiceKey}`, + 'X-Service-Auth': 'memoro-service', + 'X-User-ID': userId, // Pass user ID in header for service role requests + }, + }); + + let userCredits = 0; + if (userResponse.ok) { + const userData = await userResponse.json(); + userCredits = userData.credits || 0; + } + + let spaceCredits = undefined; + if (spaceId) { + const spaceResponse = await fetch(`${this.manaServiceUrl}/spaces/${spaceId}/credits`, { + method: 'GET', + headers: { + Authorization: `Bearer ${this.manaServiceKey}`, + 'X-Service-Auth': 'memoro-service', + 'X-User-ID': userId, + }, + }); + + if (spaceResponse.ok) { + const spaceData = await spaceResponse.json(); + spaceCredits = spaceData.creditSummary?.current_balance || 0; + } + } + + return { userCredits, spaceCredits }; + } catch (error) { + this.logger.error('Error getting current credits:', error); + return { userCredits: 0, spaceCredits: undefined }; + } + } +} diff --git a/apps/memoro/apps/backend/src/credits/credit.controller.ts b/apps/memoro/apps/backend/src/credits/credit.controller.ts new file mode 100644 index 000000000..bffa1eedd --- /dev/null +++ b/apps/memoro/apps/backend/src/credits/credit.controller.ts @@ -0,0 +1,227 @@ +import { Controller, Post, Body, UseGuards, BadRequestException, Get } from '@nestjs/common'; +import { AuthGuard } from '../guards/auth.guard'; +import { User } from 
'../decorators/user.decorator'; +import { CreditClientService } from './credit-client.service'; +import { + calculateTranscriptionCost, + calculateTranscriptionCostByLength, + OPERATION_COSTS, +} from './pricing.constants'; +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; + +// DTOs for credit operations +class CheckTranscriptionCreditsDto { + durationSeconds?: number; + transcriptLength?: number; + spaceId?: string; +} + +class ConsumeTranscriptionCreditsDto { + durationSeconds?: number; + transcriptLength?: number; + spaceId?: string; + description?: string; +} + +class ConsumeOperationCreditsDto { + operation: + | 'HEADLINE_GENERATION' + | 'MEMORY_CREATION' + | 'BLUEPRINT_PROCESSING' + | 'QUESTION_MEMO' + | 'NEW_MEMORY' + | 'MEMO_COMBINE'; + spaceId?: string; + description?: string; + memoId?: string; + memoCount?: number; // For MEMO_COMBINE operation +} + +@Controller('memoro/credits') +export class CreditController { + constructor(private readonly creditClientService: CreditClientService) {} + + @Get('pricing') + async getPricing() { + return { + operationCosts: OPERATION_COSTS, + transcriptionPerHour: OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE * 60, + lastUpdated: new Date().toISOString(), + }; + } + + @Post('check-transcription') + @UseGuards(AuthGuard) + async checkTranscriptionCredits(@User() user: any, @Body() dto: CheckTranscriptionCreditsDto) { + if (!dto.durationSeconds && !dto.transcriptLength) { + throw new BadRequestException('Either durationSeconds or transcriptLength must be provided'); + } + + // Extract token from request + const token = user.token; + + // Calculate required credits using new length-based or duration-based pricing + const requiredCredits = calculateTranscriptionCostByLength( + dto.transcriptLength, + dto.durationSeconds + ); + + try { + // If spaceId is provided, check space credits first + if (dto.spaceId) { + try { + const spaceCheck = await this.creditClientService.checkSpaceCredits( + 
dto.spaceId, + requiredCredits, + token + ); + + return { + hasEnoughCredits: spaceCheck.hasEnoughCredits, + requiredCredits, + currentCredits: spaceCheck.currentCredits, + creditType: 'space', + }; + } catch (error) { + console.warn('Space credit check failed, falling back to user credits:', error.message); + } + } + + // Check user credits + const userCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + + return { + hasEnoughCredits: userCheck.hasEnoughCredits, + requiredCredits, + currentCredits: userCheck.currentCredits, + creditType: 'user', + }; + } catch (error) { + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + throw new BadRequestException(`Failed to check credits: ${error.message}`); + } + } + + @Post('consume-transcription') + @UseGuards(AuthGuard) + async consumeTranscriptionCredits( + @User() user: any, + @Body() dto: ConsumeTranscriptionCreditsDto + ) { + if (!dto.durationSeconds && !dto.transcriptLength) { + throw new BadRequestException('Either durationSeconds or transcriptLength must be provided'); + } + + // Extract token from request + const token = user.token; + + // Calculate required credits using new length-based or duration-based pricing + const requiredCredits = calculateTranscriptionCostByLength( + dto.transcriptLength, + dto.durationSeconds + ); + + const description = + dto.description || + (dto.transcriptLength + ? 
`Transcription (${dto.transcriptLength} chars)` + : `Transcription (${dto.durationSeconds}s)`); + + try { + const result = await this.creditClientService.checkAndConsumeCredits( + user.sub, + requiredCredits, + token, + { + spaceId: dto.spaceId, + description, + operation: 'TRANSCRIPTION', + } + ); + + return { + success: true, + creditsConsumed: requiredCredits, + creditType: result.creditType, + message: result.message, + }; + } catch (error) { + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + throw new BadRequestException(`Failed to consume credits: ${error.message}`); + } + } + + @Post('consume-operation') + @UseGuards(AuthGuard) + async consumeOperationCredits(@User() user: any, @Body() dto: ConsumeOperationCreditsDto) { + // Validate operation type + const validOperations = [ + 'HEADLINE_GENERATION', + 'MEMORY_CREATION', + 'BLUEPRINT_PROCESSING', + 'QUESTION_MEMO', + 'NEW_MEMORY', + 'MEMO_COMBINE', + ]; + if (!validOperations.includes(dto.operation)) { + throw new BadRequestException( + `Invalid operation type. 
Must be one of: ${validOperations.join(', ')}` + ); + } + + // Extract token from request + const token = user.token; + + // Define credit costs for different operations + const creditCosts = { + HEADLINE_GENERATION: 10, + MEMORY_CREATION: 10, + BLUEPRINT_PROCESSING: 5, + QUESTION_MEMO: 5, + NEW_MEMORY: 5, + MEMO_COMBINE: 5, + }; + + // Calculate required credits based on operation + let requiredCredits = creditCosts[dto.operation]; + + // For MEMO_COMBINE, multiply by the number of memos + if (dto.operation === 'MEMO_COMBINE' && dto.memoCount) { + requiredCredits = requiredCredits * dto.memoCount; + } + const description = dto.description || `${dto.operation} operation`; + + try { + const result = await this.creditClientService.checkAndConsumeCredits( + user.sub, + requiredCredits, + token, + { + spaceId: dto.spaceId, + description, + operation: dto.operation, + } + ); + + return { + success: true, + creditsConsumed: requiredCredits, + creditType: result.creditType, + message: result.message, + }; + } catch (error) { + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + throw new BadRequestException(`Failed to consume credits: ${error.message}`); + } + } +} diff --git a/apps/memoro/apps/backend/src/credits/credits.module.ts b/apps/memoro/apps/backend/src/credits/credits.module.ts new file mode 100644 index 000000000..48b210466 --- /dev/null +++ b/apps/memoro/apps/backend/src/credits/credits.module.ts @@ -0,0 +1,14 @@ +import { Module } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { AuthModule } from '../auth/auth.module'; +import { CreditClientService } from './credit-client.service'; +import { CreditController } from './credit.controller'; +import { CreditConsumptionService } from './credit-consumption.service'; + +@Module({ + imports: [ConfigModule, AuthModule], + controllers: [CreditController], + providers: [CreditClientService, CreditConsumptionService], + 
exports: [CreditClientService, CreditConsumptionService], +}) +export class CreditsModule {} diff --git a/apps/memoro/apps/backend/src/credits/pricing.constants.ts b/apps/memoro/apps/backend/src/credits/pricing.constants.ts new file mode 100644 index 000000000..afe2d30c6 --- /dev/null +++ b/apps/memoro/apps/backend/src/credits/pricing.constants.ts @@ -0,0 +1,99 @@ +/** + * Pricing constants for various operations in the memoro service + * These should match the costs defined in the app's appCosts.json + */ + +export const OPERATION_COSTS = { + // Transcription costs + TRANSCRIPTION_PER_MINUTE: 2, // 2 credits per minute of audio + + // Meeting recording costs + MEETING_RECORDING_PER_MINUTE: 2, // 2 credits per minute of recording (same as transcription) + + // Memory/headline generation + HEADLINE_GENERATION: 10, + MEMORY_CREATION: 10, + + // Blueprint operations + BLUEPRINT_PROCESSING: 5, + + // Question/Memory processing + QUESTION_MEMO: 5, // 5 mana per question to memo + NEW_MEMORY: 5, // 5 mana per new memory creation + MEMO_COMBINE: 5, // 5 mana per memo when combining + + // Other operations + MEMO_SHARING: 1, + SPACE_OPERATION: 2, +} as const; + +/** + * Calculate transcription cost based on audio duration + * @param durationSeconds - Duration of audio in seconds + * @returns Number of credits required (2 credits per minute, minimum 2 credits) + */ +export function calculateTranscriptionCost(durationSeconds: number): number { + // Log the input for debugging + console.log( + `[calculateTranscriptionCost] Input duration: ${durationSeconds} seconds (${(durationSeconds / 60).toFixed(2)} minutes)` + ); + + const minutes = durationSeconds / 60; // Convert seconds to minutes + const cost = Math.ceil(minutes * OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE); + + // Apply minimum cost of 2 credits (1 minute worth) to prevent undercharging + const finalCost = Math.max(cost, 2); + + console.log( + `[calculateTranscriptionCost] Calculated cost: ${cost}, Final cost (with 
minimum): ${finalCost} credits` + ); + + return finalCost; +} + +/** + * Calculate memo combination cost based on number of memos + * @param memoCount - Number of memos being combined + * @returns Number of credits required + */ +export function calculateMemoCombineCost(memoCount: number): number { + return memoCount * OPERATION_COSTS.MEMO_COMBINE; +} + +/** + * Calculate transcription cost with length-based pricing + * Uses existing per-minute pricing but ensures proper length-based calculation + * @param transcriptLength - Length of transcript in characters + * @param durationSeconds - Duration of audio in seconds (fallback if no transcript length) + * @returns Number of credits required + */ +export function calculateTranscriptionCostByLength( + transcriptLength?: number, + durationSeconds?: number +): number { + // If we have transcript length, use it to estimate duration + if (transcriptLength) { + // Estimate: ~150 words per minute, ~5 characters per word + const estimatedWords = transcriptLength / 5; + const estimatedMinutes = estimatedWords / 150; + const estimatedSeconds = estimatedMinutes * 60; + return calculateTranscriptionCost(estimatedSeconds); + } + + // Fall back to duration-based calculation + if (durationSeconds) { + return calculateTranscriptionCost(durationSeconds); + } + + // Throw error if no length or duration provided + throw new Error('Cannot calculate transcription cost: no transcript length or duration provided'); +} + +/** + * Get operation cost by operation type + * @param operation - The operation type + * @returns Number of credits required + */ +export function getOperationCost(operation: keyof typeof OPERATION_COSTS): number { + return OPERATION_COSTS[operation]; +} diff --git a/apps/memoro/apps/backend/src/debug-test.ts b/apps/memoro/apps/backend/src/debug-test.ts new file mode 100644 index 000000000..d35f89991 --- /dev/null +++ b/apps/memoro/apps/backend/src/debug-test.ts @@ -0,0 +1,51 @@ +// Debug test file to verify logging in 
Cloud Run +import { NestFactory } from '@nestjs/core'; +import { AppModule } from './app.module'; + +async function debugTest() { + // Force all debug logs to use console.error for visibility + console.error('[DEBUG TEST 1] Starting debug test - console.error'); + console.log('[DEBUG TEST 2] Starting debug test - console.log'); + console.warn('[DEBUG TEST 3] Starting debug test - console.warn'); + + // Log process info + console.error('[DEBUG TEST] Process info:', { + nodeVersion: process.version, + platform: process.platform, + pid: process.pid, + cwd: process.cwd(), + execPath: process.execPath, + }); + + // Log all environment variables (be careful with sensitive data) + console.error('[DEBUG TEST] Environment variables count:', Object.keys(process.env).length); + console.error('[DEBUG TEST] NODE_ENV:', process.env.NODE_ENV); + console.error('[DEBUG TEST] PORT:', process.env.PORT); + console.error('[DEBUG TEST] AUDIO_MICROSERVICE_URL:', process.env.AUDIO_MICROSERVICE_URL); + + // Check if dist files exist + const fs = require('fs'); + const path = require('path'); + const mainPath = path.join(__dirname, 'main.js'); + console.error('[DEBUG TEST] Current file location:', __filename); + console.error('[DEBUG TEST] Main.js exists:', fs.existsSync(mainPath)); + + // Create the app to test NestJS logging + try { + const app = await NestFactory.create(AppModule, { + logger: ['error', 'warn', 'log', 'debug', 'verbose'], + }); + + console.error('[DEBUG TEST] NestJS app created successfully'); + + // Don't actually start the server, just test creation + await app.close(); + console.error('[DEBUG TEST] Test completed successfully'); + } catch (error) { + console.error('[DEBUG TEST] Error creating app:', error); + } + + process.exit(0); +} + +debugTest(); diff --git a/apps/memoro/apps/backend/src/decorators/user.decorator.ts b/apps/memoro/apps/backend/src/decorators/user.decorator.ts new file mode 100644 index 000000000..7146d5b6e --- /dev/null +++ 
b/apps/memoro/apps/backend/src/decorators/user.decorator.ts @@ -0,0 +1,12 @@ +import { createParamDecorator, ExecutionContext } from '@nestjs/common'; +import { JwtPayload } from '../types/jwt-payload.interface'; + +export const User = createParamDecorator( + (data: unknown, ctx: ExecutionContext): JwtPayload & { token: string } => { + const request = ctx.switchToHttp().getRequest(); + return { + ...request.user, + token: request.token, + }; + } +); diff --git a/apps/memoro/apps/backend/src/errors/README.md b/apps/memoro/apps/backend/src/errors/README.md new file mode 100644 index 000000000..84e2d3be4 --- /dev/null +++ b/apps/memoro/apps/backend/src/errors/README.md @@ -0,0 +1,66 @@ +# Standardized Error Handling + +This directory contains standardized error handling utilities for the memoro-service. + +## InsufficientCreditsException + +A custom exception class for handling insufficient credit scenarios with consistent error responses. + +### Features + +- **HTTP Status Code**: 402 Payment Required +- **Standardized Error Format**: Includes required credits, available credits, credit type, and operation details +- **Type Safety**: Strongly typed error data structure +- **Consistent Responses**: All insufficient credit errors follow the same format + +### Usage + +```typescript +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; + +// Throw when insufficient credits detected +throw new InsufficientCreditsException({ + requiredCredits: 100, + availableCredits: 50, + creditType: 'user', // or 'space' + operation: 'transcription', + spaceId: 'space-uuid' // optional +}); +``` + +### Error Response Format + +```json +{ + "statusCode": 402, + "error": "InsufficientCredits", + "message": "Insufficient user credits. 
Required: 100, Available: 50", + "details": { + "requiredCredits": 100, + "availableCredits": 50, + "creditType": "user", + "operation": "transcription", + "spaceId": null + } +} +``` + +### Helper Functions + +- `createInsufficientCreditsError()`: Factory function to create the exception +- `isInsufficientCreditsError()`: Type guard to check if an error is an insufficient credits error +- `extractCreditInfoFromError()`: Extract credit information from various error types + +## Global Exception Filter + +The `HttpExceptionFilter` in `/filters/http-exception.filter.ts` ensures all exceptions are properly formatted and InsufficientCreditsException returns the correct 402 status code. + +## Migration Notes + +All credit-consuming endpoints have been updated to use this standardized error handling: +- Transcription endpoints +- Question memo processing +- Memo combination +- All credit consumption operations + +Legacy `ForbiddenException` and `BadRequestException` for insufficient credits have been replaced with `InsufficientCreditsException`. \ No newline at end of file diff --git a/apps/memoro/apps/backend/src/errors/insufficient-credits.error.ts b/apps/memoro/apps/backend/src/errors/insufficient-credits.error.ts new file mode 100644 index 000000000..88eeebd6a --- /dev/null +++ b/apps/memoro/apps/backend/src/errors/insufficient-credits.error.ts @@ -0,0 +1,90 @@ +import { HttpException, HttpStatus } from '@nestjs/common'; + +export interface InsufficientCreditsErrorData { + requiredCredits: number; + availableCredits: number; + creditType: 'user' | 'space'; + operation?: string; + spaceId?: string; +} + +/** + * Custom exception for insufficient credits scenarios + * Uses HTTP 402 Payment Required status code + */ +export class InsufficientCreditsException extends HttpException { + constructor(data: InsufficientCreditsErrorData) { + const message = `Insufficient ${data.creditType} credits. 
Required: ${data.requiredCredits}, Available: ${data.availableCredits}`; + + const response = { + statusCode: HttpStatus.PAYMENT_REQUIRED, + error: 'InsufficientCredits', + message, + details: { + requiredCredits: data.requiredCredits, + availableCredits: data.availableCredits, + creditType: data.creditType, + operation: data.operation, + spaceId: data.spaceId, + }, + }; + + super(response, HttpStatus.PAYMENT_REQUIRED); + } +} + +/** + * Helper function to create standardized insufficient credits error + */ +export function createInsufficientCreditsError( + requiredCredits: number, + availableCredits: number, + creditType: 'user' | 'space' = 'user', + operation?: string, + spaceId?: string +): InsufficientCreditsException { + return new InsufficientCreditsException({ + requiredCredits, + availableCredits, + creditType, + operation, + spaceId, + }); +} + +/** + * Type guard to check if an error is an insufficient credits error + */ +export function isInsufficientCreditsError(error: any): error is InsufficientCreditsException { + return ( + error instanceof InsufficientCreditsException || + (error instanceof HttpException && error.getStatus() === HttpStatus.PAYMENT_REQUIRED) || + error?.message?.toLowerCase().includes('insufficient credits') + ); +} + +/** + * Extract credit information from various error types + */ +export function extractCreditInfoFromError(error: any): { + requiredCredits?: number; + availableCredits?: number; + creditType?: 'user' | 'space'; +} | null { + if (error instanceof InsufficientCreditsException) { + const response = error.getResponse() as any; + return response.details || null; + } + + // Try to parse from error message + const messageMatch = error?.message?.match(/Required:\s*(\d+),\s*Available:\s*(\d+)/); + if (messageMatch) { + return { + requiredCredits: parseInt(messageMatch[1]), + availableCredits: parseInt(messageMatch[2]), + creditType: error.message.includes('space') ? 
'space' : 'user', + }; + } + + return null; +} diff --git a/apps/memoro/apps/backend/src/filters/http-exception.filter.ts b/apps/memoro/apps/backend/src/filters/http-exception.filter.ts new file mode 100644 index 000000000..a96f03857 --- /dev/null +++ b/apps/memoro/apps/backend/src/filters/http-exception.filter.ts @@ -0,0 +1,37 @@ +import { + ExceptionFilter, + Catch, + ArgumentsHost, + HttpException, + HttpStatus, + Logger, +} from '@nestjs/common'; +import { Response } from 'express'; +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; + +/** + * Global exception filter to handle HTTP exceptions + * Ensures proper error responses, especially for InsufficientCreditsException + */ +@Catch(HttpException) +export class HttpExceptionFilter implements ExceptionFilter { + private readonly logger = new Logger(HttpExceptionFilter.name); + + catch(exception: HttpException, host: ArgumentsHost) { + const ctx = host.switchToHttp(); + const response = ctx.getResponse(); + const status = exception.getStatus(); + const exceptionResponse = exception.getResponse(); + + // Log the error for debugging + this.logger.error(`HTTP ${status} Error: ${exception.message}`, exception.stack); + + // Ensure InsufficientCreditsException returns 402 status + if (exception instanceof InsufficientCreditsException) { + return response.status(HttpStatus.PAYMENT_REQUIRED).json(exceptionResponse); + } + + // For other exceptions, return the standard response + response.status(status).json(exceptionResponse); + } +} diff --git a/apps/memoro/apps/backend/src/guards/auth.guard.spec.ts b/apps/memoro/apps/backend/src/guards/auth.guard.spec.ts new file mode 100644 index 000000000..39149b09d --- /dev/null +++ b/apps/memoro/apps/backend/src/guards/auth.guard.spec.ts @@ -0,0 +1,230 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { ExecutionContext, UnauthorizedException } from '@nestjs/common'; +import { AuthGuard } from './auth.guard'; +import { 
AuthClientService } from '../auth/auth-client.service'; +import { JwtPayload } from '../types/jwt-payload.interface'; + +describe('AuthGuard', () => { + let guard: AuthGuard; + let authClientService: jest.Mocked<AuthClientService>; + + const mockJwtPayload: JwtPayload = { + sub: 'user-123', + email: 'test@example.com', + role: 'authenticated', + app_id: 'test-app', + aud: 'authenticated', + iat: Math.floor(Date.now() / 1000), + exp: Math.floor(Date.now() / 1000) + 3600, + }; + + const mockToken = 'mock-jwt-token'; + + const createMockExecutionContext = (headers: Record<string, string> = {}) => { + const request = { + headers, + user: undefined, + token: undefined, + }; + + return { + switchToHttp: () => ({ + getRequest: () => request, + }), + getRequest: () => request, // Helper method to get request in tests + } as any; + }; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + providers: [ + AuthGuard, + { + provide: AuthClientService, + useValue: { + validateToken: jest.fn(), + }, + }, + ], + }).compile(); + + guard = module.get(AuthGuard); + authClientService = module.get(AuthClientService); + }); + + it('should be defined', () => { + expect(guard).toBeDefined(); + }); + + describe('canActivate', () => { + it('should return true and attach user/token to request when token is valid', async () => { + const mockContext = createMockExecutionContext({ + authorization: `Bearer ${mockToken}`, + }); + + authClientService.validateToken.mockResolvedValue(mockJwtPayload); + + const result = await guard.canActivate(mockContext); + const request = mockContext.getRequest(); + + expect(result).toBe(true); + expect(authClientService.validateToken).toHaveBeenCalledWith(mockToken); + expect(request.user).toEqual(mockJwtPayload); + expect(request.token).toBe(mockToken); + }); + + it('should throw UnauthorizedException when no authorization header is provided', async () => { + const mockContext = createMockExecutionContext({}); + + await
expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow( + 'No authorization header provided' + ); + }); + + it('should throw UnauthorizedException when authorization header is empty', async () => { + const mockContext = createMockExecutionContext({ + authorization: '', + }); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + }); + + it('should throw UnauthorizedException when token type is not Bearer', async () => { + const mockContext = createMockExecutionContext({ + authorization: `Basic ${mockToken}`, + }); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow('Invalid token type'); + }); + + it('should throw UnauthorizedException when no token is provided after Bearer', async () => { + const mockContext = createMockExecutionContext({ + authorization: 'Bearer', + }); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow('No token provided'); + }); + + it('should throw UnauthorizedException when token is only whitespace', async () => { + const mockContext = createMockExecutionContext({ + authorization: 'Bearer ', + }); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow('No token provided'); + }); + + it('should throw UnauthorizedException when token validation fails', async () => { + const mockContext = createMockExecutionContext({ + authorization: `Bearer ${mockToken}`, + }); + + authClientService.validateToken.mockRejectedValue(new Error('Token validation failed')); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow('Invalid token'); + 
}); + + it('should handle various token validation errors', async () => { + const mockContext = createMockExecutionContext({ + authorization: `Bearer ${mockToken}`, + }); + + const testCases = [ + { error: new Error('Token expired'), message: 'Invalid token' }, + { error: new Error('Invalid signature'), message: 'Invalid token' }, + { error: new UnauthorizedException('Custom auth error'), message: 'Invalid token' }, + ]; + + for (const testCase of testCases) { + authClientService.validateToken.mockRejectedValue(testCase.error); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(testCase.message); + } + }); + + it('should handle malformed authorization headers gracefully', async () => { + const testCases = [ + 'Bearer', + 'Bearer ', + 'Bearer ', + 'BearerToken', + 'Token ' + mockToken, + ' Bearer ' + mockToken, + 'Bearer ' + mockToken + ' extra', + ]; + + for (const authHeader of testCases) { + const mockContext = createMockExecutionContext({ + authorization: authHeader, + }); + + // Do not trim before checking: a leading space makes the header malformed (the type segment parses as ''), and the guard rejects it + if (authHeader.startsWith('Bearer ') && authHeader.split(' ')[1]?.trim()) { + authClientService.validateToken.mockResolvedValue(mockJwtPayload); + const result = await guard.canActivate(mockContext); + expect(result).toBe(true); + } else { + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + } + } + }); + + it('should preserve original request properties when attaching user and token', async () => { + const originalRequest = { + headers: { + authorization: `Bearer ${mockToken}`, + 'content-type': 'application/json', + }, + body: { data: 'test' }, + params: { id: '123' }, + query: { filter: 'active' }, + }; + + const mockContext = { + switchToHttp: () => ({ + getRequest: () => originalRequest, + }), + } as ExecutionContext; + + authClientService.validateToken.mockResolvedValue(mockJwtPayload); + + await guard.canActivate(mockContext); + +
expect(originalRequest.headers).toEqual({ + authorization: `Bearer ${mockToken}`, + 'content-type': 'application/json', + }); + expect(originalRequest.body).toEqual({ data: 'test' }); + expect(originalRequest.params).toEqual({ id: '123' }); + expect(originalRequest.query).toEqual({ filter: 'active' }); + expect((originalRequest as any).user).toEqual(mockJwtPayload); + expect((originalRequest as any).token).toBe(mockToken); + }); + + it('should log error details when token validation fails', async () => { + const consoleSpy = jest.spyOn(console, 'error').mockImplementation(); + const mockContext = createMockExecutionContext({ + authorization: `Bearer ${mockToken}`, + }); + + const validationError = new Error('Token signature invalid'); + authClientService.validateToken.mockRejectedValue(validationError); + + await expect(guard.canActivate(mockContext)).rejects.toThrow(UnauthorizedException); + + expect(consoleSpy).toHaveBeenCalledWith('Auth error:', 'Token signature invalid'); + + consoleSpy.mockRestore(); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/guards/auth.guard.ts b/apps/memoro/apps/backend/src/guards/auth.guard.ts new file mode 100644 index 000000000..e4c9b2070 --- /dev/null +++ b/apps/memoro/apps/backend/src/guards/auth.guard.ts @@ -0,0 +1,47 @@ +import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common'; +import { Observable } from 'rxjs'; +import { AuthClientService } from '../auth/auth-client.service'; +import { JwtPayload } from '../types/jwt-payload.interface'; + +@Injectable() +export class AuthGuard implements CanActivate { + constructor(private authClientService: AuthClientService) {} + + canActivate(context: ExecutionContext): boolean | Promise<boolean> | Observable<boolean> { + const request = context.switchToHttp().getRequest(); + return this.validateRequest(request); + } + + private async validateRequest(request: any): Promise<boolean> { + const authHeader = request.headers.authorization; + + if (!authHeader) { + throw
new UnauthorizedException('No authorization header provided'); + } + + const [type, token] = authHeader.split(' '); + + if (type !== 'Bearer') { + throw new UnauthorizedException('Invalid token type'); + } + + if (!token) { + throw new UnauthorizedException('No token provided'); + } + + try { + // Validate the token with the Auth service + const payload = await this.authClientService.validateToken(token); + + // Attach the user payload to the request for controllers to use + request.user = payload as JwtPayload; + // Also attach the token for potential forwarding to other services + request.token = token; + + return true; + } catch (error) { + console.error('Auth error:', error.message); + throw new UnauthorizedException('Invalid token'); + } + } +} diff --git a/apps/memoro/apps/backend/src/guards/internal-service.guard.ts b/apps/memoro/apps/backend/src/guards/internal-service.guard.ts new file mode 100644 index 000000000..45da28518 --- /dev/null +++ b/apps/memoro/apps/backend/src/guards/internal-service.guard.ts @@ -0,0 +1,35 @@ +import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; + +/** + * Guard for internal service-to-service communication. + * Validates requests using the X-Internal-API-Key header. + * Used for scheduled jobs and internal microservice calls. 
+ */ +@Injectable() +export class InternalServiceGuard implements CanActivate { + constructor(private configService: ConfigService) {} + + async canActivate(context: ExecutionContext): Promise<boolean> { + const request = context.switchToHttp().getRequest(); + const apiKey = request.headers['x-internal-api-key']; + + if (!apiKey) { + throw new UnauthorizedException('Missing X-Internal-API-Key header'); + } + + const internalApiKey = this.configService.get('INTERNAL_API_KEY'); + + if (!internalApiKey) { + throw new UnauthorizedException('Internal API key not configured'); + } + + if (apiKey !== internalApiKey) { + throw new UnauthorizedException('Invalid internal API key'); + } + + // Mark request as internal service call + request.isInternalService = true; + return true; + } +} diff --git a/apps/memoro/apps/backend/src/guards/service-auth.guard.spec.ts b/apps/memoro/apps/backend/src/guards/service-auth.guard.spec.ts new file mode 100644 index 000000000..a8ca101fa --- /dev/null +++ b/apps/memoro/apps/backend/src/guards/service-auth.guard.spec.ts @@ -0,0 +1,225 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { ConfigService } from '@nestjs/config'; +import { ExecutionContext, UnauthorizedException } from '@nestjs/common'; +import { ServiceAuthGuard } from './service-auth.guard'; +import { createClient } from '@supabase/supabase-js'; + +jest.mock('@supabase/supabase-js'); + +describe('ServiceAuthGuard', () => { + let guard: ServiceAuthGuard; + let configService: jest.Mocked<ConfigService>; + + const mockConfigService = { + get: jest.fn(), + }; + + const mockSupabaseClient = { + from: jest.fn().mockReturnThis(), + select: jest.fn().mockReturnThis(), + limit: jest.fn().mockReturnThis(), + }; + + const createMockExecutionContext = (headers: any = {}): ExecutionContext => { + // Share a single request object across getRequest() calls so that properties + // the guard attaches (isServiceAuth, serviceKey) are visible to the assertions + const request = { headers } as any; + return { + switchToHttp: () => ({ + getRequest: () => request, + }), + } as ExecutionContext; + }; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + providers:
[ + ServiceAuthGuard, + { + provide: ConfigService, + useValue: mockConfigService, + }, + ], + }).compile(); + + guard = module.get(ServiceAuthGuard); + configService = module.get(ConfigService); + + (createClient as jest.Mock).mockReturnValue(mockSupabaseClient); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + describe('canActivate', () => { + it('should return true for valid MEMORO_SUPABASE_SERVICE_KEY', async () => { + const serviceKey = 'valid-memoro-service-key'; + mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return serviceKey; + if (key === 'SUPABASE_SERVICE_KEY') return 'other-key'; + if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co'; + return null; + }); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, + }); + + const request = context.switchToHttp().getRequest(); + const result = await guard.canActivate(context); + + expect(result).toBe(true); + expect(request.isServiceAuth).toBe(true); + expect(request.serviceKey).toBe(serviceKey); + }); + + it('should return true for valid SUPABASE_SERVICE_KEY', async () => { + const serviceKey = 'valid-supabase-service-key'; + mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'other-key'; + if (key === 'SUPABASE_SERVICE_KEY') return serviceKey; + if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co'; + return null; + }); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, + }); + + const request = context.switchToHttp().getRequest(); + const result = await guard.canActivate(context); + + expect(result).toBe(true); + expect(request.isServiceAuth).toBe(true); + expect(request.serviceKey).toBe(serviceKey); + }); + + it('should validate token with Supabase when not matching config keys', async () => { + const serviceKey = 'unknown-service-key'; + 
mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key'; + if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key'; + if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co'; + return null; + }); + + mockSupabaseClient.limit.mockResolvedValue({ error: null }); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, + }); + + const request = context.switchToHttp().getRequest(); + const result = await guard.canActivate(context); + + expect(result).toBe(true); + expect(createClient).toHaveBeenCalledWith( + 'https://example.supabase.co', + serviceKey, + expect.any(Object) + ); + expect(mockSupabaseClient.from).toHaveBeenCalledWith('memos'); + expect(mockSupabaseClient.select).toHaveBeenCalledWith('id'); + expect(mockSupabaseClient.limit).toHaveBeenCalledWith(1); + expect(request.isServiceAuth).toBe(true); + expect(request.serviceKey).toBe(serviceKey); + }); + + it('should throw UnauthorizedException when no authorization header', async () => { + const context = createMockExecutionContext({}); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('No authorization header provided') + ); + }); + + it('should throw UnauthorizedException for invalid token type', async () => { + const context = createMockExecutionContext({ + authorization: 'Basic invalidtoken', + }); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('Invalid token type') + ); + }); + + it('should throw UnauthorizedException when no token provided', async () => { + const context = createMockExecutionContext({ + authorization: 'Bearer ', + }); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('No token provided') + ); + }); + + it('should throw UnauthorizedException when Supabase validation fails', async () => { + const serviceKey = 'invalid-service-key'; + 
mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key'; + if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key'; + if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co'; + return null; + }); + + mockSupabaseClient.limit.mockResolvedValue({ + error: { message: 'Invalid service key', code: 'PGRST301' }, + }); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, + }); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('Invalid service key') + ); + }); + + it('should throw UnauthorizedException when Supabase client throws error', async () => { + const serviceKey = 'error-service-key'; + mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return 'memoro-key'; + if (key === 'SUPABASE_SERVICE_KEY') return 'supabase-key'; + if (key === 'MEMORO_SUPABASE_URL') return 'https://example.supabase.co'; + return null; + }); + + mockSupabaseClient.limit.mockRejectedValue(new Error('Network error')); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, + }); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('Invalid service key') + ); + }); + + it('should handle edge case with empty Bearer token', async () => { + const context = createMockExecutionContext({ + authorization: 'Bearer', + }); + + await expect(guard.canActivate(context)).rejects.toThrow( + new UnauthorizedException('No token provided') + ); + }); + + it('should handle multiple spaces in authorization header', async () => { + const serviceKey = 'valid-memoro-service-key'; + mockConfigService.get.mockImplementation((key: string) => { + if (key === 'MEMORO_SUPABASE_SERVICE_KEY') return serviceKey; + return null; + }); + + const context = createMockExecutionContext({ + authorization: `Bearer ${serviceKey}`, // Normal spacing + }); 
+ + const request = context.switchToHttp().getRequest(); + const result = await guard.canActivate(context); + + expect(result).toBe(true); + expect(request.isServiceAuth).toBe(true); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/guards/service-auth.guard.ts b/apps/memoro/apps/backend/src/guards/service-auth.guard.ts new file mode 100644 index 000000000..f03a55994 --- /dev/null +++ b/apps/memoro/apps/backend/src/guards/service-auth.guard.ts @@ -0,0 +1,65 @@ +import { Injectable, CanActivate, ExecutionContext, UnauthorizedException } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; + +@Injectable() +export class ServiceAuthGuard implements CanActivate { + constructor(private configService: ConfigService) {} + + async canActivate(context: ExecutionContext): Promise<boolean> { + const request = context.switchToHttp().getRequest(); + const authHeader = request.headers.authorization; + + if (!authHeader) { + throw new UnauthorizedException('No authorization header provided'); + } + + const [type, token] = authHeader.split(' '); + + if (type !== 'Bearer') { + throw new UnauthorizedException('Invalid token type'); + } + + if (!token) { + throw new UnauthorizedException('No token provided'); + } + + // Check if the token is the service role key + // Accept both MEMORO_SUPABASE_SERVICE_KEY and SUPABASE_SERVICE_KEY for compatibility + const memoroServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + const supabaseServiceKey = this.configService.get('SUPABASE_SERVICE_KEY'); + + if (token === memoroServiceKey || token === supabaseServiceKey) { + // This is a valid service-to-service request + // Attach a service identifier to the request + request.isServiceAuth = true; + request.serviceKey = token; + return true; + } + + // Optionally, validate the token with Supabase to ensure it's a valid service key + try { + const supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL'); +
const supabase = createClient(supabaseUrl, token, { + auth: { + autoRefreshToken: false, + persistSession: false, + }, + }); + + // Try to access a protected resource to validate the service key + const { error } = await supabase.from('memos').select('id').limit(1); + + if (!error) { + // Valid service key + request.isServiceAuth = true; + request.serviceKey = token; + return true; + } + } catch (error) { + // Token validation failed + } + + throw new UnauthorizedException('Invalid service key'); + } +} diff --git a/apps/memoro/apps/backend/src/health/health.controller.ts b/apps/memoro/apps/backend/src/health/health.controller.ts new file mode 100644 index 000000000..560713236 --- /dev/null +++ b/apps/memoro/apps/backend/src/health/health.controller.ts @@ -0,0 +1,36 @@ +import { Controller, Get } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; + +@Controller('health') +export class HealthController { + constructor(private readonly configService: ConfigService) {} + + @Get() + checkHealth() { + // Log debug info when health check is called + console.error('[HEALTH CHECK DEBUG] Environment check:'); + console.error( + '[HEALTH CHECK DEBUG] AUDIO_MICROSERVICE_URL from env:', + process.env.AUDIO_MICROSERVICE_URL + ); + console.error( + '[HEALTH CHECK DEBUG] AUDIO_MICROSERVICE_URL from ConfigService:', + this.configService.get('AUDIO_MICROSERVICE_URL') + ); + console.error('[HEALTH CHECK DEBUG] NODE_ENV:', process.env.NODE_ENV); + + return { + status: 'ok', + timestamp: new Date().toISOString(), + service: 'memoro-service', + debug: { + nodeEnv: process.env.NODE_ENV, + audioServiceUrl: this.configService.get('AUDIO_MICROSERVICE_URL'), + audioServiceUrlEnv: process.env.AUDIO_MICROSERVICE_URL, + port: process.env.PORT || 3001, + cwd: process.cwd(), + nodeVersion: process.version, + }, + }; + } +} diff --git a/apps/memoro/apps/backend/src/health/health.module.ts b/apps/memoro/apps/backend/src/health/health.module.ts new file mode 100644 index 
000000000..a61d8b044 --- /dev/null +++ b/apps/memoro/apps/backend/src/health/health.module.ts @@ -0,0 +1,7 @@ +import { Module } from '@nestjs/common'; +import { HealthController } from './health.controller'; + +@Module({ + controllers: [HealthController], +}) +export class HealthModule {} diff --git a/apps/memoro/apps/backend/src/interfaces/memoro.interfaces.ts b/apps/memoro/apps/backend/src/interfaces/memoro.interfaces.ts new file mode 100644 index 000000000..8cb5a974a --- /dev/null +++ b/apps/memoro/apps/backend/src/interfaces/memoro.interfaces.ts @@ -0,0 +1,57 @@ +export interface MemoroSpaceDto { + id: string; + name: string; + owner_id: string; + app_id: string; + roles: any; + credits: number; + created_at: string; + updated_at: string; + memo_count?: number; + isOwner?: boolean; // Added for frontend ownership indication +} + +export interface LinkMemoSpaceDto { + memoId: string; + spaceId: string; +} + +export interface UnlinkMemoSpaceDto { + memoId: string; + spaceId: string; +} + +export interface SuccessResponseDto { + success: boolean; + message?: string; +} + +// Video-related interfaces +export interface VideoMetadata { + width?: number; + height?: number; + fps?: number; + videoCodec?: string; + audioCodec?: string; + audioChannels?: number; + audioSampleRate?: number; + fileSize?: number; + bitrate?: number; + hasAudioTrack?: boolean; +} + +export type MediaType = 'audio' | 'video'; + +export interface ProcessMediaDto { + filePath: string; + duration: number; + spaceId?: string; + blueprintId?: string | null; + recordingLanguages?: string[]; + memoId?: string; + location?: any; + recordingStartedAt?: string; + enableDiarization?: boolean; + mediaType?: MediaType; + videoMetadata?: VideoMetadata; +} diff --git a/apps/memoro/apps/backend/src/interfaces/spaces.interfaces.ts b/apps/memoro/apps/backend/src/interfaces/spaces.interfaces.ts new file mode 100644 index 000000000..88f2a4dbf --- /dev/null +++ 
b/apps/memoro/apps/backend/src/interfaces/spaces.interfaces.ts @@ -0,0 +1,26 @@ +export interface SpaceDto { + id: string; + name: string; + owner_id: string; + app_id: string; + roles: any; + credits: number; + created_at: string; + updated_at: string; + memo_count?: number; // Added for compatibility with MemoroSpaceDto +} + +export interface SpaceInviteDto { + id: string; + space_id: string; + space?: SpaceDto; + user_email: string; + role: string; + status: string; + created_at: string; + updated_at: string; +} + +export interface PendingInvitesResponseDto { + invites: SpaceInviteDto[]; +} diff --git a/apps/memoro/apps/backend/src/main.ts b/apps/memoro/apps/backend/src/main.ts new file mode 100644 index 000000000..9dd5fc31e --- /dev/null +++ b/apps/memoro/apps/backend/src/main.ts @@ -0,0 +1,60 @@ +import { NestFactory } from '@nestjs/core'; +import { AppModule } from './app.module'; +import { HttpExceptionFilter } from './filters/http-exception.filter'; + +async function bootstrap() { + // Debug: Log environment variables at startup - using console.error for Cloud Run visibility + console.error('[STARTUP DEBUG] Environment variables check:'); + console.error('[STARTUP DEBUG] AUDIO_MICROSERVICE_URL:', process.env.AUDIO_MICROSERVICE_URL); + console.error( + '[STARTUP DEBUG] All env vars with AUDIO:', + Object.keys(process.env).filter((key) => key.includes('AUDIO')) + ); + console.error('[STARTUP DEBUG] NODE_ENV:', process.env.NODE_ENV); + console.error('[STARTUP DEBUG] Current working directory:', process.cwd()); + console.error('[STARTUP DEBUG] __dirname:', __dirname); + + const app = await NestFactory.create(AppModule); + app.enableCors(); + + // Apply global exception filter for standardized error responses + app.useGlobalFilters(new HttpExceptionFilter()); + + // Increase request body size limit to handle rich speaker diarization data + // NestJS default is 100KB, our speaker data can be ~150KB+ + const bodyLimit = '10mb'; // More reasonable limit + + 
app.use( + require('express').json({ + limit: bodyLimit, + verify: (req, res, buf, encoding) => { + console.log(`[Body Parser] Received ${buf.length} bytes on ${req.url}`); + if (buf.length > 1024 * 1024) { + // Log if >1MB + console.log( + `[Body Parser] Large payload detected: ${(buf.length / 1024 / 1024).toFixed(2)}MB` + ); + } + }, + }) + ); + + app.use( + require('express').urlencoded({ + extended: true, + limit: bodyLimit, + verify: (req, res, buf, encoding) => { + console.log(`[Body Parser URL] Received ${buf.length} bytes on ${req.url}`); + }, + }) + ); + + console.log(`[NestJS] Body parser configured with limit: ${bodyLimit}`); + + // Use PORT environment variable provided by Cloud Run, default to 3001 + // Using 3001 instead of 3000 to avoid conflicts with the main middleware service in development + const port = process.env.PORT || 3001; + await app.listen(port); + console.log(`Memoro microservice listening on port ${port}`); +} +bootstrap(); diff --git a/apps/memoro/apps/backend/src/meetings/dto/create-bot.dto.ts b/apps/memoro/apps/backend/src/meetings/dto/create-bot.dto.ts new file mode 100644 index 000000000..9e705bd13 --- /dev/null +++ b/apps/memoro/apps/backend/src/meetings/dto/create-bot.dto.ts @@ -0,0 +1,33 @@ +/** + * DTO for creating a meeting bot + */ +export class CreateBotDto { + meeting_url: string; + space_id?: string; +} + +/** + * DTO for stopping a meeting bot + */ +export class StopBotDto { + bot_id: string; +} + +/** + * Query params for listing bots + */ +export class ListBotsQueryDto { + state?: string; + space_id?: string; + limit?: number; + offset?: number; +} + +/** + * Query params for listing recordings + */ +export class ListRecordingsQueryDto { + space_id?: string; + limit?: number; + offset?: number; +} diff --git a/apps/memoro/apps/backend/src/meetings/dto/webhook-event.dto.ts b/apps/memoro/apps/backend/src/meetings/dto/webhook-event.dto.ts new file mode 100644 index 000000000..eb2af7c3a --- /dev/null +++ 
b/apps/memoro/apps/backend/src/meetings/dto/webhook-event.dto.ts @@ -0,0 +1,30 @@ +/** + * DTO for webhook events from meeting-bot service + */ +export class WebhookEventDto { + event: 'recording.completed' | 'recording.failed'; + timestamp: string; + bot: { + id: string; + external_bot_id: string; + user_id: string; + space_id?: string; + state: string; + completed_at?: string; + failed_at?: string; + }; + recording?: { + id: string; + video_url?: string; + audio_url?: string; + file_url?: string; + transcript?: string; + speakers?: object; + duration_seconds?: number; + created_at: string; + }; + error?: { + code: string; + message: string; + }; +} diff --git a/apps/memoro/apps/backend/src/meetings/interfaces/meeting.interfaces.ts b/apps/memoro/apps/backend/src/meetings/interfaces/meeting.interfaces.ts new file mode 100644 index 000000000..28c75f000 --- /dev/null +++ b/apps/memoro/apps/backend/src/meetings/interfaces/meeting.interfaces.ts @@ -0,0 +1,135 @@ +/** + * Meeting Bot States - matches meeting-bot service enum + */ +export type MeetingBotState = + | 'registering' + | 'provisioning' + | 'joining' + | 'waiting_room' + | 'joined' + | 'recording' + | 'recording_error' + | 'leaving' + | 'left' + | 'error'; + +/** + * Meeting platform/vendor + */ +export type MeetingVendor = 'teams' | 'meet' | 'zoom'; + +/** + * Meeting Bot record from database + */ +export interface MeetingBot { + id: string; + created_at: string; + ended_at?: string; + updated_at: string; + vendor: MeetingVendor; + state: MeetingBotState; + meeting_id?: string; + meeting_code: string; + meeting_url?: string; + external_bot_id?: string; + user_id: string; + space_id?: string; + credits_consumed?: number; + duration_seconds?: number; +} + +/** + * Recording record from database + */ +export interface MeetingRecording { + id: string; + created_at: string; + updated_at: string; + file_url?: string; + video_url?: string; + audio_url?: string; + transcript?: string; + duration_seconds?: number; + 
bot_id: string; + user_id: string; + space_id?: string; + // Signed URLs for playback (generated at runtime) + audio_signed_url?: string | null; + video_signed_url?: string | null; +} + +/** + * Create bot request to meeting-bot service + */ +export interface CreateBotRequest { + user_id: string; + space_id?: string; + meeting_url: string; + completed_webhook_url?: string; + failed_webhook_url?: string; +} + +/** + * Create bot response from meeting-bot service + */ +export interface CreateBotResponse { + id: string; + external_bot_id: string; + meeting_url: string; + state: MeetingBotState; + created_at: string; +} + +/** + * Webhook event payload from meeting-bot + */ +export interface MeetingWebhookPayload { + event: 'recording.completed' | 'recording.failed'; + timestamp: string; + bot: { + id: string; + external_bot_id: string; + user_id: string; + space_id?: string; + state: MeetingBotState; + completed_at?: string; + failed_at?: string; + }; + recording?: { + id: string; + video_url?: string; + audio_url?: string; + file_url?: string; + transcript?: string; + speakers?: object; + duration_seconds?: number; + created_at: string; + }; + error?: { + code: string; + message: string; + }; +} + +/** + * Bot with recording details + */ +export interface MeetingBotWithRecording extends MeetingBot { + recording?: MeetingRecording; +} + +/** + * List bots response + */ +export interface ListBotsResponse { + bots: MeetingBotWithRecording[]; + total: number; +} + +/** + * List recordings response + */ +export interface ListRecordingsResponse { + recordings: MeetingRecording[]; + total: number; +} diff --git a/apps/memoro/apps/backend/src/meetings/meetings-proxy.service.ts b/apps/memoro/apps/backend/src/meetings/meetings-proxy.service.ts new file mode 100644 index 000000000..c4d686d63 --- /dev/null +++ b/apps/memoro/apps/backend/src/meetings/meetings-proxy.service.ts @@ -0,0 +1,372 @@ +import { Injectable, Logger, BadRequestException, NotFoundException } from 
'@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient, SupabaseClient } from '@supabase/supabase-js'; +import { + MeetingBot, + MeetingRecording, + MeetingBotWithRecording, + CreateBotRequest, + CreateBotResponse, + MeetingVendor, +} from './interfaces/meeting.interfaces'; + +@Injectable() +export class MeetingsProxyService { + private readonly logger = new Logger(MeetingsProxyService.name); + private readonly meetingBotApiUrl: string; + private readonly meetingBotApiKey: string; + private readonly memoroServiceUrl: string; + private readonly supabaseClient: SupabaseClient; + + constructor(private configService: ConfigService) { + this.meetingBotApiUrl = this.configService.get('MEETING_BOT_API_URL') || ''; + this.meetingBotApiKey = this.configService.get('MEETING_BOT_API_KEY') || ''; + this.memoroServiceUrl = + this.configService.get('MEMORO_SERVICE_URL') || 'http://localhost:3001'; + + // Initialize Supabase client for direct database access + const supabaseUrl = this.configService.get('MEMORO_SUPABASE_URL'); + const supabaseServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (supabaseUrl && supabaseServiceKey) { + this.supabaseClient = createClient(supabaseUrl, supabaseServiceKey); + } + + if (!this.meetingBotApiUrl) { + this.logger.warn('MEETING_BOT_API_URL not configured - meeting bot proxy disabled'); + } + } + + /** + * Detect meeting platform from URL + */ + detectPlatform(meetingUrl: string): MeetingVendor { + if (/teams\.microsoft\.com/i.test(meetingUrl)) return 'teams'; + if (/meet\.google\.com/i.test(meetingUrl)) return 'meet'; + if (/zoom\.(us|com)/i.test(meetingUrl)) return 'zoom'; + throw new BadRequestException('Unsupported meeting platform. 
Use Teams, Meet, or Zoom.'); + } + + /** + * Create a new meeting bot via meeting-bot service + */ + async createBot( + userId: string, + meetingUrl: string, + spaceId?: string + ): Promise<CreateBotResponse> { + this.logger.log(`[createBot] Creating bot for user ${userId}, meeting: ${meetingUrl}`); + + if (!this.meetingBotApiUrl || !this.meetingBotApiKey) { + throw new BadRequestException('Meeting bot service not configured'); + } + + const platform = this.detectPlatform(meetingUrl); + this.logger.log(`[createBot] Detected platform: ${platform}`); + + const webhookBaseUrl = this.memoroServiceUrl.replace(/\/$/, ''); + + const requestBody: CreateBotRequest = { + user_id: userId, + space_id: spaceId, + meeting_url: meetingUrl, + completed_webhook_url: `${webhookBaseUrl}/meetings/webhooks/bot-events`, + failed_webhook_url: `${webhookBaseUrl}/meetings/webhooks/bot-events`, + }; + + try { + const response = await fetch(`${this.meetingBotApiUrl}/bots`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'x-api-key': this.meetingBotApiKey, + }, + body: JSON.stringify(requestBody), + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + this.logger.error(`[createBot] Meeting bot service error: ${response.status}`, errorData); + throw new BadRequestException( + errorData.message || `Failed to create meeting bot: ${response.statusText}` + ); + } + + const result = await response.json(); + this.logger.log(`[createBot] Bot created successfully: ${result.id}`); + + return result; + } catch (error) { + if (error instanceof BadRequestException) throw error; + this.logger.error(`[createBot] Error creating bot:`, error); + throw new BadRequestException('Failed to connect to meeting bot service'); + } + } + + /** + * Stop a meeting bot + */ + async stopBot(botId: string, userId: string): Promise<{ success: boolean }> { + this.logger.log(`[stopBot] Stopping bot ${botId} for user ${userId}`); + + // First verify the bot belongs to this user + 
const bot = await this.getBotById(botId, userId); + if (!bot) { + throw new NotFoundException('Bot not found'); + } + + if (!this.meetingBotApiUrl || !this.meetingBotApiKey) { + throw new BadRequestException('Meeting bot service not configured'); + } + + try { + const response = await fetch(`${this.meetingBotApiUrl}/bots/${botId}/stop`, { + method: 'POST', + headers: { + 'x-api-key': this.meetingBotApiKey, + }, + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + this.logger.error(`[stopBot] Meeting bot service error: ${response.status}`, errorData); + throw new BadRequestException( + errorData.message || `Failed to stop meeting bot: ${response.statusText}` + ); + } + + this.logger.log(`[stopBot] Bot ${botId} stopped successfully`); + return { success: true }; + } catch (error) { + if (error instanceof BadRequestException || error instanceof NotFoundException) throw error; + this.logger.error(`[stopBot] Error stopping bot:`, error); + throw new BadRequestException('Failed to stop meeting bot'); + } + } + + /** + * Get bots for a user directly from database + */ + async getBots( + userId: string, + spaceId?: string, + limit = 50, + offset = 0 + ): Promise<MeetingBotWithRecording[]> { + this.logger.log(`[getBots] Getting bots for user ${userId}`); + + if (!this.supabaseClient) { + throw new BadRequestException('Database not configured'); + } + + let query = this.supabaseClient + .from('meeting_bots') + .select('*') + .eq('user_id', userId) + .order('created_at', { ascending: false }) + .range(offset, offset + limit - 1); + + if (spaceId) { + query = query.eq('space_id', spaceId); + } + + const { data: bots, error } = await query; + + if (error) { + this.logger.error(`[getBots] Database error:`, error); + throw new BadRequestException('Failed to fetch bots'); + } + + // Fetch recordings for each bot + const botsWithRecordings: MeetingBotWithRecording[] = await Promise.all( + (bots || []).map(async (bot: MeetingBot) => { + const { data: recordings } = 
await this.supabaseClient + .from('meeting_recordings') + .select('*') + .eq('bot_id', bot.id) + .limit(1); + + return { + ...bot, + recording: recordings?.[0] || undefined, + }; + }) + ); + + return botsWithRecordings; + } + + /** + * Get a specific bot by ID + */ + async getBotById(botId: string, userId: string): Promise<MeetingBotWithRecording | null> { + this.logger.log(`[getBotById] Getting bot ${botId} for user ${userId}`); + + if (!this.supabaseClient) { + throw new BadRequestException('Database not configured'); + } + + const { data: bot, error } = await this.supabaseClient + .from('meeting_bots') + .select('*') + .eq('id', botId) + .eq('user_id', userId) + .single(); + + if (error || !bot) { + return null; + } + + // Fetch recording + const { data: recordings } = await this.supabaseClient + .from('meeting_recordings') + .select('*') + .eq('bot_id', botId) + .limit(1); + + return { + ...bot, + recording: recordings?.[0] || undefined, + }; + } + + /** + * Generate signed URL for a storage path + */ + private async generateSignedUrl(storagePath: string): Promise<string | null> { + if (!storagePath || !this.supabaseClient) return null; + + try { + const bucket = this.configService.get('USER_UPLOADS_BUCKET') || 'user-uploads'; + const { data, error } = await this.supabaseClient.storage + .from(bucket) + .createSignedUrl(storagePath, 3600); // 1 hour expiry + + if (error) { + this.logger.error(`[generateSignedUrl] Error creating signed URL:`, error); + return null; + } + + return data?.signedUrl || null; + } catch (error) { + this.logger.error(`[generateSignedUrl] Error:`, error); + return null; + } + } + + /** + * Add signed URLs to a recording + */ + private async addSignedUrls(recording: MeetingRecording): Promise<MeetingRecording> { + const [audioSignedUrl, videoSignedUrl] = await Promise.all([ + recording.audio_url ? this.generateSignedUrl(recording.audio_url) : null, + recording.video_url ? 
this.generateSignedUrl(recording.video_url) : null, + ]); + + return { + ...recording, + audio_signed_url: audioSignedUrl, + video_signed_url: videoSignedUrl, + }; + } + + /** + * Get recordings for a user + */ + async getRecordings( + userId: string, + spaceId?: string, + limit = 50, + offset = 0 + ): Promise<MeetingRecording[]> { + this.logger.log(`[getRecordings] Getting recordings for user ${userId}`); + + if (!this.supabaseClient) { + throw new BadRequestException('Database not configured'); + } + + let query = this.supabaseClient + .from('meeting_recordings') + .select('*') + .eq('user_id', userId) + .order('created_at', { ascending: false }) + .range(offset, offset + limit - 1); + + if (spaceId) { + query = query.eq('space_id', spaceId); + } + + const { data: recordings, error } = await query; + + if (error) { + this.logger.error(`[getRecordings] Database error:`, error); + throw new BadRequestException('Failed to fetch recordings'); + } + + // Add signed URLs to each recording + const recordingsWithUrls = await Promise.all( + (recordings || []).map((recording) => this.addSignedUrls(recording)) + ); + + return recordingsWithUrls; + } + + /** + * Get a specific recording by ID + */ + async getRecordingById(recordingId: string, userId: string): Promise<MeetingRecording | null> { + this.logger.log(`[getRecordingById] Getting recording ${recordingId} for user ${userId}`); + + if (!this.supabaseClient) { + throw new BadRequestException('Database not configured'); + } + + const { data: recording, error } = await this.supabaseClient + .from('meeting_recordings') + .select('*') + .eq('id', recordingId) + .eq('user_id', userId) + .single(); + + if (error || !recording) { + return null; + } + + // Add signed URLs + return this.addSignedUrls(recording); + } + + /** + * Update bot with credits consumed + */ + async updateBotCredits( + botId: string, + creditsConsumed: number, + durationSeconds?: number + ): Promise<void> { + this.logger.log(`[updateBotCredits] Updating bot ${botId} with ${creditsConsumed} credits`); + + 
if (!this.supabaseClient) { + throw new BadRequestException('Database not configured'); + } + + const updateData: Partial<MeetingBot> = { + credits_consumed: creditsConsumed, + updated_at: new Date().toISOString(), + }; + + if (durationSeconds !== undefined) { + updateData.duration_seconds = durationSeconds; + } + + const { error } = await this.supabaseClient + .from('meeting_bots') + .update(updateData) + .eq('id', botId); + + if (error) { + this.logger.error(`[updateBotCredits] Failed to update bot:`, error); + throw new BadRequestException('Failed to update bot credits'); + } + } +} diff --git a/apps/memoro/apps/backend/src/meetings/meetings-webhook.controller.ts b/apps/memoro/apps/backend/src/meetings/meetings-webhook.controller.ts new file mode 100644 index 000000000..d5c591a7a --- /dev/null +++ b/apps/memoro/apps/backend/src/meetings/meetings-webhook.controller.ts @@ -0,0 +1,224 @@ +import { + Controller, + Post, + Body, + Headers, + Logger, + BadRequestException, + UnauthorizedException, +} from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createHmac, timingSafeEqual } from 'crypto'; +import { MeetingsProxyService } from './meetings-proxy.service'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { WebhookEventDto } from './dto/webhook-event.dto'; +import { OPERATION_COSTS } from '../credits/pricing.constants'; + +@Controller('meetings/webhooks') +export class MeetingsWebhookController { + private readonly logger = new Logger(MeetingsWebhookController.name); + private readonly webhookSecret: string; + private readonly processedEvents: Set<string> = new Set(); + + constructor( + private readonly configService: ConfigService, + private readonly meetingsProxyService: MeetingsProxyService, + private readonly creditConsumptionService: CreditConsumptionService + ) { + this.webhookSecret = this.configService.get('MEETING_BOT_WEBHOOK_SECRET') || ''; + + if (!this.webhookSecret) { + this.logger.warn( + 
'MEETING_BOT_WEBHOOK_SECRET not configured - webhook signature verification disabled' + ); + } + } + + /** + * Verify HMAC signature from meeting-bot service + */ + private verifySignature(payload: string, signature: string): boolean { + if (!this.webhookSecret) { + this.logger.warn('[verifySignature] No webhook secret configured, skipping verification'); + return true; + } + + if (!signature) { + this.logger.error('[verifySignature] No signature provided'); + return false; + } + + try { + // Signature format: sha256=<hex signature> + const [algorithm, providedSignature] = signature.split('='); + + if (algorithm !== 'sha256' || !providedSignature) { + this.logger.error('[verifySignature] Invalid signature format'); + return false; + } + + const expectedSignature = createHmac('sha256', this.webhookSecret) + .update(payload) + .digest('hex'); + + const providedBuffer = Buffer.from(providedSignature, 'hex'); + const expectedBuffer = Buffer.from(expectedSignature, 'hex'); + + if (providedBuffer.length !== expectedBuffer.length) { + return false; + } + + return timingSafeEqual(providedBuffer, expectedBuffer); + } catch (error) { + this.logger.error('[verifySignature] Error verifying signature:', error); + return false; + } + } + + /** + * Generate idempotency key from event + */ + private getIdempotencyKey(event: WebhookEventDto): string { + return `${event.bot.id}:${event.event}:${event.timestamp}`; + } + + /** + * Calculate credits from duration + */ + private calculateCredits(durationSeconds: number): number { + const minutes = durationSeconds / 60; + const cost = Math.ceil(minutes * OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE); + return Math.max(cost, 2); // Minimum 2 credits + } + + /** + * Handle webhook events from meeting-bot service + * POST /meetings/webhooks/bot-events + */ + @Post('bot-events') + async handleBotEvent( + @Body() payload: WebhookEventDto, + @Headers('x-webhook-signature') signature: string + ) { + this.logger.log(`[handleBotEvent] Received event: 
${payload.event} for bot ${payload.bot.id}`); + + // Verify signature if secret is configured + if (this.webhookSecret) { + const payloadString = JSON.stringify(payload); + if (!this.verifySignature(payloadString, signature)) { + this.logger.error('[handleBotEvent] Invalid webhook signature'); + throw new UnauthorizedException('Invalid webhook signature'); + } + } + + // Check for duplicate event (idempotency) + const idempotencyKey = this.getIdempotencyKey(payload); + if (this.processedEvents.has(idempotencyKey)) { + this.logger.log(`[handleBotEvent] Duplicate event ignored: ${idempotencyKey}`); + return { success: true, message: 'Event already processed' }; + } + + // Mark as processed (in memory - for production use a database) + this.processedEvents.add(idempotencyKey); + + // Clean up old events (keep last 1000) + if (this.processedEvents.size > 1000) { + const iterator = this.processedEvents.values(); + for (let i = 0; i < 500; i++) { + this.processedEvents.delete(iterator.next().value); + } + } + + try { + if (payload.event === 'recording.completed') { + return await this.handleRecordingCompleted(payload); + } else if (payload.event === 'recording.failed') { + return await this.handleRecordingFailed(payload); + } + + this.logger.warn(`[handleBotEvent] Unknown event type: ${payload.event}`); + return { success: true, message: 'Unknown event type' }; + } catch (error) { + this.logger.error(`[handleBotEvent] Error processing event:`, error); + // Remove from processed set so it can be retried + this.processedEvents.delete(idempotencyKey); + throw new BadRequestException('Failed to process webhook event'); + } + } + + /** + * Handle recording completed event + */ + private async handleRecordingCompleted(payload: WebhookEventDto) { + this.logger.log( + `[handleRecordingCompleted] Processing completed recording for bot ${payload.bot.id}` + ); + + const { bot, recording } = payload; + const durationSeconds = recording?.duration_seconds || 0; + const 
creditsToConsume = this.calculateCredits(durationSeconds); + + this.logger.log( + `[handleRecordingCompleted] Duration: ${durationSeconds}s, Credits: ${creditsToConsume}` + ); + + // Consume credits + try { + const creditResult = await this.creditConsumptionService.consumeCreditsForOperation( + bot.user_id, + 'meeting_recording' as any, + creditsToConsume, + `Meeting recording completed - ${Math.round(durationSeconds / 60)} minutes`, + { + botId: bot.id, + recordingId: recording?.id, + durationSeconds, + durationMinutes: Math.round(durationSeconds / 60), + }, + bot.space_id + ); + + this.logger.log(`[handleRecordingCompleted] Credits consumed:`, creditResult); + + // Update bot with credits consumed + await this.meetingsProxyService.updateBotCredits(bot.id, creditsToConsume, durationSeconds); + } catch (error) { + this.logger.error(`[handleRecordingCompleted] Failed to consume credits:`, error); + // Don't fail the webhook - recording is still valid + // Credits can be reconciled manually if needed + } + + return { + success: true, + message: 'Recording completed processed', + botId: bot.id, + recordingId: recording?.id, + creditsConsumed: creditsToConsume, + }; + } + + /** + * Handle recording failed event + */ + private async handleRecordingFailed(payload: WebhookEventDto) { + this.logger.log( + `[handleRecordingFailed] Processing failed recording for bot ${payload.bot.id}` + ); + + const { bot, error } = payload; + + this.logger.error( + `[handleRecordingFailed] Bot ${bot.id} failed: ${error?.code} - ${error?.message}` + ); + + // No credits consumed on failure + // Update bot state if needed (should already be updated by meeting-bot) + + return { + success: true, + message: 'Recording failure processed', + botId: bot.id, + error: error?.message, + }; + } +} diff --git a/apps/memoro/apps/backend/src/meetings/meetings.controller.ts b/apps/memoro/apps/backend/src/meetings/meetings.controller.ts new file mode 100644 index 000000000..609136a97 --- /dev/null +++ 
b/apps/memoro/apps/backend/src/meetings/meetings.controller.ts @@ -0,0 +1,276 @@ +import { + Controller, + Post, + Get, + Param, + Body, + Query, + Req, + UseGuards, + Logger, + NotFoundException, + BadRequestException, +} from '@nestjs/common'; +import { AuthGuard } from '../guards/auth.guard'; +import { User } from '../decorators/user.decorator'; +import { JwtPayload } from '../types/jwt-payload.interface'; +import { MeetingsProxyService } from './meetings-proxy.service'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { CreateBotDto, ListBotsQueryDto, ListRecordingsQueryDto } from './dto/create-bot.dto'; +import { OPERATION_COSTS } from '../credits/pricing.constants'; + +// Minimum credits required to start a recording (5 minutes worth) +const MINIMUM_RECORDING_CREDITS = 10; + +@Controller('meetings') +@UseGuards(AuthGuard) +export class MeetingsController { + private readonly logger = new Logger(MeetingsController.name); + + constructor( + private readonly meetingsProxyService: MeetingsProxyService, + private readonly creditConsumptionService: CreditConsumptionService + ) {} + + /** + * Validate meeting URL format + */ + private validateMeetingUrl(url: string): boolean { + if (!url) return false; + // Check for supported platforms + return /(teams\.microsoft\.com|meet\.google\.com|zoom\.(us|com))/i.test(url); + } + + /** + * Create a new meeting bot to record a meeting + * POST /meetings/bots + */ + @Post('bots') + async createBot(@User() user: JwtPayload & { token: string }, @Body() dto: CreateBotDto) { + this.logger.log(`[createBot] User ${user.sub} creating bot for: ${dto.meeting_url}`); + + // Validate meeting URL + if (!dto.meeting_url || !this.validateMeetingUrl(dto.meeting_url)) { + throw new BadRequestException( + 'Please provide a valid Teams, Google Meet, or Zoom meeting URL' + ); + } + + // Validate user has minimum credits + const creditCheck = await 
this.creditConsumptionService.validateCreditsForOperation( + user.sub, + 'meeting_recording' as any, + MINIMUM_RECORDING_CREDITS, + dto.space_id + ); + + if (!creditCheck.hasEnoughCredits) { + throw new BadRequestException({ + error: 'InsufficientCredits', + message: `Not enough credits to start recording. Need at least ${MINIMUM_RECORDING_CREDITS} credits.`, + details: { + requiredCredits: MINIMUM_RECORDING_CREDITS, + availableCredits: creditCheck.availableCredits, + }, + }); + } + + // Create the bot via meeting-bot service + const bot = await this.meetingsProxyService.createBot(user.sub, dto.meeting_url, dto.space_id); + + return { + success: true, + bot, + message: 'Meeting bot created. It will join the meeting shortly.', + creditInfo: { + estimatedCostPerMinute: OPERATION_COSTS.TRANSCRIPTION_PER_MINUTE, + minimumCredits: MINIMUM_RECORDING_CREDITS, + availableCredits: creditCheck.availableCredits, + }, + }; + } + + /** + * List user's meeting bots + * GET /meetings/bots + */ + @Get('bots') + async listBots(@User() user: JwtPayload & { token: string }, @Query() query: ListBotsQueryDto) { + this.logger.log(`[listBots] User ${user.sub} listing bots`); + + const bots = await this.meetingsProxyService.getBots( + user.sub, + query.space_id, + query.limit || 50, + query.offset || 0 + ); + + return { + success: true, + bots, + total: bots.length, + }; + } + + /** + * Get a specific bot by ID + * GET /meetings/bots/:id + */ + @Get('bots/:id') + async getBot(@User() user: JwtPayload & { token: string }, @Param('id') botId: string) { + this.logger.log(`[getBot] User ${user.sub} getting bot ${botId}`); + + const bot = await this.meetingsProxyService.getBotById(botId, user.sub); + + if (!bot) { + throw new NotFoundException('Bot not found'); + } + + return { + success: true, + bot, + }; + } + + /** + * Stop a meeting bot + * POST /meetings/bots/:id/stop + */ + @Post('bots/:id/stop') + async stopBot(@User() user: JwtPayload & { token: string }, @Param('id') botId: string) { 
+ this.logger.log(`[stopBot] User ${user.sub} stopping bot ${botId}`); + + const result = await this.meetingsProxyService.stopBot(botId, user.sub); + + return { + success: result.success, + message: 'Bot stop signal sent. Recording will end shortly.', + }; + } + + /** + * List user's recordings + * GET /meetings/recordings + */ + @Get('recordings') + async listRecordings( + @User() user: JwtPayload & { token: string }, + @Query() query: ListRecordingsQueryDto + ) { + this.logger.log(`[listRecordings] User ${user.sub} listing recordings`); + + const recordings = await this.meetingsProxyService.getRecordings( + user.sub, + query.space_id, + query.limit || 50, + query.offset || 0 + ); + + return { + success: true, + recordings, + total: recordings.length, + }; + } + + /** + * Get a specific recording by ID + * GET /meetings/recordings/:id + */ + @Get('recordings/:id') + async getRecording( + @User() user: JwtPayload & { token: string }, + @Param('id') recordingId: string + ) { + this.logger.log(`[getRecording] User ${user.sub} getting recording ${recordingId}`); + + const recording = await this.meetingsProxyService.getRecordingById(recordingId, user.sub); + + if (!recording) { + throw new NotFoundException('Recording not found'); + } + + return { + success: true, + recording, + }; + } + + /** + * Convert a meeting recording to a memo + * POST /meetings/recordings/:id/to-memo + * + * This endpoint triggers the full transcription/processing pipeline + * by calling the internal /memoro/process-uploaded-audio endpoint. 
+ */ + @Post('recordings/:id/to-memo') + async convertToMemo( + @User() user: JwtPayload & { token: string }, + @Param('id') recordingId: string, + @Body() body: { blueprintId?: string }, + @Req() req + ) { + const token = req.token; + this.logger.log(`[convertToMemo] User ${user.sub} converting recording ${recordingId} to memo`); + + // Get the recording + const recording = await this.meetingsProxyService.getRecordingById(recordingId, user.sub); + + if (!recording) { + throw new NotFoundException('Recording not found'); + } + + // Use audio_url if available, otherwise fall back to video_url + const filePath = recording.audio_url || recording.video_url; + + if (!filePath) { + throw new BadRequestException('Recording has no audio or video file'); + } + + // Get duration from recording or default to a reasonable estimate + const duration = recording.duration_seconds || 45; + + try { + // Call the internal process-uploaded-audio endpoint + // This handles everything: credit check, memo creation, transcription + const port = process.env.PORT || 3001; + const response = await fetch(`http://localhost:${port}/memoro/process-uploaded-audio`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify({ + filePath: filePath, + duration: duration, + spaceId: recording.space_id, + blueprintId: body.blueprintId, + }), + }); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({ message: 'Unknown error' })); + this.logger.error(`[convertToMemo] Failed to process audio:`, errorData); + throw new BadRequestException(errorData.message || 'Failed to process recording'); + } + + const result = await response.json(); + + this.logger.log( + `[convertToMemo] Created memo ${result.memoId} from recording ${recordingId}` + ); + + return { + success: true, + memoId: result.memoId, + memo: result.memo, + audioPath: filePath, + status: result.status, + message: 'Recording converted to memo. 
Transcription in progress.', + }; + } catch (error) { + this.logger.error(`[convertToMemo] Error converting recording to memo:`, error); + throw new BadRequestException(error.message || 'Failed to convert recording to memo'); + } + } +} diff --git a/apps/memoro/apps/backend/src/meetings/meetings.module.ts b/apps/memoro/apps/backend/src/meetings/meetings.module.ts new file mode 100644 index 000000000..daf3884e4 --- /dev/null +++ b/apps/memoro/apps/backend/src/meetings/meetings.module.ts @@ -0,0 +1,16 @@ +import { Module, forwardRef } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { AuthModule } from '../auth/auth.module'; +import { CreditsModule } from '../credits/credits.module'; +import { MemoroModule } from '../memoro/memoro.module'; +import { MeetingsController } from './meetings.controller'; +import { MeetingsWebhookController } from './meetings-webhook.controller'; +import { MeetingsProxyService } from './meetings-proxy.service'; + +@Module({ + imports: [ConfigModule, AuthModule, CreditsModule, forwardRef(() => MemoroModule)], + controllers: [MeetingsController, MeetingsWebhookController], + providers: [MeetingsProxyService], + exports: [MeetingsProxyService], +}) +export class MeetingsModule {} diff --git a/apps/memoro/apps/backend/src/memoro/__tests__/video-upload.e2e.test.ts b/apps/memoro/apps/backend/src/memoro/__tests__/video-upload.e2e.test.ts new file mode 100644 index 000000000..9c29776da --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/__tests__/video-upload.e2e.test.ts @@ -0,0 +1,585 @@ +/** + * End-to-End Tests for Video Upload and Processing + * Tests the complete flow from upload to transcription + */ + +import { Test, TestingModule } from '@nestjs/testing'; +import { MemoroController } from '../memoro.controller'; +import { MemoroService } from '../memoro.service'; +import { CreditClientService } from '../../credits/credit-client.service'; +import { ConfigService } from '@nestjs/config'; +import { 
BadRequestException, NotFoundException } from '@nestjs/common'; +import { JwtPayload } from '../../types/jwt-payload.interface'; + +describe('Video Upload E2E Tests', () => { + let controller: MemoroController; + let service: MemoroService; + let creditService: CreditClientService; + let configService: ConfigService; + + const mockUser: JwtPayload = { + sub: 'test-user-123', + email: 'test@example.com', + role: 'authenticated', + app_id: 'test-app-id', + aud: 'authenticated', + iat: Date.now(), + exp: Date.now() + 3600000, + }; + + const mockRequest = { + token: 'mock-jwt-token', + }; + + beforeEach(async () => { + const module: TestingModule = await Test.createTestingModule({ + controllers: [MemoroController], + providers: [ + { + provide: MemoroService, + useValue: { + createMemoFromUploadedFile: jest.fn(), + updateMemoTranscriptionStatus: jest.fn(), + validateMemoForAppend: jest.fn(), + updateAppendTranscriptionStatus: jest.fn(), + getSupabaseUrl: jest.fn().mockReturnValue('https://test.supabase.co'), + getSupabaseKey: jest.fn().mockReturnValue('test-key'), + }, + }, + { + provide: CreditClientService, + useValue: { + checkUserCredits: jest.fn(), + checkSpaceCredits: jest.fn(), + checkAndConsumeCredits: jest.fn(), + }, + }, + { + provide: ConfigService, + useValue: { + get: jest.fn((key: string) => { + if (key === 'AUDIO_MICROSERVICE_URL') { + return 'https://audio-microservice.test'; + } + return null; + }), + }, + }, + ], + }).compile(); + + controller = module.get(MemoroController); + service = module.get(MemoroService); + creditService = module.get(CreditClientService); + configService = module.get(ConfigService); + }); + + afterEach(() => { + jest.clearAllMocks(); + }); + + describe('Video file processing', () => { + it('should process MP4 video file successfully', async () => { + const processData = { + filePath: 'user-123/memo-456/video_2025-01-01.mp4', + duration: 180, // 3 minutes + spaceId: undefined, + blueprintId: null, + recordingLanguages: 
['en-US'], + memoId: 'memo-456', + location: null, + recordingStartedAt: new Date().toISOString(), + enableDiarization: true, + mediaType: 'video' as const, + }; + + const mockMemoResult = { + memoId: 'memo-456', + memo: { + id: 'memo-456', + user_id: mockUser.sub, + created_at: new Date().toISOString(), + metadata: { + processing: { + transcription: { status: 'pending' }, + }, + }, + source: { + audio_path: processData.filePath, + duration: processData.duration, + type: 'video', + }, + }, + audioPath: processData.filePath, + }; + + // Mock credit check + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 30, + currentCredits: 100, + creditType: 'user', + }); + + // Mock memo creation + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue(mockMemoResult); + + const result = await controller.processUploadedAudio(mockUser, processData, mockRequest); + + expect(result).toMatchObject({ + success: true, + memoId: 'memo-456', + status: 'processing', + mediaType: 'video', + }); + + expect(service.createMemoFromUploadedFile).toHaveBeenCalledWith( + mockUser.sub, + processData.filePath, + processData.duration, + processData.spaceId, + processData.blueprintId, + processData.memoId, + mockRequest.token, + processData.recordingStartedAt, + processData.location, + 'video', + undefined + ); + }); + + it('should auto-detect video type from file extension', async () => { + const processData = { + filePath: 'user-123/memo-456/recording.mov', + duration: 240, // 4 minutes + mediaType: undefined, // Not specified + }; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 40, + currentCredits: 100, + creditType: 'user', + }); + + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({ + memoId: 'memo-456', + memo: { id: 'memo-456' } as any, + audioPath: processData.filePath, + }); + + const result = await 
controller.processUploadedAudio(mockUser, processData, mockRequest); + + expect(result.mediaType).toBe('video'); + }); + + it('should handle various video formats', async () => { + const videoFormats = [ + { ext: 'mp4', mime: 'video/mp4' }, + { ext: 'mov', mime: 'video/quicktime' }, + { ext: 'avi', mime: 'video/x-msvideo' }, + { ext: 'mkv', mime: 'video/x-matroska' }, + { ext: 'webm', mime: 'video/webm' }, + { ext: 'm4v', mime: 'video/x-m4v' }, + ]; + + for (const format of videoFormats) { + const processData = { + filePath: `user-123/memo-456/video.${format.ext}`, + duration: 180, + mediaType: 'video' as const, + }; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 30, + currentCredits: 100, + creditType: 'user', + }); + + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({ + memoId: `memo-${format.ext}`, + memo: { id: `memo-${format.ext}` } as any, + audioPath: processData.filePath, + }); + + const result = await controller.processUploadedAudio(mockUser, processData, mockRequest); + + expect(result.success).toBe(true); + expect(result.mediaType).toBe('video'); + } + }); + }); + + describe('Credit validation for video files', () => { + it('should calculate credits correctly for video duration', async () => { + const processData = { + filePath: 'user-123/memo-456/long-video.mp4', + duration: 7200, // 2 hours = 120 minutes + mediaType: 'video' as const, + }; + + // 120 minutes * 100 credits/60 minutes = 200 credits + const expectedCredits = 200; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: expectedCredits, + currentCredits: 500, + creditType: 'user', + }); + + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({ + memoId: 'memo-456', + memo: { id: 'memo-456' } as any, + audioPath: processData.filePath, + }); + + await controller.processUploadedAudio(mockUser, processData, mockRequest); + + 
expect(creditService.checkUserCredits).toHaveBeenCalledWith( + mockUser.sub, + expectedCredits, + mockRequest.token + ); + }); + + it('should reject video upload with insufficient credits', async () => { + const processData = { + filePath: 'user-123/memo-456/video.mp4', + duration: 3600, // 1 hour + mediaType: 'video' as const, + }; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: false, + requiredCredits: 100, + currentCredits: 50, + creditType: 'user', + }); + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow('Insufficient credits'); + }); + }); + + describe('Error handling', () => { + it('should reject invalid file path', async () => { + const processData = { + filePath: '', + duration: 180, + mediaType: 'video' as const, + }; + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should reject invalid duration', async () => { + const processData = { + filePath: 'user-123/memo-456/video.mp4', + duration: 0, + mediaType: 'video' as const, + }; + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should reject negative duration', async () => { + const processData = { + filePath: 'user-123/memo-456/video.mp4', + duration: -100, + mediaType: 'video' as const, + }; + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should reject unsupported file type', async () => { + const processData = { + filePath: 'user-123/memo-456/document.pdf', + duration: 180, + mediaType: undefined, + }; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 30, + currentCredits: 100, + creditType: 'user', + }); + + await expect( + 
controller.processUploadedAudio(mockUser, processData, mockRequest)
+      ).rejects.toThrow(BadRequestException);
+    });
+  });
+
+  describe('Large file handling', () => {
+    it('should process large video files with batch transcription', async () => {
+      const processData = {
+        filePath: 'user-123/memo-456/long-presentation.mp4',
+        duration: 7200, // 2 hours - should trigger batch processing
+        mediaType: 'video' as const,
+        recordingLanguages: ['en-US'],
+      };
+
+      jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({
+        hasEnoughCredits: true,
+        requiredCredits: 200,
+        currentCredits: 500,
+        creditType: 'user',
+      });
+
+      jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({
+        memoId: 'memo-456',
+        memo: {
+          id: 'memo-456',
+          source: {
+            audio_path: processData.filePath,
+            duration: processData.duration,
+            type: 'video',
+          },
+        } as any,
+        audioPath: processData.filePath,
+      });
+
+      const result = await controller.processUploadedAudio(mockUser, processData, mockRequest);
+
+      expect(result.success).toBe(true);
+      expect(result.estimatedDuration).toBe(120); // 7200 seconds = 120 minutes
+    });
+
+    it('should handle file size limits', async () => {
+      const processData = {
+        filePath: 'user-123/memo-456/huge-video.mp4',
+        duration: 86400, // 24 hours
+        mediaType: 'video' as const,
+      };
+
+      jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({
+        hasEnoughCredits: true,
+        requiredCredits: 2400, // 24 hours * 100 credits/hour = 2400 credits
+        currentCredits: 3000,
+        creditType: 'user',
+      });
+
+      jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({
+        memoId: 'memo-456',
+        memo: { id: 'memo-456' } as any,
+        audioPath: processData.filePath,
+      });
+
+      const result = await controller.processUploadedAudio(mockUser, processData, mockRequest);
+
+      expect(result.success).toBe(true);
+      expect(result.estimatedDuration).toBe(1440); // 24 hours in minutes
+    });
+  });
+
+  describe('Append video recording', () => {
+    it('should append video recording to existing
memo', async () => {
+      const appendData = {
+        memoId: 'existing-memo-123',
+        filePath: 'user-123/memo-123/additional-video.mp4',
+        duration: 120,
+        enableDiarization: true,
+      };
+
+      const mockMemo = {
+        id: 'existing-memo-123',
+        user_id: mockUser.sub,
+        metadata: {
+          spaceId: undefined,
+        },
+        source: {
+          transcript: 'Original transcript',
+          audio_path: 'user-123/memo-123/original-audio.m4a',
+          duration: 180,
+        },
+      };
+
+      jest.spyOn(service, 'validateMemoForAppend').mockResolvedValue(mockMemo as any);
+
+      jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({
+        hasEnoughCredits: true,
+        requiredCredits: 20,
+        currentCredits: 100,
+        creditType: 'user',
+      });
+
+      const result = await controller.appendTranscription(mockUser, appendData, mockRequest);
+
+      expect(result).toMatchObject({
+        success: true,
+        memoId: 'existing-memo-123',
+        status: 'processing',
+      });
+
+      expect(service.validateMemoForAppend).toHaveBeenCalledWith(
+        mockUser.sub,
+        appendData.memoId,
+        mockRequest.token
+      );
+    });
+
+    it('should reject append to non-existent memo', async () => {
+      const appendData = {
+        memoId: 'non-existent-memo',
+        filePath: 'user-123/memo-123/video.mp4',
+        duration: 120,
+      };
+
+      jest.spyOn(service, 'validateMemoForAppend').mockResolvedValue(null);
+
+      await expect(
+        controller.appendTranscription(mockUser, appendData, mockRequest)
+      ).rejects.toThrow(NotFoundException);
+    });
+  });
+
+  describe('Performance tests', () => {
+    it('should handle concurrent video uploads', async () => {
+      const uploadPromises = Array(5)
+        .fill(null)
+        .map((_, index) => {
+          const processData = {
+            filePath: `user-123/memo-${index}/video.mp4`,
+            duration: 180,
+            mediaType: 'video' as const,
+          };
+
+          jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({
+            hasEnoughCredits: true,
+            requiredCredits: 30,
+            currentCredits: 500,
+            creditType: 'user',
+          });
+
+          // Queue one resolved value per upload: the .map() callback runs synchronously
+          // for all five uploads before any controller call reaches the service, so a
+          // plain mockResolvedValue would be overwritten by the last iteration and every
+          // upload would resolve with the same memoId.
+          jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValueOnce({
+            memoId: `memo-${index}`,
+            memo: { id:
`memo-${index}` } as any, + audioPath: processData.filePath, + }); + + return controller.processUploadedAudio(mockUser, processData, mockRequest); + }); + + const results = await Promise.all(uploadPromises); + + expect(results).toHaveLength(5); + results.forEach((result, index) => { + expect(result.success).toBe(true); + expect(result.memoId).toBe(`memo-${index}`); + }); + }); + + it('should process video upload within acceptable time', async () => { + const processData = { + filePath: 'user-123/memo-456/video.mp4', + duration: 300, + mediaType: 'video' as const, + }; + + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 50, + currentCredits: 100, + creditType: 'user', + }); + + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({ + memoId: 'memo-456', + memo: { id: 'memo-456' } as any, + audioPath: processData.filePath, + }); + + const start = Date.now(); + await controller.processUploadedAudio(mockUser, processData, mockRequest); + const duration = Date.now() - start; + + // Should complete within 1 second (excluding actual transcription) + expect(duration).toBeLessThan(1000); + }); + }); + + describe('Security tests', () => { + it('should validate user authorization', async () => { + const processData = { + filePath: 'other-user/memo-456/video.mp4', + duration: 180, + mediaType: 'video' as const, + }; + + // Service should validate that file path matches user + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 30, + currentCredits: 100, + creditType: 'user', + }); + + jest.spyOn(service, 'createMemoFromUploadedFile').mockResolvedValue({ + memoId: 'memo-456', + memo: { id: 'memo-456' } as any, + audioPath: processData.filePath, + }); + + await controller.processUploadedAudio(mockUser, processData, mockRequest); + + // Verify service was called with correct user ID + 
expect(service.createMemoFromUploadedFile).toHaveBeenCalledWith( + mockUser.sub, + expect.any(String), + expect.any(Number), + expect.anything(), + expect.anything(), + expect.anything(), + expect.any(String), + expect.anything(), + expect.anything(), + expect.any(String), + expect.anything() + ); + }); + + it('should reject path traversal attempts', async () => { + const maliciousPaths = [ + '../../../etc/passwd', + '..\\..\\..\\windows\\system32', + 'user-123/../other-user/memo-456/video.mp4', + ]; + + for (const path of maliciousPaths) { + const processData = { + filePath: path, + duration: 180, + mediaType: 'video' as const, + }; + + // The service should handle path validation + jest.spyOn(creditService, 'checkUserCredits').mockResolvedValue({ + hasEnoughCredits: true, + requiredCredits: 30, + currentCredits: 100, + creditType: 'user', + }); + + // Assuming service validates and rejects + jest + .spyOn(service, 'createMemoFromUploadedFile') + .mockRejectedValue(new BadRequestException('Invalid file path')); + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(); + } + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/memoro/combine-memos.controller.ts b/apps/memoro/apps/backend/src/memoro/combine-memos.controller.ts new file mode 100644 index 000000000..d18c5d34c --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/combine-memos.controller.ts @@ -0,0 +1,114 @@ +import { Controller, Post, Body, UseGuards, BadRequestException } from '@nestjs/common'; +import { AuthGuard } from '../guards/auth.guard'; +import { User } from '../decorators/user.decorator'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { calculateMemoCombineCost } from '../credits/pricing.constants'; +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; + +class CombineMemosDto { + memo_ids: string[]; + blueprint_id: string; + custom_prompt?: string; +} + 
+@Controller('memoro/combine-memos') +@UseGuards(AuthGuard) +export class CombineMemosController { + constructor(private readonly creditConsumptionService: CreditConsumptionService) {} + + @Post() + async processCombineMemos(@User() user: any, @Body() dto: CombineMemosDto) { + if (!dto.memo_ids || !Array.isArray(dto.memo_ids) || dto.memo_ids.length === 0) { + throw new BadRequestException('memo_ids must be a non-empty array'); + } + + if (!dto.blueprint_id) { + throw new BadRequestException('blueprint_id is required'); + } + + // Extract token from request + const token = user.token; + const requiredCredits = calculateMemoCombineCost(dto.memo_ids.length); + + try { + // Check and consume credits first using centralized service + const creditResult = await this.creditConsumptionService.consumeCombinationCredits( + user.sub, + dto.memo_ids, + undefined, // spaceId + token + ); + + if (!creditResult.success) { + throw new BadRequestException(creditResult.message || creditResult.error); + } + + // Now call the Supabase Edge Function to do the AI processing + // Create an authenticated Supabase client with the user's JWT token + const { createClient } = require('@supabase/supabase-js'); + const supabaseUrl = + process.env.MEMORO_SUPABASE_URL || 'https://npgifbrwhftlbrbaglmi.supabase.co'; + const anonKey = process.env.MEMORO_SUPABASE_ANON_KEY; + + // Create a Supabase client with user's JWT token + const supabase = createClient(supabaseUrl, anonKey, { + global: { + headers: { + Authorization: `Bearer ${token}`, + }, + }, + }); + + console.log('CombineMemosController - Calling Supabase function with authenticated client'); + + const requestBody: any = { + memo_ids: dto.memo_ids, + blueprint_id: dto.blueprint_id, + }; + + if (dto.custom_prompt) { + requestBody.custom_prompt = dto.custom_prompt; + } + + console.log('CombineMemosController - Request body:', requestBody); + + const { data, error: functionError } = await supabase.functions.invoke('combine-memos', { + body: 
requestBody, + }); + + if (functionError) { + console.error('CombineMemosController - Supabase function error:', functionError); + throw new Error(`Memo combination failed: ${functionError.message}`); + } + + console.log('CombineMemosController - Supabase function result:', data); + + const result = data; + + return { + success: true, + memo_id: result.memo_id, + combined_memos_count: result.combined_memos_count, + processed_prompts_count: result.processed_prompts_count, + total_prompts_count: result.total_prompts_count, + creditsConsumed: requiredCredits, + creditType: creditResult.creditType, + }; + } catch (error) { + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + + if (error.message?.includes('insufficient credits')) { + // Fallback for any legacy insufficient credit errors + throw new InsufficientCreditsException({ + requiredCredits, + availableCredits: 0, + creditType: 'user', // CombineMemosDto doesn't have space_id + operation: 'combination', + }); + } + throw new BadRequestException(error.message); + } + } +} diff --git a/apps/memoro/apps/backend/src/memoro/memoro-service.controller.ts b/apps/memoro/apps/backend/src/memoro/memoro-service.controller.ts new file mode 100644 index 000000000..1972475fa --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/memoro-service.controller.ts @@ -0,0 +1,131 @@ +import { Controller, Post, Body, UseGuards, HttpCode, BadRequestException } from '@nestjs/common'; +import { MemoroService } from './memoro.service'; +import { ServiceAuthGuard } from '../guards/service-auth.guard'; + +@Controller('memoro/service') +@UseGuards(ServiceAuthGuard) +export class MemoroServiceController { + constructor(private readonly memoroService: MemoroService) {} + + /** + * Service-to-service endpoint for transcription completion + * Used by audio microservice with service role key authentication + */ + @Post('transcription-completed') + @HttpCode(200) + async 
handleTranscriptionCompleted( + @Body() + callbackData: { + memoId: string; + userId: string; + transcriptionResult?: any; + route?: 'fast' | 'batch'; + success?: boolean; + error?: string; + } + ) { + try { + console.log(`[Service Auth] Received transcription callback for memo ${callbackData.memoId}`); + + if (!callbackData.memoId || !callbackData.userId) { + throw new BadRequestException('memoId and userId are required'); + } + + // Process the transcription using the existing method + // The service will use service role key and validate ownership + const result = await this.memoroService.handleTranscriptionCompleted( + callbackData.memoId, + callbackData.userId, + callbackData.transcriptionResult, + callbackData.route, + callbackData.success, + callbackData.error, + null // No user token - will use service role key + ); + + return result; + } catch (error) { + console.error(`[Service Auth] Error processing callback:`, error); + throw new BadRequestException(`Failed to process transcription callback: ${error.message}`); + } + } + + /** + * Service-to-service endpoint for append transcription completion + */ + @Post('append-transcription-completed') + @HttpCode(200) + async handleAppendTranscriptionCompleted( + @Body() + callbackData: { + memoId: string; + userId: string; + transcriptionResult?: any; + route?: 'fast' | 'batch'; + success?: boolean; + error?: string; + recordingIndex?: number; + } + ) { + try { + console.log( + `[Service Auth] Received append transcription callback for memo ${callbackData.memoId}` + ); + + if (!callbackData.memoId || !callbackData.userId) { + throw new BadRequestException('memoId and userId are required'); + } + + const result = await this.memoroService.handleAppendTranscriptionCompleted( + callbackData.memoId, + callbackData.userId, + callbackData.transcriptionResult, + callbackData.route || 'fast', + callbackData.success !== false, + callbackData.error || null, + null // No user token - will use service role key + ); + + return 
result; + } catch (error) { + console.error(`[Service Auth] Error processing append callback:`, error); + throw new BadRequestException( + `Failed to process append transcription callback: ${error.message}` + ); + } + } + + /** + * Service-to-service endpoint for updating batch metadata + */ + @Post('update-batch-metadata') + @HttpCode(200) + async updateBatchMetadata( + @Body() + body: { + memoId: string; + jobId: string; + batchTranscription: boolean; + userId?: string; // Optional for backward compatibility + } + ) { + try { + const { memoId, jobId, batchTranscription, userId } = body; + console.log(`[Service Auth] Updating batch metadata for memo ${memoId}`); + + const result = await this.memoroService.updateBatchMetadataByMemoId( + memoId, + jobId, + batchTranscription, + null, // No user token needed - service will use service role key + undefined, // userSelectedLanguages + userId // Pass userId for ownership validation + ); + + return result; + } catch (error) { + console.error(`[Service Auth] Error updating batch metadata:`, error); + throw new BadRequestException(`Failed to update batch metadata: ${error.message}`); + } + } +} diff --git a/apps/memoro/apps/backend/src/memoro/memoro.controller.spec.ts b/apps/memoro/apps/backend/src/memoro/memoro.controller.spec.ts new file mode 100644 index 000000000..a1459e2d3 --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/memoro.controller.spec.ts @@ -0,0 +1,644 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { MemoroController } from './memoro.controller'; +import { MemoroService } from './memoro.service'; +import { CreditClientService, CreditCheckResponse } from '../credits/credit-client.service'; +import { AuthGuard } from '../guards/auth.guard'; +import { BadRequestException, ForbiddenException, NotFoundException } from '@nestjs/common'; +import { JwtPayload } from '../types/jwt-payload.interface'; +import { MemoroSpaceDto } from '../interfaces/memoro.interfaces'; + 
+describe('MemoroController', () => {
+  let controller: MemoroController;
+  let memoroService: jest.Mocked<MemoroService>;
+  let creditClientService: jest.Mocked<CreditClientService>;
+
+  const mockUser: JwtPayload = {
+    sub: 'user-123',
+    email: 'test@example.com',
+    role: 'user',
+    app_id: 'test-app-id',
+    aud: 'authenticated',
+    iat: 1234567890,
+    exp: 1234567890 + 3600,
+  };
+
+  const mockRequest = {
+    token: 'mock-token',
+    user: mockUser,
+  };
+
+  beforeEach(async () => {
+    const module: TestingModule = await Test.createTestingModule({
+      controllers: [MemoroController],
+      providers: [
+        {
+          provide: MemoroService,
+          useValue: {
+            getMemoroSpaces: jest.fn(),
+            createMemoroSpace: jest.fn(),
+            getMemoroSpaceDetails: jest.fn(),
+            deleteMemoroSpace: jest.fn(),
+            linkMemoToSpace: jest.fn(),
+            unlinkMemoFromSpace: jest.fn(),
+            getSpaceInvites: jest.fn(),
+            inviteUserToSpace: jest.fn(),
+            resendSpaceInvite: jest.fn(),
+            cancelSpaceInvite: jest.fn(),
+            getSpaceMemos: jest.fn(),
+            leaveSpace: jest.fn(),
+            getUserPendingInvites: jest.fn(),
+            acceptSpaceInvite: jest.fn(),
+            declineSpaceInvite: jest.fn(),
+            validateMemoForRetry: jest.fn(),
+            retryTranscription: jest.fn(),
+            retryHeadline: jest.fn(),
+            createMemoFromUploadedFile: jest.fn(),
+            updateMemoWithJobId: jest.fn(),
+            updateMemoTranscriptionStatus: jest.fn(),
+            updateBatchMetadataByMemoId: jest.fn(),
+            handleTranscriptionCompleted: jest.fn(),
+            getSupabaseUrl: jest.fn().mockReturnValue('https://test.supabase.co'),
+            getSupabaseKey: jest.fn().mockReturnValue('test-key'),
+          },
+        },
+        {
+          provide: CreditClientService,
+          useValue: {
+            checkSpaceCredits: jest.fn(),
+            checkUserCredits: jest.fn(),
+            checkAndConsumeCredits: jest.fn(),
+          },
+        },
+      ],
+    })
+      .overrideGuard(AuthGuard)
+      .useValue({
+        canActivate: jest.fn().mockReturnValue(true),
+      })
+      .compile();
+
+    controller = module.get(MemoroController);
+    memoroService = module.get(MemoroService) as jest.Mocked<MemoroService>;
+    creditClientService = module.get(CreditClientService) as jest.Mocked<CreditClientService>;
+  });
+
+  it('should be defined', () => {
expect(controller).toBeDefined(); + }); + + describe('getMemoroSpaces', () => { + it('should return spaces in correct format', async () => { + const mockSpaces: MemoroSpaceDto[] = [ + { + id: '1', + name: 'Space 1', + owner_id: 'user-123', + app_id: 'test-app-id', + roles: {}, + credits: 100, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }, + { + id: '2', + name: 'Space 2', + owner_id: 'user-456', + app_id: 'test-app-id', + roles: {}, + credits: 200, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }, + ]; + memoroService.getMemoroSpaces.mockResolvedValue(mockSpaces); + + const result = await controller.getMemoroSpaces(mockUser, mockRequest); + + expect(result).toEqual({ spaces: mockSpaces }); + expect(memoroService.getMemoroSpaces).toHaveBeenCalledWith(mockUser.sub, mockRequest.token); + }); + }); + + describe('createMemoroSpace', () => { + it('should create a space successfully', async () => { + const spaceName = 'New Space'; + const mockSpace: MemoroSpaceDto = { + id: 'space-123', + name: spaceName, + owner_id: mockUser.sub, + app_id: 'test-app-id', + roles: {}, + credits: 0, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }; + memoroService.createMemoroSpace.mockResolvedValue(mockSpace); + + const result = await controller.createMemoroSpace(mockUser, spaceName, mockRequest); + + expect(result).toEqual({ space: mockSpace }); + expect(memoroService.createMemoroSpace).toHaveBeenCalledWith( + mockUser.sub, + spaceName, + mockRequest.token + ); + }); + + it('should throw BadRequestException if name is missing', async () => { + await expect(controller.createMemoroSpace(mockUser, '', mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + }); + + describe('getMemoroSpaceDetails', () => { + it('should return space details when response already has space property', async () => { + const spaceId = 'space-123'; + const mockSpace: MemoroSpaceDto = { + id: spaceId, + name: 
'Test Space', + owner_id: mockUser.sub, + app_id: 'test-app-id', + roles: {}, + credits: 100, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }; + const mockSpaceData = { space: mockSpace }; + memoroService.getMemoroSpaceDetails.mockResolvedValue(mockSpaceData); + + const result = await controller.getMemoroSpaceDetails(mockUser, spaceId, mockRequest); + + expect(result).toEqual(mockSpaceData); + expect(memoroService.getMemoroSpaceDetails).toHaveBeenCalledWith( + mockUser.sub, + spaceId, + mockRequest.token + ); + }); + + it('should wrap response in space property if not already wrapped', async () => { + const spaceId = 'space-123'; + const mockSpaceData: MemoroSpaceDto = { + id: spaceId, + name: 'Test Space', + owner_id: mockUser.sub, + app_id: 'test-app-id', + roles: {}, + credits: 100, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }; + memoroService.getMemoroSpaceDetails.mockResolvedValue(mockSpaceData); + + const result = await controller.getMemoroSpaceDetails(mockUser, spaceId, mockRequest); + + expect(result).toEqual({ space: mockSpaceData }); + }); + + it('should throw BadRequestException if spaceId is missing', async () => { + await expect(controller.getMemoroSpaceDetails(mockUser, '', mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + }); + + describe('deleteMemoroSpace', () => { + it('should delete space successfully', async () => { + const spaceId = 'space-123'; + memoroService.deleteMemoroSpace.mockResolvedValue(undefined); + + const result = await controller.deleteMemoroSpace(mockUser, spaceId, mockRequest); + + expect(result).toEqual({ success: true, message: 'Space deleted successfully' }); + expect(memoroService.deleteMemoroSpace).toHaveBeenCalledWith( + mockUser.sub, + spaceId, + mockRequest.token + ); + }); + + it('should throw NotFoundException when space not found', async () => { + const spaceId = 'space-123'; + memoroService.deleteMemoroSpace.mockRejectedValue(new 
NotFoundException()); + + await expect(controller.deleteMemoroSpace(mockUser, spaceId, mockRequest)).rejects.toThrow( + NotFoundException + ); + }); + + it('should throw ForbiddenException when user lacks permission', async () => { + const spaceId = 'space-123'; + memoroService.deleteMemoroSpace.mockRejectedValue(new ForbiddenException()); + + await expect(controller.deleteMemoroSpace(mockUser, spaceId, mockRequest)).rejects.toThrow( + ForbiddenException + ); + }); + + it('should throw BadRequestException for other errors', async () => { + const spaceId = 'space-123'; + memoroService.deleteMemoroSpace.mockRejectedValue(new Error('Unknown error')); + + await expect(controller.deleteMemoroSpace(mockUser, spaceId, mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + }); + + describe('linkMemoToSpace', () => { + it('should link memo to space successfully', async () => { + const linkData = { memoId: 'memo-123', spaceId: 'space-123' }; + const mockResult = { success: true, message: 'Memo linked successfully' }; + memoroService.linkMemoToSpace.mockResolvedValue(mockResult); + + const result = await controller.linkMemoToSpace(mockUser, linkData, mockRequest); + + expect(result).toEqual(mockResult); + expect(memoroService.linkMemoToSpace).toHaveBeenCalledWith( + mockUser.sub, + linkData, + mockRequest.token + ); + }); + + it('should return success even if service throws error', async () => { + const linkData = { memoId: 'memo-123', spaceId: 'space-123' }; + memoroService.linkMemoToSpace.mockRejectedValue(new Error('Space not found')); + + const result = await controller.linkMemoToSpace(mockUser, linkData, mockRequest); + + expect(result).toEqual({ + success: true, + message: 'Memo linked to space (direct DB operation)', + }); + }); + }); + + describe('inviteUserToSpace', () => { + it('should invite user successfully', async () => { + const spaceId = 'space-123'; + const inviteData = { email: 'invitee@example.com', role: 'member' }; + const mockResult = { 
inviteId: 'invite-123' }; + memoroService.inviteUserToSpace.mockResolvedValue(mockResult); + + const result = await controller.inviteUserToSpace(mockUser, spaceId, inviteData, mockRequest); + + expect(result).toEqual({ + success: true, + message: `Successfully invited ${inviteData.email} to the space`, + inviteId: mockResult.inviteId, + }); + expect(memoroService.inviteUserToSpace).toHaveBeenCalledWith( + mockUser.sub, + spaceId, + inviteData.email, + inviteData.role, + mockRequest.token + ); + }); + + it('should throw BadRequestException if spaceId is missing', async () => { + const inviteData = { email: 'invitee@example.com', role: 'member' }; + + await expect( + controller.inviteUserToSpace(mockUser, '', inviteData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should throw BadRequestException if email is missing', async () => { + const spaceId = 'space-123'; + const inviteData = { email: '', role: 'member' }; + + await expect( + controller.inviteUserToSpace(mockUser, spaceId, inviteData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should throw BadRequestException if role is missing', async () => { + const spaceId = 'space-123'; + const inviteData = { email: 'invitee@example.com', role: '' }; + + await expect( + controller.inviteUserToSpace(mockUser, spaceId, inviteData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + }); + + describe('checkTranscriptionCredits', () => { + it('should check user credits successfully', async () => { + const checkData = { durationSeconds: 300 }; + const mockCreditCheck: CreditCheckResponse = { + hasEnoughCredits: true, + requiredCredits: 5, + currentCredits: 100, + creditType: 'user', + }; + creditClientService.checkUserCredits.mockResolvedValue(mockCreditCheck); + + const result = await controller.checkTranscriptionCredits(mockUser, checkData, mockRequest); + + expect(result).toEqual({ + hasEnoughCredits: true, + requiredCredits: 5, + currentCredits: 100, + creditType: 
'user', + durationMinutes: 5, + estimatedCostPerHour: 100, + }); + }); + + it('should check space credits when spaceId provided', async () => { + const checkData = { durationSeconds: 300, spaceId: 'space-123' }; + const mockCreditCheck: CreditCheckResponse = { + hasEnoughCredits: true, + requiredCredits: 5, + currentCredits: 200, + creditType: 'space', + }; + creditClientService.checkSpaceCredits.mockResolvedValue(mockCreditCheck); + + const result = await controller.checkTranscriptionCredits(mockUser, checkData, mockRequest); + + expect(result.creditType).toBe('space'); + expect(creditClientService.checkSpaceCredits).toHaveBeenCalledWith( + 'space-123', + 10, + mockRequest.token + ); + }); + + it('should fall back to user credits if space credit check fails', async () => { + const checkData = { durationSeconds: 300, spaceId: 'space-123' }; + const mockUserCreditCheck: CreditCheckResponse = { + hasEnoughCredits: true, + requiredCredits: 5, + currentCredits: 100, + creditType: 'user', + }; + creditClientService.checkSpaceCredits.mockRejectedValue(new Error('Space not found')); + creditClientService.checkUserCredits.mockResolvedValue(mockUserCreditCheck); + + const result = await controller.checkTranscriptionCredits(mockUser, checkData, mockRequest); + + expect(result.creditType).toBe('user'); + expect(creditClientService.checkUserCredits).toHaveBeenCalled(); + }); + + it('should throw BadRequestException for invalid duration', async () => { + const checkData = { durationSeconds: -1 }; + + await expect( + controller.checkTranscriptionCredits(mockUser, checkData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + }); + + describe('processUploadedAudio', () => { + it('should process uploaded audio successfully', async () => { + const processData = { + filePath: '/uploads/audio.mp3', + duration: 300, + spaceId: 'space-123', + enableDiarization: true, + }; + + const mockCreditCheck: CreditCheckResponse = { + hasEnoughCredits: true, + requiredCredits: 5, + 
currentCredits: 100, + creditType: 'space', + }; + + const mockMemoResult = { + memoId: 'memo-123', + audioPath: processData.filePath, + memo: { id: 'memo-123', created_at: '2025-06-26T17:00:00Z' }, + }; + + creditClientService.checkSpaceCredits.mockResolvedValue(mockCreditCheck); + memoroService.createMemoFromUploadedFile.mockResolvedValue(mockMemoResult); + + const result = await controller.processUploadedAudio(mockUser, processData, mockRequest); + + expect(result).toEqual({ + success: true, + memoId: 'memo-123', + memo: { id: 'memo-123', created_at: '2025-06-26T17:00:00Z' }, + filePath: processData.filePath, + status: 'processing', + estimatedDuration: 5, + message: 'Memo created successfully. Transcription in progress.', + estimatedCredits: 10, + }); + }); + + it('should throw ForbiddenException for insufficient credits', async () => { + const processData = { + filePath: '/uploads/audio.mp3', + duration: 300, + }; + + const mockCreditCheck: CreditCheckResponse = { + hasEnoughCredits: false, + requiredCredits: 5, + currentCredits: 2, + creditType: 'user', + }; + + creditClientService.checkUserCredits.mockResolvedValue(mockCreditCheck); + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(ForbiddenException); + }); + + it('should throw BadRequestException for missing file path', async () => { + const processData = { + filePath: '', + duration: 300, + }; + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should throw BadRequestException for invalid duration', async () => { + const processData = { + filePath: '/uploads/audio.mp3', + duration: 0, + }; + + await expect( + controller.processUploadedAudio(mockUser, processData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + + it('should use batch transcription for Swiss German', async () => { + const processData = { + filePath: '/uploads/audio.mp3', + 
duration: 300, + recordingLanguages: ['de-CH'], + }; + + const mockCreditCheck: CreditCheckResponse = { + hasEnoughCredits: true, + requiredCredits: 5, + currentCredits: 100, + creditType: 'user', + }; + + const mockMemoResult = { + memoId: 'memo-123', + audioPath: processData.filePath, + memo: { id: 'memo-123', created_at: '2025-06-26T17:00:00Z' }, + }; + + creditClientService.checkUserCredits.mockResolvedValue(mockCreditCheck); + memoroService.createMemoFromUploadedFile.mockResolvedValue(mockMemoResult); + + const result = await controller.processUploadedAudio(mockUser, processData, mockRequest); + + expect(result.success).toBe(true); + // Note: The actual transcription happens asynchronously + }); + }); + + describe('retryTranscription', () => { + it('should retry transcription successfully', async () => { + const retryData = { memoId: 'memo-123' }; + const mockMemo = { + id: 'memo-123', + metadata: { + processing: { + transcription: { + status: 'error', + retryAttempts: 1, + }, + }, + }, + }; + + memoroService.validateMemoForRetry.mockResolvedValue(mockMemo); + memoroService.retryTranscription.mockResolvedValue({ success: true }); + + const result = await controller.retryTranscription(mockUser, retryData, mockRequest); + + expect(result).toEqual({ + success: true, + message: 'Transcription retry initiated successfully', + memoId: 'memo-123', + retryAttempt: 2, + }); + }); + + it('should throw BadRequestException if memoId is missing', async () => { + const retryData = { memoId: '' }; + + await expect(controller.retryTranscription(mockUser, retryData, mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + + it('should throw NotFoundException if memo not found', async () => { + const retryData = { memoId: 'memo-123' }; + memoroService.validateMemoForRetry.mockResolvedValue(null); + + await expect(controller.retryTranscription(mockUser, retryData, mockRequest)).rejects.toThrow( + NotFoundException + ); + }); + + it('should throw BadRequestException if 
transcription did not fail', async () => { + const retryData = { memoId: 'memo-123' }; + const mockMemo = { + id: 'memo-123', + metadata: { + processing: { + transcription: { + status: 'completed', + }, + }, + }, + }; + + memoroService.validateMemoForRetry.mockResolvedValue(mockMemo); + + await expect(controller.retryTranscription(mockUser, retryData, mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + + it('should throw BadRequestException if max retries exceeded', async () => { + const retryData = { memoId: 'memo-123' }; + const mockMemo = { + id: 'memo-123', + metadata: { + processing: { + transcription: { + status: 'error', + retryAttempts: 3, + }, + }, + }, + }; + + memoroService.validateMemoForRetry.mockResolvedValue(mockMemo); + + await expect(controller.retryTranscription(mockUser, retryData, mockRequest)).rejects.toThrow( + BadRequestException + ); + }); + }); + + describe('handleTranscriptionCompleted', () => { + it('should handle transcription completion successfully', async () => { + const callbackData = { + memoId: 'memo-123', + userId: 'user-123', + transcriptionResult: { text: 'Hello world' }, + route: 'fast' as const, + success: true, + }; + + const mockResult = { success: true, message: 'Transcription completed' }; + memoroService.handleTranscriptionCompleted.mockResolvedValue(mockResult); + + const result = await controller.handleTranscriptionCompleted(callbackData, mockRequest); + + expect(result).toEqual(mockResult); + expect(memoroService.handleTranscriptionCompleted).toHaveBeenCalledWith( + callbackData.memoId, + callbackData.userId, + callbackData.transcriptionResult, + callbackData.route, + callbackData.success, + undefined, + mockRequest.token + ); + }); + + it('should throw BadRequestException if memoId is missing', async () => { + const callbackData = { + memoId: '', + userId: 'user-123', + }; + + await expect( + controller.handleTranscriptionCompleted(callbackData, mockRequest) + ).rejects.toThrow(BadRequestException); + 
}); + + it('should throw BadRequestException if userId is missing', async () => { + const callbackData = { + memoId: 'memo-123', + userId: '', + }; + + await expect( + controller.handleTranscriptionCompleted(callbackData, mockRequest) + ).rejects.toThrow(BadRequestException); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/memoro/memoro.controller.ts b/apps/memoro/apps/backend/src/memoro/memoro.controller.ts new file mode 100644 index 000000000..8314462c4 --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/memoro.controller.ts @@ -0,0 +1,1963 @@ +import { + Controller, + Get, + Post, + Delete, + Body, + Param, + UseGuards, + Req, + BadRequestException, + HttpCode, + NotFoundException, + ForbiddenException, +} from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import { MemoroService } from './memoro.service'; +import { User } from '../decorators/user.decorator'; +import { AuthGuard } from '../guards/auth.guard'; +import { JwtPayload } from '../types/jwt-payload.interface'; +import { LinkMemoSpaceDto, UnlinkMemoSpaceDto } from '../interfaces/memoro.interfaces'; +import { CreditClientService } from '../credits/credit-client.service'; +import { calculateTranscriptionCost, getOperationCost } from '../credits/pricing.constants'; +import { + InsufficientCreditsException, + isInsufficientCreditsError, +} from '../errors/insufficient-credits.error'; + +@Controller('memoro') +@UseGuards(AuthGuard) +export class MemoroController { + constructor( + private readonly memoroService: MemoroService, + private readonly creditClientService: CreditClientService, + private readonly configService: ConfigService + ) {} + + @Get('spaces') + async getMemoroSpaces(@User() user: JwtPayload, @Req() req) { + const token = req.token; // This is set by the AuthGuard + console.log('Token: ', token); + console.log('User: ', user); + + // Get spaces from service + const spaces = await 
this.memoroService.getMemoroSpaces(user.sub, token); + + // Return in the format expected by the frontend: { spaces: [...] } + return { spaces }; + } + + @Post('spaces') + async createMemoroSpace(@User() user: JwtPayload, @Body('name') name: string, @Req() req) { + if (!name) { + throw new BadRequestException('Space name is required'); + } + const token = req.token; + + // Get the created space from service + const space = await this.memoroService.createMemoroSpace(user.sub, name, token); + + // Return in the format expected by the frontend: { space: {...} } + return { space }; + } + + @Get('spaces/:id') + async getMemoroSpaceDetails(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + const token = req.token; + + // Get the space details from service + const spaceData = await this.memoroService.getMemoroSpaceDetails(user.sub, spaceId, token); + + // Check if the response already contains a space property to avoid double nesting + if (spaceData && typeof spaceData === 'object' && 'space' in spaceData) { + // The response is already in the format { space: {...} } + return spaceData; + } else { + // Wrap the space data in the format expected by the frontend: { space: {...} } + return { space: spaceData }; + } + } + + @Delete('spaces/:id') + async deleteMemoroSpace(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + const token = req.token; + + try { + // Call service to delete the space + const result = await this.memoroService.deleteMemoroSpace(user.sub, spaceId, token); + + // Return success response + return { + success: true, + message: 'Space deleted successfully', + }; + } catch (error) { + if (error instanceof NotFoundException) { + throw error; + } else if (error instanceof ForbiddenException) { + throw error; + } else { + throw new BadRequestException(`Failed to 
delete space: ${error.message}`); + } + } + } + + @Post('link-memo') + @HttpCode(200) + async linkMemoToSpace( + @User() user: JwtPayload, + @Body() linkMemoSpaceDto: LinkMemoSpaceDto, + @Req() req + ) { + const token = req.token; + try { + return await this.memoroService.linkMemoToSpace(user.sub, linkMemoSpaceDto, token); + } catch (error) { + console.warn(`Error in linkMemoToSpace: ${error.message}`); + // Return success even if there was an error with verification + // This is a temporary workaround for spaces that exist in Memoro but not in mana-core + return { success: true, message: 'Memo linked to space (direct DB operation)' }; + } + } + + @Post('unlink-memo') + @HttpCode(200) + async unlinkMemoFromSpace( + @User() user: JwtPayload, + @Body() unlinkMemoSpaceDto: UnlinkMemoSpaceDto, + @Req() req + ) { + const token = req.token; + try { + return await this.memoroService.unlinkMemoFromSpace(user.sub, unlinkMemoSpaceDto, token); + } catch (error) { + console.warn(`Error in unlinkMemoFromSpace: ${error.message}`); + + // Create a direct database connection for emergency fallback + try { + // Get values from DTO + const { memoId, spaceId } = unlinkMemoSpaceDto; + + // Get MEMORO_SUPABASE_URL and MEMORO_SUPABASE_ANON_KEY + const memoroUrl = this.memoroService.getSupabaseUrl(); + const memoroKey = this.memoroService.getSupabaseKey(); + + if (!memoroUrl || !memoroKey) { + throw new Error('Missing Supabase credentials'); + } + + // Create a direct Supabase client + const supabase = createClient(memoroUrl, memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Delete the link directly + console.log( + `[EMERGENCY FALLBACK] Deleting memo_spaces link directly: memo_id=${memoId}, space_id=${spaceId}` + ); + const { error: deleteError } = await supabase + .from('memo_spaces') + .delete() + .eq('memo_id', memoId) + .eq('space_id', spaceId); + + if (deleteError) { + console.error(`Direct DB delete error: ${deleteError.message}`); + throw 
deleteError; + } + + return { + success: true, + message: 'Memo unlinked from space (emergency direct DB operation)', + }; + } catch (dbError) { + console.error(`Failed direct DB operation: ${dbError.message}`); + // Finally return success to avoid UI confusion + return { success: true, message: 'Attempted to unlink memo (frontend should refresh)' }; + } + } + } + + @Get('spaces/:id/invites') + async getSpaceInvites(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the spaces service to get invites for the space + const result = await this.memoroService.getSpaceInvites(spaceId, token); + + // Return the invites in the format expected by the frontend + return result; + } catch (error) { + console.error(`Failed to get invites for space ${spaceId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new Error(`Failed to get invites for space ${spaceId}: ${error.message}`); + } + } + + @Post('spaces/:id/invite') + async inviteUserToSpace( + @User() user: JwtPayload, + @Param('id') spaceId: string, + @Body() inviteData: { email: string; role: string }, + @Req() req + ) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + + if (!inviteData.email) { + throw new BadRequestException('Email is required'); + } + + if (!inviteData.role) { + throw new BadRequestException('Role is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to invite the user to the space + const result = await this.memoroService.inviteUserToSpace( + user.sub, + spaceId, + inviteData.email, + inviteData.role, + token + ); + + // Return a success response + return { + success: true, + message: `Successfully invited 
${inviteData.email} to the space`, + inviteId: result.inviteId, + }; + } catch (error) { + console.error(`Failed to invite user to space ${spaceId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new Error(`Failed to invite user to space: ${error.message}`); + } + } + + @Post('spaces/invites/:inviteId/resend') + async resendInvite(@User() user: JwtPayload, @Param('inviteId') inviteId: string, @Req() req) { + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to resend the invitation + await this.memoroService.resendSpaceInvite(user.sub, inviteId, token); + + // Return a success response + return { + success: true, + message: 'Invitation resent successfully', + }; + } catch (error) { + console.error(`Failed to resend invitation ${inviteId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new Error(`Failed to resend invitation: ${error.message}`); + } + } + + @Delete('spaces/invites/:inviteId') + async cancelInvite(@User() user: JwtPayload, @Param('inviteId') inviteId: string, @Req() req) { + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to cancel the invitation + await this.memoroService.cancelSpaceInvite(user.sub, inviteId, token); + + // Return a success response + return { + success: true, + message: 'Invitation canceled successfully', + }; + } catch (error) { + console.error(`Failed to cancel invitation ${inviteId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + 
throw new Error(`Failed to cancel invitation: ${error.message}`); + } + } + + @Get('spaces/:id/memos') + async getSpaceMemos(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + const token = req.token; + return this.memoroService.getSpaceMemos(user.sub, spaceId, token); + } + + @Post('spaces/:id/leave') + async leaveSpace(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + const token = req.token; + + try { + // Call the spaces service to leave the space + const result = await this.memoroService.leaveSpace(user.sub, spaceId, token); + + // Return success response + return { + success: true, + message: 'Successfully left the space', + }; + } catch (error) { + console.error(`Error in leaveSpace: ${error.message}`); + if (error instanceof NotFoundException) { + throw error; + } else if (error instanceof ForbiddenException) { + throw error; + } else { + throw new BadRequestException(`Failed to leave space: ${error.message}`); + } + } + } + + @Get('invites/pending') + async getPendingInvites(@User() user: JwtPayload, @Req() req) { + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to get pending invites for the user + const result = await this.memoroService.getUserPendingInvites(user.sub, token); + console.log('INVITES PENDING RES: ', result); + // Return the invites in the format expected by the frontend + return result; + } catch (error) { + console.error(`Failed to get pending invites:`, error); + if (error instanceof NotFoundException) { + // Return empty invites array instead of throwing an error if not found + return { invites: [] }; + } else if (error instanceof ForbiddenException || error instanceof BadRequestException) { + throw error; + } else { + // For any other errors, log but return empty array + console.error(`Error 
fetching pending invites: ${error.message}`); + return { invites: [] }; + } + } + } + + @Post('spaces/invites/accept') + async acceptInvite( + @User() user: JwtPayload, + @Body() acceptData: { inviteId: string }, + @Req() req + ) { + if (!acceptData.inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to accept the invitation + await this.memoroService.acceptSpaceInvite(user.sub, acceptData.inviteId, token); + + // Return a success response + return { + success: true, + message: 'Invitation accepted successfully', + }; + } catch (error) { + console.error(`Failed to accept invitation ${acceptData.inviteId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new Error(`Failed to accept invitation: ${error.message}`); + } + } + + @Post('spaces/invites/decline') + async declineInvite( + @User() user: JwtPayload, + @Body() declineData: { inviteId: string }, + @Req() req + ) { + if (!declineData.inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + try { + const token = req.token; // This is set by the AuthGuard + + // Call the service to decline the invitation + await this.memoroService.declineSpaceInvite(user.sub, declineData.inviteId, token); + + // Return a success response + return { + success: true, + message: 'Invitation declined successfully', + }; + } catch (error) { + console.error(`Failed to decline invitation ${declineData.inviteId}:`, error); + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new Error(`Failed to decline invitation: ${error.message}`); + } + } + + @Post('credits/check-transcription') + async checkTranscriptionCredits( + @User() user: JwtPayload, + @Body() + checkData: { + 
durationSeconds: number; + spaceId?: string; + }, + @Req() req + ) { + const { durationSeconds, spaceId } = checkData; + const token = req.token; + + if (!durationSeconds || durationSeconds <= 0) { + throw new BadRequestException('Valid duration in seconds is required'); + } + + try { + const requiredCredits = calculateTranscriptionCost(durationSeconds); + + let creditCheck; + if (spaceId) { + // Try space credits first, then fall back to user credits + try { + creditCheck = await this.creditClientService.checkSpaceCredits( + spaceId, + requiredCredits, + token + ); + } catch (error) { + console.warn(`Space credit check failed, checking user credits: ${error.message}`); + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + } else { + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + + return { + hasEnoughCredits: creditCheck.hasEnoughCredits, + requiredCredits: creditCheck.requiredCredits, + currentCredits: creditCheck.currentCredits, + creditType: creditCheck.creditType, + durationMinutes: Math.ceil(durationSeconds / 60), + estimatedCostPerHour: 100, + }; + } catch (error) { + console.error('Error checking transcription credits:', error); + + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + + if (error instanceof ForbiddenException || error instanceof BadRequestException) { + throw error; + } + + throw new BadRequestException(`Failed to check credits: ${error.message}`); + } + } + + @Post('credits/consume-transcription') + async consumeTranscriptionCredits( + @User() user: JwtPayload, + @Body() + consumeData: { + durationSeconds: number; + spaceId?: string; + memoId?: string; + description?: string; + }, + @Req() req + ) { + const { durationSeconds, spaceId, memoId, description } = consumeData; + const token = req.token; + + if (!durationSeconds || durationSeconds <= 0) { 
+ throw new BadRequestException('Valid duration in seconds is required'); + } + + try { + const requiredCredits = calculateTranscriptionCost(durationSeconds); + const operationDescription = + description || + `Transcription for ${Math.ceil(durationSeconds / 60)} minutes of audio${memoId ? ` (Memo: ${memoId})` : ''}`; + + const result = await this.creditClientService.checkAndConsumeCredits( + user.sub, + requiredCredits, + token, + { + spaceId, + description: operationDescription, + operation: 'transcription', + } + ); + + return { + success: true, + message: result.message, + creditsConsumed: requiredCredits, + creditType: result.creditType, + durationMinutes: Math.ceil(durationSeconds / 60), + }; + } catch (error) { + console.error('Error consuming transcription credits:', error); + + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + + if (error instanceof ForbiddenException) { + throw error; + } + + if (error instanceof BadRequestException) { + throw error; + } + + throw new BadRequestException(`Failed to consume credits: ${error.message}`); + } + } + + @Post('credits/consume-operation') + async consumeOperationCredits( + @User() user: JwtPayload, + @Body() + consumeData: { + operation: + | 'HEADLINE_GENERATION' + | 'MEMORY_CREATION' + | 'BLUEPRINT_PROCESSING' + | 'MEMO_SHARING' + | 'SPACE_OPERATION'; + spaceId?: string; + memoId?: string; + description?: string; + }, + @Req() req + ) { + const { operation, spaceId, memoId, description } = consumeData; + const token = req.token; + + if (!operation) { + throw new BadRequestException('Operation type is required'); + } + + try { + const requiredCredits = getOperationCost(operation); + const operationDescription = + description || + `${operation.toLowerCase().replace('_', ' ')}${memoId ? 
` (Memo: ${memoId})` : ''}`; + + const result = await this.creditClientService.checkAndConsumeCredits( + user.sub, + requiredCredits, + token, + { + spaceId, + description: operationDescription, + operation: operation.toLowerCase(), + } + ); + + return { + success: true, + message: result.message, + creditsConsumed: requiredCredits, + creditType: result.creditType, + operation, + }; + } catch (error) { + console.error(`Error consuming credits for ${operation}:`, error); + + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + + if (error instanceof ForbiddenException) { + throw error; + } + + if (error instanceof BadRequestException) { + throw error; + } + + throw new BadRequestException(`Failed to consume credits: ${error.message}`); + } + } + + @Post('retry-transcription') + async retryTranscription( + @User() user: JwtPayload, + @Body() + retryData: { + memoId: string; + }, + @Req() req + ) { + const { memoId } = retryData; + const token = req.token; + + if (!memoId) { + throw new BadRequestException('Memo ID is required'); + } + + try { + // Get memo and validate it belongs to user and failed transcription + const memo = await this.memoroService.validateMemoForRetry(user.sub, memoId, token); + + if (!memo) { + throw new NotFoundException('Memo not found or access denied'); + } + + // Check if transcription actually failed + if (memo.metadata?.processing?.transcription?.status !== 'error') { + throw new BadRequestException('Memo transcription did not fail - retry not needed'); + } + + // Check retry limits (max 3 retries) + const currentAttempts = memo.metadata?.processing?.transcription?.retryAttempts || 0; + if (currentAttempts >= 3) { + throw new BadRequestException('Maximum retry attempts (3) exceeded for this memo'); + } + + // Call the retry logic + const result = await this.memoroService.retryTranscription( + user.sub, + memoId, + token, + currentAttempts + 1 + ); + + return { + success: 
true, + message: 'Transcription retry initiated successfully', + memoId, + retryAttempt: currentAttempts + 1, + }; + } catch (error) { + console.error(`Error retrying transcription for memo ${memoId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof BadRequestException || + error instanceof ForbiddenException + ) { + throw error; + } + + throw new BadRequestException(`Failed to retry transcription: ${error.message}`); + } + } + + @Post('retry-headline') + async retryHeadline( + @User() user: JwtPayload, + @Body() + retryData: { + memoId: string; + }, + @Req() req + ) { + const { memoId } = retryData; + const token = req.token; + + if (!memoId) { + throw new BadRequestException('Memo ID is required'); + } + + try { + // Get memo and validate + const memo = await this.memoroService.validateMemoForRetry(user.sub, memoId, token); + + if (!memo) { + throw new NotFoundException('Memo not found or access denied'); + } + + // Check if headline generation actually failed + if (memo.metadata?.processing?.headline_and_intro?.status !== 'error') { + throw new BadRequestException('Memo headline generation did not fail - retry not needed'); + } + + // Check retry limits + const currentAttempts = memo.metadata?.processing?.headline_and_intro?.retryAttempts || 0; + if (currentAttempts >= 3) { + throw new BadRequestException( + 'Maximum retry attempts (3) exceeded for headline generation' + ); + } + + // Call the retry logic + const result = await this.memoroService.retryHeadline( + user.sub, + memoId, + token, + currentAttempts + 1 + ); + + return { + success: true, + message: 'Headline generation retry initiated successfully', + memoId, + retryAttempt: currentAttempts + 1, + }; + } catch (error) { + console.error(`Error retrying headline for memo ${memoId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof BadRequestException || + error instanceof ForbiddenException + ) { + throw error; + } + + throw new 
BadRequestException(`Failed to retry headline generation: ${error.message}`); + } + } + + @Post('reprocess-memo') + async reprocessMemo( + @User() user: JwtPayload, + @Body() + reprocessData: { + memoId: string; + recordingLanguages?: string[]; + recordingStartedAt?: string; + blueprintId?: string | null; + enableDiarization?: boolean; + }, + @Req() req + ) { + const { memoId, recordingLanguages, recordingStartedAt, blueprintId, enableDiarization } = + reprocessData; + const token = req.token; + + if (!memoId) { + throw new BadRequestException('Memo ID is required'); + } + + try { + // Get memo and validate ownership + const memo = await this.memoroService.getMemoForReprocessing(user.sub, memoId, token); + + if (!memo) { + throw new NotFoundException('Memo not found or access denied'); + } + + // Get audio path and duration from original memo + const audioPath = memo.source?.audio_path; + const duration = memo.source?.duration; + + if (!audioPath || !duration) { + throw new BadRequestException('Original memo does not have audio information'); + } + + // Check credits before processing + const requiredCredits = calculateTranscriptionCost(duration); + + let creditCheck; + // Check if original memo was in a space + const spaceId = memo.space_id; + + if (spaceId) { + try { + creditCheck = await this.creditClientService.checkSpaceCredits( + spaceId, + requiredCredits, + token + ); + } catch (error) { + console.warn(`Space credit check failed, checking user credits: ${error.message}`); + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + } else { + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + + if (!creditCheck.hasEnoughCredits) { + throw new InsufficientCreditsException({ + requiredCredits: creditCheck.requiredCredits, + availableCredits: creditCheck.currentCredits, + creditType: creditCheck.creditType, + operation: 'transcription', + 
spaceId, + }); + } + + // Create a new memo with the same audio file but new parameters + const memoResult = await this.memoroService.createMemoFromUploadedFile( + user.sub, + audioPath, + duration, + spaceId, + blueprintId, + undefined, // Generate new memo ID + token, + recordingStartedAt || memo.created_at, // Use provided date or original creation date + memo.metadata?.address ? { address: memo.metadata.address } : undefined + ); + + const durationMinutes = duration / 60; + + // Check for Swiss German and Austrian German languages + const hasSwissOrAustrianGerman = recordingLanguages?.some( + (lang) => lang === 'de-CH' || lang === 'de-AT' + ); + + const shouldUseFastTranscribe = hasSwissOrAustrianGerman ? false : durationMinutes < 115; + + console.log( + `Reprocessing memo ${memoId} as new memo ${memoResult.memoId}. Duration: ${durationMinutes.toFixed(2)} minutes. Using ${shouldUseFastTranscribe ? 'fast' : 'batch'} transcription.` + ); + + // Start async transcription processing + setImmediate(() => { + this.processTranscriptionAsync( + memoResult.memoId, + audioPath, + duration, + user.sub, + spaceId, + blueprintId, + recordingLanguages || memo.source?.languages || [], + token, + recordingStartedAt || memo.created_at, + enableDiarization !== undefined ? 
enableDiarization : true, + shouldUseFastTranscribe, + hasSwissOrAustrianGerman + ).catch((error) => { + console.error( + `Async transcription failed for reprocessed memo ${memoResult.memoId}:`, + error + ); + // Update memo with error status + this.updateMemoTranscriptionStatus(memoResult.memoId, 'failed', token, { + error: error.message, + timestamp: new Date().toISOString(), + }); + }); + }); + + // Consume credits + await this.creditClientService.checkAndConsumeCredits(user.sub, requiredCredits, token, { + spaceId, + description: `Reprocessing memo ${memoId} as ${memoResult.memoId}`, + operation: 'transcription', + }); + + return { + success: true, + message: 'Memo reprocessing started successfully', + originalMemoId: memoId, + newMemoId: memoResult.memoId, + memo: memoResult.memo, + }; + } catch (error) { + console.error(`Error reprocessing memo ${memoId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof BadRequestException || + error instanceof ForbiddenException || + error instanceof InsufficientCreditsException + ) { + throw error; + } + + throw new BadRequestException(`Failed to reprocess memo: ${error.message}`); + } + } + + @Post('process-uploaded-audio') + async processUploadedAudio( + @User() user: JwtPayload, + @Body() + processData: { + filePath: string; + duration: number; + spaceId?: string; + blueprintId?: string | null; + recordingLanguages?: string[]; + memoId?: string; + location?: any; // Add location data parameter + recordingStartedAt?: string; + enableDiarization?: boolean; + mediaType?: 'audio' | 'video'; // Add media type field + videoMetadata?: any; // Add video metadata field + }, + @Req() req + ) { + const { + filePath, + duration, + spaceId, + blueprintId, + recordingLanguages, + memoId, + location, + recordingStartedAt, + enableDiarization, + mediaType, + videoMetadata, + } = processData; + const token = req.token; + + if (!filePath) { + throw new BadRequestException('File path is required'); + } + + if 
(!duration || duration <= 0) { + throw new BadRequestException('Valid duration is required'); + } + + // Detect media type if not provided + const detectedMediaType = mediaType || this.detectMediaType(filePath); + + if (detectedMediaType === 'unknown') { + throw new BadRequestException( + 'Unsupported file type. Only audio and video files are supported.' + ); + } + + console.log( + `Processing ${detectedMediaType} file: ${filePath}${detectedMediaType === 'video' ? ' (video detected)' : ''}` + ); + + try { + // Check credits before processing + const requiredCredits = calculateTranscriptionCost(duration); + + let creditCheck; + if (spaceId) { + try { + creditCheck = await this.creditClientService.checkSpaceCredits( + spaceId, + requiredCredits, + token + ); + } catch (error) { + console.warn(`Space credit check failed, checking user credits: ${error.message}`); + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + } else { + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + + if (!creditCheck.hasEnoughCredits) { + throw new InsufficientCreditsException({ + requiredCredits: creditCheck.requiredCredits, + availableCredits: creditCheck.currentCredits, + creditType: creditCheck.creditType, + operation: 'transcription', + spaceId, + }); + } + + // Create memo in database + const memoResult = await this.memoroService.createMemoFromUploadedFile( + user.sub, + filePath, + duration, + spaceId, + blueprintId, + memoId, + token, + recordingStartedAt, + location, + detectedMediaType, + videoMetadata + ); + + const durationMinutes = duration / 60; + + // Check for Swiss German and Austrian German languages - always use batch transcription + const hasSwissOrAustrianGerman = recordingLanguages?.some( + (lang) => + lang === 'de-CH' || // Swiss German + lang === 'de-AT' // Austrian German + ); + + let shouldUseFastTranscribe; + + if (hasSwissOrAustrianGerman) { 
+ // Force batch transcription for Swiss German and Austrian German + shouldUseFastTranscribe = false; + console.log( + `Swiss German or Austrian German detected (${recordingLanguages?.join(', ')}). Forcing batch transcription for better accuracy.` + ); + } else { + // Speaker diarization now works correctly in fast transcription (fixed 2025-06-09) + // Use normal routing: fast (<115min) vs batch (≥115min) + shouldUseFastTranscribe = durationMinutes < 115; // Restored normal routing + } + + console.log( + `Audio duration: ${durationMinutes.toFixed(2)} minutes. Using ${shouldUseFastTranscribe ? 'fast' : 'batch'} transcription.` + ); + + // Start async transcription processing + setImmediate(() => { + this.processTranscriptionAsync( + memoResult.memoId, + filePath, + duration, + user.sub, + spaceId, + blueprintId, + recordingLanguages || [], + token, + recordingStartedAt, + enableDiarization, + shouldUseFastTranscribe, + hasSwissOrAustrianGerman + ).catch((error) => { + console.error(`Async transcription failed for memo ${memoResult.memoId}:`, error); + // Update memo with error status + this.updateMemoTranscriptionStatus(memoResult.memoId, 'failed', token, { + error: error.message, + timestamp: new Date().toISOString(), + }); + }); + }); + + // Return immediately with full memo object for state synchronization + return { + success: true, + memoId: memoResult.memoId, + memo: memoResult.memo, // Include full memo object + filePath, + status: 'processing', + estimatedDuration: Math.ceil(duration / 60), + message: `${detectedMediaType === 'video' ? 'Video' : 'Audio'} memo created successfully. 
Transcription in progress.`,
+        estimatedCredits: requiredCredits,
+        mediaType: detectedMediaType,
+      };
+    } catch (error) {
+      console.error('Error processing uploaded audio:', error);
+
+      if (
+        error instanceof InsufficientCreditsException ||
+        error instanceof ForbiddenException ||
+        error instanceof BadRequestException ||
+        error instanceof NotFoundException
+      ) {
+        throw error;
+      }
+
+      throw new BadRequestException(`Failed to process audio: ${error.message}`);
+    }
+  }
+
+  /**
+   * Process transcription asynchronously in the background
+   */
+  private async processTranscriptionAsync(
+    memoId: string,
+    filePath: string,
+    duration: number,
+    userId: string,
+    spaceId: string | undefined,
+    blueprintId: string | null | undefined,
+    recordingLanguages: string[],
+    token: string,
+    recordingStartedAt: string | undefined,
+    enableDiarization: boolean | undefined,
+    shouldUseFastTranscribe: boolean,
+    hasSwissOrAustrianGerman: boolean
+  ): Promise<void> {
+    try {
+      // Update status to processing
+      await this.updateMemoTranscriptionStatus(memoId, 'processing', token);
+
+      // Check if this is a video file
+      const mediaType = this.detectMediaType(filePath);
+
+      if (mediaType === 'video') {
+        console.log(`[processTranscriptionAsync] Detected video file: ${filePath}`);
+
+        try {
+          // Call video processing endpoint
+          const videoResult = await this.callAudioMicroserviceVideoProcessing(
+            filePath,
+            memoId,
+            userId,
+            spaceId,
+            recordingLanguages,
+            token,
+            enableDiarization
+          );
+
+          // Determine the actual processing route from the result
+          let processingRoute =
+            videoResult.route === 'fast' ?
'fast_transcribe_video' : 'batch_transcribe_video'; + + // Store jobId if batch processing was used + if (videoResult.jobId) { + console.log(`Storing jobId ${videoResult.jobId} for video memo ${memoId}`); + await this.memoroService.updateMemoWithJobId( + memoId, + videoResult.jobId, + token, + recordingLanguages + ); + } + + // Update status to completed if fast route, or processing if batch + const finalStatus = videoResult.route === 'fast' ? 'completed' : 'processing'; + await this.updateMemoTranscriptionStatus(memoId, finalStatus, token, { + processingRoute, + source: 'video', + completedAt: videoResult.route === 'fast' ? new Date().toISOString() : undefined, + }); + + console.log( + `Video transcription ${finalStatus} for memo ${memoId} via ${processingRoute}` + ); + } catch (error) { + console.error('Video processing failed:', error); + throw new Error(`Video processing failed: ${error.message}`); + } + return; + } + + // Continue with normal audio processing if not a video + if (shouldUseFastTranscribe) { + // Use new audio microservice with built-in fallback system + try { + const transcribeResult = await this.callAudioMicroserviceRealtimeWithFallback( + filePath, + memoId, + userId, + spaceId, + recordingLanguages, + token, + enableDiarization + ); + + // Determine the actual processing route from the result + let processingRoute = 'fast_transcribe'; + if (transcribeResult.route === 'fast-conversion-retry') { + processingRoute = 'fast_transcribe_converted'; + } else if (transcribeResult.route === 'batch-fallback') { + processingRoute = 'batch_transcribe_fallback'; + // Store jobId if batch processing was used + if (transcribeResult.jobId) { + console.log(`Storing jobId ${transcribeResult.jobId} in memo ${memoId}`); + await this.memoroService.updateMemoWithJobId( + memoId, + transcribeResult.jobId, + token, + recordingLanguages + ); + } + } + + // Update status to completed + await this.updateMemoTranscriptionStatus(memoId, 'completed', token, { + 
processingRoute,
+            completedAt: new Date().toISOString(),
+          });
+
+          console.log(`Transcription completed for memo ${memoId} via ${processingRoute}`);
+        } catch (error) {
+          console.error('Audio microservice transcription with fallback failed:', error);
+          throw new Error(`Transcription failed after all fallback attempts: ${error.message}`);
+        }
+      } else {
+        // Use batch processing for long files
+        try {
+          const batchResult = await this.processBatchFromStoragePath(
+            filePath,
+            userId,
+            spaceId,
+            token,
+            recordingLanguages,
+            memoId,
+            enableDiarization
+          );
+
+          // Store the jobId in memo metadata for webhook callback lookup
+          if (batchResult.jobId) {
+            console.log(`Storing jobId ${batchResult.jobId} in memo ${memoId}`);
+            await this.memoroService.updateMemoWithJobId(
+              memoId,
+              batchResult.jobId,
+              token,
+              recordingLanguages
+            );
+          }
+
+          // Update status to processing (batch will update to completed via webhook)
+          await this.updateMemoTranscriptionStatus(memoId, 'processing', token, {
+            processingRoute: 'batch_transcribe',
+            batchJobId: batchResult.jobId,
+          });
+
+          console.log(
+            `Batch transcription started for memo ${memoId} with jobId ${batchResult.jobId}`
+          );
+        } catch (batchError) {
+          console.error('Batch processing failed:', batchError);
+          throw new Error(`Batch processing failed: ${batchError.message}`);
+        }
+      }
+    } catch (error) {
+      console.error(`Error in processTranscriptionAsync for memo ${memoId}:`, error);
+      throw error;
+    }
+  }
+
+  /**
+   * Update memo transcription status
+   */
+  private async updateMemoTranscriptionStatus(
+    memoId: string,
+    status: 'pending' | 'processing' | 'completed' | 'failed',
+    token: string,
+    additionalData?: any
+  ): Promise<void> {
+    try {
+      // Delegate to the service which has access to the Supabase credentials
+      await this.memoroService.updateMemoTranscriptionStatus(memoId, status, token, additionalData);
+    } catch (error) {
+      console.error(`Error updating transcription status for memo ${memoId}:`, error);
+    }
+  }
+
+  //
REMOVED: Legacy upload-audio endpoint for cleanup + // All uploads now use process-uploaded-audio with direct storage upload + + /** + * Call audio microservice for video processing + */ + private async callAudioMicroserviceVideoProcessing( + videoPath: string, + memoId: string, + userId: string, + spaceId: string | undefined, + recordingLanguages: string[], + token: string, + enableDiarization?: boolean + ) { + const audioServiceUrl = this.configService.get('AUDIO_MICROSERVICE_URL'); + + if (!audioServiceUrl) { + console.error('[CRITICAL ERROR] AUDIO_MICROSERVICE_URL is not configured'); + throw new Error('Missing required configuration: AUDIO_MICROSERVICE_URL'); + } + + const payload = { + videoPath, + memoId, + userId, + spaceId, + recordingLanguages, + enableDiarization: enableDiarization !== false, + }; + + console.log( + `[callAudioMicroserviceVideoProcessing] Processing video: ${audioServiceUrl}/audio/process-video` + ); + console.log(`[VIDEO_PROCESSING] Video path: ${videoPath}`); + + const response = await fetch(`${audioServiceUrl}/audio/process-video`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify(payload), + }); + + if (!response.ok) { + const errorText = await response.text(); + const error = new Error(`Video processing failed: ${response.status} - ${errorText}`); + (error as any).status = response.status; + throw error; + } + + const result = await response.json(); + console.log(`[VIDEO_PROCESSING] Result:`, result); + return result; + } + + /** + * Call audio microservice with built-in fallback system + */ + private async callAudioMicroserviceRealtimeWithFallback( + audioPath: string, + memoId: string, + userId: string, + spaceId: string | undefined, + recordingLanguages: string[], + token: string, + enableDiarization?: boolean, + isAppend?: boolean + ) { + // Debug: Log the raw environment variable and ConfigService value + console.error( + '[CRITICAL DEBUG] 
process.env.AUDIO_MICROSERVICE_URL:', + process.env.AUDIO_MICROSERVICE_URL + ); + console.error( + '[CRITICAL DEBUG] ConfigService.get(AUDIO_MICROSERVICE_URL):', + this.configService.get('AUDIO_MICROSERVICE_URL') + ); + console.error('[CRITICAL DEBUG] NODE_ENV:', process.env.NODE_ENV); + + const audioServiceUrl = this.configService.get('AUDIO_MICROSERVICE_URL'); + + if (!audioServiceUrl) { + console.error('[CRITICAL ERROR] AUDIO_MICROSERVICE_URL is not configured'); + throw new Error('Missing required configuration: AUDIO_MICROSERVICE_URL'); + } + console.log('[DEBUG] Final audioServiceUrl:', audioServiceUrl); + + const payload = { + audioPath, + memoId, + userId, + spaceId, + recordingLanguages, + enableDiarization, + isAppend: isAppend || false, + }; + + console.log( + `Calling audio microservice realtime with fallback: ${audioServiceUrl}/audio/transcribe-realtime` + ); + console.log(`[AUDIO_MICROSERVICE_CALL] Sending audioPath: ${audioPath}`); + + const response = await fetch(`${audioServiceUrl}/audio/transcribe-realtime`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify(payload), + }); + + if (!response.ok) { + const errorText = await response.text(); + const error = new Error( + `Audio microservice transcription failed: ${response.status} - ${errorText}` + ); + (error as any).status = response.status; + throw error; + } + + return await response.json(); + } + + private async callAudioMicroserviceStorageBatch( + audioPath: string, + userId: string, + spaceId: string | undefined, + recordingLanguages?: string[], + token?: string, + memoId?: string, + enableDiarization?: boolean, + isAppend?: boolean + ) { + const audioServiceUrl = this.configService.get('AUDIO_MICROSERVICE_URL'); + + if (!audioServiceUrl) { + console.error('[CRITICAL ERROR] AUDIO_MICROSERVICE_URL is not configured'); + throw new Error('Missing required configuration: AUDIO_MICROSERVICE_URL'); + } + + const 
payload = { + audioPath, + userId, + spaceId, + recordingLanguages, + memoId, + enableDiarization, + isAppend: isAppend || false, + }; + + console.log( + `Calling audio microservice storage batch: ${audioServiceUrl}/audio/transcribe-from-storage` + ); + + const response = await fetch(`${audioServiceUrl}/audio/transcribe-from-storage`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify(payload), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Audio microservice storage batch failed: ${response.status} - ${errorText}`); + } + + return await response.json(); + } + + private async processBatchFromStoragePath( + filePath: string, + userId: string, + spaceId: string | undefined, + token: string, + recordingLanguages?: string[], + memoId?: string, + enableDiarization?: boolean, + isAppend?: boolean + ) { + try { + // Use the new storage-based endpoint instead of downloading and re-uploading + return await this.callAudioMicroserviceStorageBatch( + filePath, + userId, + spaceId, + recordingLanguages, + token, + memoId, + enableDiarization, + isAppend + ); + } catch (error) { + console.error('Error processing batch from storage path:', error); + throw error; + } + } + + /** + * Legacy method - no longer used since error detection is handled in audio microservice + */ + private isAudioFormatError(error: any): boolean { + if (!error || !error.message) return false; + + const errorMessage = error.message.toLowerCase(); + const formatErrorIndicators = [ + 'audio format', + 'audio stream could not be decoded', + 'invalidaudioformat', + 'unprocessableentity', + 'the audio stream could not be decoded with the provided configuration', + 'audio/x-m4a', + 'could not be decoded', + '422', + ]; + + return formatErrorIndicators.some((indicator) => errorMessage.includes(indicator)); + } + + /** + * Update batch transcription metadata for recovery tracking + */ + 
@Post('update-batch-metadata') + async updateBatchMetadata( + @Body() + body: { + memoId: string; + jobId: string; + batchTranscription: boolean; + }, + @Req() req + ) { + try { + const { memoId, jobId, batchTranscription } = body; + const token = req.token; // This is set by the AuthGuard + + // Delegate to service which has proper Supabase client initialization + const result = await this.memoroService.updateBatchMetadataByMemoId( + memoId, + jobId, + batchTranscription, + token + ); + + return result; + } catch (error) { + console.error('Error updating batch metadata:', error); + throw new BadRequestException(`Failed to update batch metadata: ${error.message}`); + } + } + + /** + * Handles transcription completion callback from audio microservice + */ + @Post('transcription-completed') + @HttpCode(200) + async handleTranscriptionCompleted( + @Body() + callbackData: { + memoId: string; + userId: string; + transcriptionResult?: any; + route?: 'fast' | 'batch'; + success?: boolean; + error?: string; + }, + @Req() req + ) { + try { + console.log( + `[handleTranscriptionCompleted] Received callback for memo ${callbackData.memoId}` + ); + + const token = req.token; // This is set by the AuthGuard + + if (!callbackData.memoId || !callbackData.userId) { + throw new BadRequestException('memoId and userId are required'); + } + + // Delegate to service to handle the callback + const result = await this.memoroService.handleTranscriptionCompleted( + callbackData.memoId, + callbackData.userId, + callbackData.transcriptionResult, + callbackData.route, + callbackData.success, + callbackData.error, + token + ); + + return result; + } catch (error) { + console.error(`[handleTranscriptionCompleted] Error processing callback:`, error); + throw new BadRequestException(`Failed to process transcription callback: ${error.message}`); + } + } + + /** + * Handles append transcription completion callback from audio microservice + */ + @Post('append-transcription-completed') + 
@HttpCode(200) + async handleAppendTranscriptionCompleted( + @Body() + callbackData: { + memoId: string; + userId: string; + transcriptionResult?: any; + route?: 'fast' | 'batch'; + success?: boolean; + error?: string; + }, + @Req() req + ) { + try { + console.log( + `[handleAppendTranscriptionCompleted] Received callback for memo ${callbackData.memoId}` + ); + + const token = req.token; // This is set by the AuthGuard + + if (!callbackData.memoId || !callbackData.userId) { + throw new BadRequestException('memoId and userId are required'); + } + + // The service will determine the correct recording index based on the current state + const result = await this.memoroService.handleAppendTranscriptionCompleted( + callbackData.memoId, + callbackData.userId, + callbackData.transcriptionResult, + callbackData.route || 'fast', + callbackData.success !== false, + callbackData.error || null, + token + ); + + return result; + } catch (error) { + console.error( + `[handleAppendTranscriptionCompleted] Error processing append callback:`, + error + ); + throw new BadRequestException( + `Failed to process append transcription callback: ${error.message}` + ); + } + } + + @Post('append-transcription') + async appendTranscription( + @User() user: JwtPayload, + @Body() + appendData: { + memoId: string; + filePath: string; + duration: number; + recordingIndex?: number; + recordingLanguages?: string[]; + enableDiarization?: boolean; + }, + @Req() req + ) { + const { memoId, filePath, duration, recordingIndex, recordingLanguages, enableDiarization } = + appendData; + const token = req.token; + + if (!memoId) { + throw new BadRequestException('Memo ID is required'); + } + + if (!filePath) { + throw new BadRequestException('File path is required'); + } + + if (!duration || duration <= 0) { + throw new BadRequestException('Valid duration is required'); + } + + try { + // Validate memo exists and belongs to user + const memo = await this.memoroService.validateMemoForAppend(user.sub, memoId, 
token); + + if (!memo) { + throw new NotFoundException('Memo not found or access denied'); + } + + // Check credits before processing + const requiredCredits = calculateTranscriptionCost(duration); + const spaceId = memo.metadata?.spaceId; + + let creditCheck; + if (spaceId) { + try { + creditCheck = await this.creditClientService.checkSpaceCredits( + spaceId, + requiredCredits, + token + ); + } catch (error) { + console.warn(`Space credit check failed, checking user credits: ${error.message}`); + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + } else { + creditCheck = await this.creditClientService.checkUserCredits( + user.sub, + requiredCredits, + token + ); + } + + if (!creditCheck.hasEnoughCredits) { + throw new InsufficientCreditsException({ + requiredCredits: creditCheck.requiredCredits, + availableCredits: creditCheck.currentCredits, + creditType: creditCheck.creditType, + operation: 'transcription', + spaceId, + }); + } + + // Start async append transcription processing + const durationMinutes = duration / 60; + const shouldUseFastTranscribe = durationMinutes < 115; + + console.log( + `[appendTranscription] Audio duration: ${durationMinutes.toFixed(2)} minutes. Using ${shouldUseFastTranscribe ? 
'fast' : 'batch'} transcription.`,
+      );
+
+      // Process append transcription asynchronously
+      setImmediate(() => {
+        this.processAppendTranscriptionAsync(
+          memoId,
+          filePath,
+          duration,
+          user.sub,
+          spaceId,
+          recordingLanguages || [],
+          token,
+          enableDiarization,
+          shouldUseFastTranscribe,
+          recordingIndex
+        ).catch((error) => {
+          console.error(`Async append transcription failed for memo ${memoId}:`, error);
+          // Update memo with error status in additional_recordings
+          this.memoroService.updateAppendTranscriptionStatus(
+            memoId,
+            recordingIndex,
+            'error',
+            token,
+            {
+              error: error.message,
+              timestamp: new Date().toISOString(),
+            }
+          );
+        });
+      });
+
+      // Return immediately
+      return {
+        success: true,
+        memoId,
+        filePath,
+        status: 'processing',
+        estimatedDuration: Math.ceil(duration / 60),
+        message: 'Append transcription in progress.',
+        estimatedCredits: requiredCredits,
+      };
+    } catch (error) {
+      console.error('Error appending transcription:', error);
+
+      if (
+        error instanceof InsufficientCreditsException ||
+        error instanceof ForbiddenException ||
+        error instanceof BadRequestException ||
+        error instanceof NotFoundException
+      ) {
+        throw error;
+      }
+
+      throw new BadRequestException(`Failed to append transcription: ${error.message}`);
+    }
+  }
+
+  /**
+   * Process append transcription asynchronously in the background
+   */
+  private async processAppendTranscriptionAsync(
+    memoId: string,
+    filePath: string,
+    duration: number,
+    userId: string,
+    spaceId: string | undefined,
+    recordingLanguages: string[],
+    token: string,
+    enableDiarization: boolean | undefined,
+    shouldUseFastTranscribe: boolean,
+    recordingIndex?: number
+  ): Promise<void> {
+    try {
+      // Update status to processing with file information
+      await this.memoroService.updateAppendTranscriptionStatus(
+        memoId,
+        recordingIndex,
+        'processing',
+        token,
+        {
+          audio_path: filePath,
+          duration: duration,
+          type: 'audio',
+        }
+      );
+
+      if (shouldUseFastTranscribe) {
+        // Use audio microservice
with built-in fallback system + try { + // Just call the audio microservice - it will send callbacks + await this.callAudioMicroserviceRealtimeWithFallback( + filePath, + memoId, + userId, + spaceId, + recordingLanguages, + token, + enableDiarization, + true // isAppend = true for append transcriptions + ); + + console.log( + `Append transcription initiated for memo ${memoId} via fast transcribe - waiting for callback` + ); + } catch (error) { + console.error('Audio microservice append transcription failed:', error); + throw new Error(`Append transcription failed: ${error.message}`); + } + } else { + // Use batch processing for long files + try { + const batchResult = await this.processBatchFromStoragePath( + filePath, + userId, + spaceId, + token, + recordingLanguages, + memoId, + enableDiarization, + true // isAppend = true for append transcriptions + ); + + // Store the jobId for batch tracking + if (batchResult.jobId) { + console.log( + `Batch append transcription started for memo ${memoId} with jobId ${batchResult.jobId}` + ); + await this.memoroService.updateAppendTranscriptionStatus( + memoId, + recordingIndex, + 'processing', + token, + { + batchJobId: batchResult.jobId, + processingRoute: 'batch_transcribe', + isAppend: true, + } + ); + } + } catch (batchError) { + console.error('Batch append transcription failed:', batchError); + throw new Error(`Batch append transcription failed: ${batchError.message}`); + } + } + } catch (error) { + console.error(`Error in processAppendTranscriptionAsync for memo ${memoId}:`, error); + throw error; + } + } + + /** + * Find memo details by batch job ID (used by audio microservice webhook) + */ + @Get('find-memo-by-job/:jobId') + async findMemoByJobId(@Param('jobId') jobId: string) { + try { + console.log(`[findMemoByJobId] Looking up memo for job ${jobId}`); + + if (!jobId) { + throw new BadRequestException('Job ID is required'); + } + + // Search for memo with this jobId in metadata + const authClient = createClient( + 
this.memoroService.getSupabaseUrl(),
+        this.memoroService['memoroServiceKey'] // Use service key for direct access
+      );
+
+      const { data: memos, error } = await authClient
+        .from('memos')
+        .select('id, user_id, metadata')
+        .like('metadata->processing->transcription->>jobId', jobId)
+        .limit(1);
+
+      if (error) {
+        console.error(`[findMemoByJobId] Database error:`, error);
+        throw new BadRequestException(`Database error: ${error.message}`);
+      }
+
+      if (!memos || memos.length === 0) {
+        console.warn(`[findMemoByJobId] No memo found for job ${jobId}`);
+        throw new NotFoundException(`No memo found for job ${jobId}`);
+      }
+
+      const memo = memos[0];
+      console.log(`[findMemoByJobId] Found memo ${memo.id} for job ${jobId}`);
+
+      return {
+        memoId: memo.id,
+        userId: memo.user_id,
+        // Note: We don't have the original token for webhook callbacks
+        // The webhook will need to operate without a user-specific token
+      };
+    } catch (error) {
+      console.error(`[findMemoByJobId] Error finding memo for job ${jobId}:`, error);
+      if (error instanceof NotFoundException || error instanceof BadRequestException) {
+        throw error;
+      }
+      throw new BadRequestException(`Failed to find memo for job: ${error.message}`);
+    }
+  }
+
+  /**
+   * Detect media type based on file extension
+   */
+  private detectMediaType(filePath: string): 'audio' | 'video' | 'unknown' {
+    const audioExtensions = ['mp3', 'wav', 'aac', 'm4a', 'flac', 'ogg', 'wma', 'opus'];
+    const videoExtensions = ['mp4', 'avi', 'mov', 'mkv', 'wmv', 'flv', 'webm', 'm4v', '3gp'];
+
+    const extension = filePath.split('.').pop()?.toLowerCase();
+
+    if (!extension) {
+      return 'unknown';
+    }
+
+    if (audioExtensions.includes(extension)) {
+      return 'audio';
+    } else if (videoExtensions.includes(extension)) {
+      return 'video';
+    }
+
+    return 'unknown';
+  }
+}
diff --git a/apps/memoro/apps/backend/src/memoro/memoro.module.ts b/apps/memoro/apps/backend/src/memoro/memoro.module.ts
new file mode 100644
index 000000000..d87012c2a
--- /dev/null
+++
b/apps/memoro/apps/backend/src/memoro/memoro.module.ts @@ -0,0 +1,27 @@ +import { Module } from '@nestjs/common'; +import { ConfigModule } from '@nestjs/config'; +import { MemoroController } from './memoro.controller'; +import { MemoroServiceController } from './memoro-service.controller'; +import { SpaceSyncController } from './space-sync.controller'; +import { QuestionMemoController } from './question-memo.controller'; +import { CombineMemosController } from './combine-memos.controller'; +import { MemoroService } from './memoro.service'; +import { SyncSpaceMembersService } from './sync-space-members.service'; +import { SpacesModule } from '../spaces/spaces.module'; +import { AuthModule } from '../auth/auth.module'; +import { CreditsModule } from '../credits/credits.module'; +import { AiModule } from '../ai/ai.module'; + +@Module({ + imports: [ConfigModule, SpacesModule, AuthModule, CreditsModule, AiModule], + controllers: [ + MemoroController, + MemoroServiceController, + SpaceSyncController, + QuestionMemoController, + CombineMemosController, + ], + providers: [MemoroService, SyncSpaceMembersService], + exports: [MemoroService, SyncSpaceMembersService], +}) +export class MemoroModule {} diff --git a/apps/memoro/apps/backend/src/memoro/memoro.service.spec.ts b/apps/memoro/apps/backend/src/memoro/memoro.service.spec.ts new file mode 100644 index 000000000..bca97d644 --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/memoro.service.spec.ts @@ -0,0 +1,958 @@ +import { Test, TestingModule } from '@nestjs/testing'; +import { ConfigService } from '@nestjs/config'; +import { MemoroService } from './memoro.service'; +import { SpacesClientService } from '../spaces/spaces-client.service'; +import { SpaceSyncService } from '../spaces/space-sync.service'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { BadRequestException, ForbiddenException, NotFoundException } from '@nestjs/common'; +import { createClient } from 
'@supabase/supabase-js';
+
+jest.mock('@supabase/supabase-js');
+jest.mock('uuid', () => ({
+  v4: jest.fn(() => 'memo-123'),
+}));
+
+global.fetch = jest.fn();
+
+describe('MemoroService', () => {
+  let service: MemoroService;
+  let configService: jest.Mocked<ConfigService>;
+  let spacesService: jest.Mocked<SpacesClientService>;
+  let spaceSyncService: jest.Mocked<SpaceSyncService>;
+  let creditConsumptionService: jest.Mocked<CreditConsumptionService>;
+  let mockSupabaseClient: any;
+  let mockSupabaseServiceClient: any;
+
+  const mockUserId = 'user-123';
+  const mockToken = 'mock-token';
+  const mockSpaceId = 'space-123';
+  const mockMemoId = 'memo-123';
+
+  beforeEach(async () => {
+    mockSupabaseClient = {
+      from: jest.fn().mockReturnThis(),
+      select: jest.fn().mockReturnThis(),
+      eq: jest.fn().mockReturnThis(),
+      single: jest.fn().mockReturnThis(),
+      insert: jest.fn().mockReturnThis(),
+      update: jest.fn().mockReturnThis(),
+      delete: jest.fn().mockReturnThis(),
+      order: jest.fn().mockReturnThis(),
+      limit: jest.fn().mockReturnThis(),
+      like: jest.fn().mockReturnThis(),
+      upsert: jest.fn().mockReturnThis(),
+      maybeSingle: jest.fn().mockReturnThis(),
+      storage: {
+        from: jest.fn().mockReturnValue({
+          upload: jest.fn().mockResolvedValue({ data: { path: 'uploads/audio.mp3' }, error: null }),
+          getPublicUrl: jest
+            .fn()
+            .mockReturnValue({ data: { publicUrl: 'https://example.com/audio.mp3' } }),
+        }),
+      },
+    };
+
+    mockSupabaseServiceClient = {
+      ...mockSupabaseClient,
+      from: jest.fn().mockReturnThis(),
+      select: jest.fn().mockReturnThis(),
+      eq: jest.fn().mockReturnThis(),
+    };
+
+    (createClient as jest.Mock).mockImplementation((url, key, options) => {
+      if (options?.global?.headers?.Authorization) {
+        return mockSupabaseClient;
+      }
+      return mockSupabaseServiceClient;
+    });
+
+    const module: TestingModule = await Test.createTestingModule({
+      providers: [
+        MemoroService,
+        {
+          provide: ConfigService,
+          useValue: {
+            get: jest.fn((key: string) => {
+              const config: Record<string, string> = {
+                MEMORO_SUPABASE_URL: 'https://test.supabase.co',
MEMORO_SUPABASE_ANON_KEY: 'test-anon-key', + MEMORO_SUPABASE_SERVICE_ROLE_KEY: 'test-service-key', + AUDIO_MICROSERVICE_URL: 'https://audio.microservice.com', + }; + return config[key]; + }), + }, + }, + { + provide: SpacesClientService, + useValue: { + getUserSpaces: jest.fn(), + createSpace: jest.fn(), + getSpaceDetails: jest.fn(), + addSpaceMember: jest.fn(), + acceptSpaceInvite: jest.fn(), + declineSpaceInvite: jest.fn(), + leaveSpace: jest.fn(), + deleteSpace: jest.fn(), + getSpaceInvites: jest.fn(), + resendSpaceInvite: jest.fn(), + cancelSpaceInvite: jest.fn(), + getUserPendingInvites: jest.fn(), + verifySpaceAccess: jest.fn(), + }, + }, + { + provide: SpaceSyncService, + useValue: { + syncSpaceMembership: jest.fn(), + removeSpaceMembership: jest.fn(), + }, + }, + { + provide: CreditConsumptionService, + useValue: { + consumeTranscriptionCredits: jest.fn(), + }, + }, + ], + }).compile(); + + service = module.get(MemoroService); + configService = module.get(ConfigService); + spacesService = module.get(SpacesClientService); + spaceSyncService = module.get(SpaceSyncService); + creditConsumptionService = module.get(CreditConsumptionService); + + (global.fetch as jest.Mock).mockClear(); + }); + + it('should be defined', () => { + expect(service).toBeDefined(); + }); + + describe('getMemoroSpaces', () => { + it('should return empty array', async () => { + const result = await service.getMemoroSpaces(mockUserId, mockToken); + expect(result).toEqual([]); + }); + }); + + describe('createMemoroSpace', () => { + it('should create space and sync membership', async () => { + const spaceName = 'Test Space'; + const mockSpace = { + id: mockSpaceId, + name: spaceName, + owner_id: mockUserId, + app_id: 'test-app', + roles: { [mockUserId]: 'owner' }, + credits: 1000, + created_at: '2024-01-01T00:00:00Z', + updated_at: '2024-01-01T00:00:00Z', + }; + + spacesService.createSpace.mockResolvedValue(mockSpace); + spaceSyncService.syncSpaceMembership.mockResolvedValue(undefined); + 
+ const result = await service.createMemoroSpace(mockUserId, spaceName, mockToken); + + expect(result).toEqual(mockSpace); + expect(spacesService.createSpace).toHaveBeenCalledWith(mockUserId, spaceName, mockToken); + expect(spaceSyncService.syncSpaceMembership).toHaveBeenCalledWith( + mockSpaceId, + mockUserId, + 'owner' + ); + }); + + it('should throw error if space creation fails', async () => { + const spaceName = 'Test Space'; + spacesService.createSpace.mockRejectedValue(new Error('Failed to create space')); + + await expect(service.createMemoroSpace(mockUserId, spaceName, mockToken)).rejects.toThrow( + 'Failed to create space' + ); + }); + }); + + describe('getMemoroSpaceDetails', () => { + it('should get space details successfully', async () => { + const mockSpaceDetails = { space: { id: mockSpaceId, name: 'Test Space' } }; + spacesService.getSpaceDetails.mockResolvedValue(mockSpaceDetails); + + const result = await service.getMemoroSpaceDetails(mockUserId, mockSpaceId, mockToken); + + expect(result).toEqual(mockSpaceDetails); + expect(spacesService.getSpaceDetails).toHaveBeenCalledWith(mockSpaceId, mockToken); + }); + + it('should handle NotFoundException with fallback', async () => { + spacesService.getSpaceDetails + .mockRejectedValueOnce(new NotFoundException('Space not found')) + .mockResolvedValueOnce({ space: { id: mockSpaceId, name: 'Test Space' } }); + + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: mockUserId, roles: { [mockUserId]: 'owner' } }, + error: null, + }), + }); + + const result = await service.getMemoroSpaceDetails(mockUserId, mockSpaceId, mockToken); + + expect(result).toBeDefined(); + expect(spacesService.getSpaceDetails).toHaveBeenCalledTimes(2); + }); + + it('should throw ForbiddenException if user has no access', async () => { + spacesService.getSpaceDetails.mockRejectedValue(new 
NotFoundException('Space not found')); + + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: 'other-user', roles: {} }, + error: null, + }), + }); + + await expect( + service.getMemoroSpaceDetails(mockUserId, mockSpaceId, mockToken) + ).rejects.toThrow(ForbiddenException); + }); + }); + + describe('linkMemoToSpace', () => { + it('should link memo to space successfully', async () => { + const linkData = { memoId: mockMemoId, spaceId: mockSpaceId }; + + // First mock for memo verification - needs maybeSingle + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn(() => ({ + eq: jest.fn(() => ({ + maybeSingle: jest.fn().mockResolvedValue({ + data: { id: mockMemoId, user_id: mockUserId }, + error: null, + }), + })), + })), + }); + + // Second mock for space verification - needs maybeSingle + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn(() => ({ + maybeSingle: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: mockUserId, roles: { [mockUserId]: 'owner' } }, + error: null, + }), + })), + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + insert: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + const result = await service.linkMemoToSpace(mockUserId, linkData, mockToken); + + expect(result).toEqual({ success: true, message: 'Memo linked to space successfully' }); + }); + + it('should handle duplicate link gracefully', async () => { + const linkData = { memoId: mockMemoId, spaceId: mockSpaceId }; + + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { id: mockMemoId, user_id: mockUserId }, + error: null, + }), + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + select: 
jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: mockUserId, roles: { [mockUserId]: 'owner' } }, + error: null, + }), + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + insert: jest.fn().mockResolvedValue({ + data: null, + error: { code: '23505', message: 'duplicate key value' }, + }), + }); + + const result = await service.linkMemoToSpace(mockUserId, linkData, mockToken); + + expect(result).toEqual({ success: true, message: 'Memo is already linked to this space' }); + }); + + it('should throw NotFoundException if user lacks memo access', async () => { + const linkData = { memoId: mockMemoId, spaceId: mockSpaceId }; + + // Mock for verifyMemoAccess - needs single not maybeSingle + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn(() => ({ + single: jest.fn().mockResolvedValue({ + data: { id: mockMemoId, user_id: 'other-user' }, + error: null, + }), + })), + }); + + await expect(service.linkMemoToSpace(mockUserId, linkData, mockToken)).rejects.toThrow( + NotFoundException + ); + }); + }); + + describe('getSpaceMemos', () => { + it('should get space memos successfully', async () => { + const mockMemos = [ + { id: 'memo-1', title: 'Memo 1', user_id: mockUserId }, + { id: 'memo-2', title: 'Memo 2', user_id: 'other-user' }, + ]; + + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn(() => ({ + maybeSingle: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: mockUserId, roles: { [mockUserId]: 'owner' } }, + error: null, + }), + })), + }); + + mockSupabaseServiceClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + order: jest.fn().mockResolvedValue({ + data: mockMemos, + error: null, + }), + }); + + const result = await service.getSpaceMemos(mockUserId, mockSpaceId, mockToken); + + expect(result).toEqual({ memos: mockMemos 
}); + }); + + it('should return empty array if no memos found', async () => { + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn(() => ({ + maybeSingle: jest.fn().mockResolvedValue({ + data: { id: mockSpaceId, owner_id: mockUserId, roles: { [mockUserId]: 'owner' } }, + error: null, + }), + })), + }); + + mockSupabaseServiceClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + order: jest.fn().mockResolvedValue({ + data: [], + error: null, + }), + }); + + const result = await service.getSpaceMemos(mockUserId, mockSpaceId, mockToken); + + expect(result).toEqual({ memos: [] }); + }); + }); + + describe('validateMemoForRetry', () => { + it('should validate memo successfully', async () => { + const mockMemo = { + id: mockMemoId, + user_id: mockUserId, + metadata: { + processing: { + transcription: { status: 'error' }, + }, + }, + }; + + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: mockMemo, + error: null, + }), + }); + + const result = await service.validateMemoForRetry(mockUserId, mockMemoId, mockToken); + + expect(result).toEqual(mockMemo); + }); + + it('should return null if memo not found', async () => { + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: null, + error: { code: 'PGRST116', message: 'not found' }, + }), + }); + + const result = await service.validateMemoForRetry(mockUserId, mockMemoId, mockToken); + + expect(result).toBeNull(); + }); + + it('should return memo even if user does not own it', async () => { + const mockMemo = { + id: mockMemoId, + user_id: 'other-user', + metadata: {}, + }; + + mockSupabaseClient.from.mockReturnValue({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: 
jest.fn().mockResolvedValue({ + data: mockMemo, + error: null, + }), + }); + + const result = await service.validateMemoForRetry(mockUserId, mockMemoId, mockToken); + + expect(result).toEqual(mockMemo); + }); + }); + + describe('retryTranscription', () => { + it('should retry transcription successfully', async () => { + const mockMemo = { + id: mockMemoId, + user_id: mockUserId, + metadata: { + processing: { + transcription: { + status: 'error', + audioPath: 'uploads/audio.mp3', + }, + }, + }, + space_id: mockSpaceId, + location_data: null, + recording_started_at: '2024-01-01T00:00:00Z', + language_codes: ['en-US'], + }; + + mockSupabaseClient.from.mockReturnValueOnce({ + update: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: true, + json: async () => ({ success: true }), + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: mockMemo, + error: null, + }), + }); + + const result = await service.retryTranscription(mockUserId, mockMemoId, mockToken, 1); + + expect(result).toEqual({ success: true }); + expect(global.fetch).toHaveBeenCalledWith( + expect.stringContaining('transcribe-fast'), + expect.objectContaining({ + method: 'POST', + headers: expect.objectContaining({ + Authorization: `Bearer ${mockToken}`, + }), + }) + ); + }); + + it('should handle edge function errors', async () => { + const mockMemo = { + id: mockMemoId, + user_id: mockUserId, + metadata: { + processing: { + transcription: { + status: 'error', + audioPath: 'uploads/audio.mp3', + }, + }, + }, + }; + + mockSupabaseClient.from.mockReturnValueOnce({ + update: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + (global.fetch as jest.Mock).mockResolvedValueOnce({ + ok: false, + status: 500, + text: async () => 'Internal 
Server Error', + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: mockMemo, + error: null, + }), + }); + + await expect( + service.retryTranscription(mockUserId, mockMemoId, mockToken, 1) + ).rejects.toThrow('Edge function call failed'); + }); + }); + + describe('updateMemoTranscriptionStatus', () => { + it('should update transcription status successfully', async () => { + const status = 'completed'; + const additionalData = { completedAt: '2024-01-01T00:00:00Z' }; + + // First mock for reading existing metadata + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { + id: mockMemoId, + metadata: {}, + }, + error: null, + }), + }); + + // Second mock for updating metadata + mockSupabaseClient.from.mockReturnValueOnce({ + update: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + await service.updateMemoTranscriptionStatus(mockMemoId, status, mockToken, additionalData); + + expect(mockSupabaseClient.from).toHaveBeenCalledWith('memos'); + expect(mockSupabaseClient.update).toHaveBeenCalledWith({ + metadata: expect.objectContaining({ + processing: expect.objectContaining({ + transcription: expect.objectContaining({ + status, + ...additionalData, + }), + }), + }), + }); + }); + }); + + describe('handleTranscriptionCompleted', () => { + it('should handle successful transcription completion', async () => { + const transcriptionResult = { + text: 'Hello world', + segments: [], + utterances: [], + }; + + // Mock first call to get memo details + mockSupabaseServiceClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { + id: mockMemoId, + metadata: { + location: { + source: 'manual', + }, + }, 
+ duration_seconds: 300, + space_id: mockSpaceId, + }, + error: null, + }), + }); + + // Mock second call to update memo with transcription + mockSupabaseServiceClient.from.mockReturnValueOnce({ + update: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + // Mock third call to update memo with headline + mockSupabaseServiceClient.from.mockReturnValueOnce({ + update: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + creditConsumptionService.consumeTranscriptionCredits.mockResolvedValue({ + success: true, + creditsConsumed: 10, + creditType: 'space', + message: 'Credits consumed', + }); + + // Mock location source + const mockLocationResponse = { + source: 'manual', + }; + + // Mock first fetch call for location data source + (global.fetch as jest.Mock).mockImplementationOnce(() => + Promise.resolve({ + ok: true, + json: async () => mockLocationResponse, + }) + ); + + // Mock second fetch call for headline edge function + (global.fetch as jest.Mock).mockImplementationOnce(() => + Promise.resolve({ + ok: true, + json: async () => ({ headline: 'Test Headline', intro: 'Test Intro' }), + }) + ); + + const result = await service.handleTranscriptionCompleted( + mockMemoId, + mockUserId, + transcriptionResult, + 'fast', + true, + undefined, + mockToken + ); + + expect(result).toEqual({ success: true, message: 'Transcription processed successfully' }); + expect(creditConsumptionService.consumeTranscriptionCredits).toHaveBeenCalled(); + }); + + it('should handle failed transcription', async () => { + const error = 'Transcription failed'; + + const updateMock = jest.fn().mockResolvedValue({ data: {}, error: null }); + mockSupabaseServiceClient.from.mockReturnValue({ + update: jest.fn(() => ({ + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + })), + }); + + const result = await service.handleTranscriptionCompleted( + mockMemoId, + mockUserId, + undefined, + 'fast', + 
false, + error, + mockToken + ); + + expect(result).toEqual({ + success: false, + message: 'Transcription failed for memo memo-123: Transcription failed', + }); + + // Verify the update was called on the mock + const fromCall = mockSupabaseServiceClient.from.mock.calls[0]; + expect(fromCall[0]).toBe('memos'); + }); + }); + + describe('createMemoFromUploadedFile', () => { + it('should create memo from uploaded file successfully', async () => { + const filePath = 'uploads/audio.mp3'; + const duration = 300; + + mockSupabaseClient.from.mockReturnValueOnce({ + upsert: jest.fn().mockResolvedValue({ + data: null, + error: null, + }), + }); + + mockSupabaseClient.from.mockReturnValueOnce({ + select: jest.fn().mockReturnThis(), + eq: jest.fn().mockReturnThis(), + single: jest.fn().mockResolvedValue({ + data: { id: mockMemoId }, + error: null, + }), + }); + + const result = await service.createMemoFromUploadedFile( + mockUserId, + filePath, + duration, + mockSpaceId, + null, + undefined, + mockToken + ); + + expect(result).toEqual({ memoId: mockMemoId, audioPath: filePath }); + expect(mockSupabaseClient.from).toHaveBeenCalledWith('memos'); + expect(mockSupabaseClient.upsert).toHaveBeenCalledWith( + expect.objectContaining({ + user_id: mockUserId, + space_id: mockSpaceId, + duration_seconds: duration, + }), + { + onConflict: 'id', + ignoreDuplicates: false, + } + ); + }); + }); + + describe('Space Management Methods', () => { + describe('deleteMemoroSpace', () => { + it('should delete space and cleanup memo links', async () => { + spacesService.deleteSpace.mockResolvedValue({ success: true }); + spaceSyncService.removeSpaceMembership.mockResolvedValue(undefined); + + mockSupabaseServiceClient.from.mockReturnValue({ + delete: jest.fn().mockReturnThis(), + eq: jest.fn().mockResolvedValue({ data: {}, error: null }), + }); + + const result = await service.deleteMemoroSpace(mockUserId, mockSpaceId, mockToken); + + expect(result).toEqual({ success: true }); + 
expect(spacesService.deleteSpace).toHaveBeenCalledWith(mockUserId, mockSpaceId, mockToken); + // Note: removeSpaceMembership might not be called in all cases + }); + }); + + describe('inviteUserToSpace', () => { + it('should invite user to space successfully', async () => { + const email = 'invitee@example.com'; + const role = 'viewer'; + + spacesService.addSpaceMember.mockResolvedValue({ + inviteId: 'invite-123', + message: 'Invitation sent', + }); + + const result = await service.inviteUserToSpace( + mockUserId, + mockSpaceId, + email, + role, + mockToken + ); + + expect(result).toEqual({ inviteId: 'invite-123', message: 'Invitation sent' }); + expect(spacesService.addSpaceMember).toHaveBeenCalledWith( + mockSpaceId, + email, + role, + mockToken + ); + }); + }); + + describe('leaveSpace', () => { + it('should leave space successfully', async () => { + spacesService.leaveSpace.mockResolvedValue({ success: true }); + spaceSyncService.removeSpaceMembership.mockResolvedValue(undefined); + + const result = await service.leaveSpace(mockUserId, mockSpaceId, mockToken); + + expect(result).toEqual({ success: true }); + expect(spacesService.leaveSpace).toHaveBeenCalledWith(mockUserId, mockSpaceId, mockToken); + // Note: removeSpaceMembership might not be called in all cases + }); + }); + }); + + describe('safeSourceMerge', () => { + it('should merge updates into existing source', () => { + const existingSource = { + type: 'audio', + path: 'uploads/original.mp3', + format: 'mp3', + duration: 120, + }; + + const updates = { + primary_language: 'en', + languages: ['en', 'es'], + speakers: { '0': 'Speaker 1' }, + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/original.mp3', + format: 'mp3', + duration: 120, + primary_language: 'en', + languages: ['en', 'es'], + speakers: { '0': 'Speaker 1' }, + }); + }); + + it('should handle empty existing source', () => { + const updates = { + type: 
'audio', + path: 'uploads/new.mp3', + primary_language: 'en', + }; + + const result = service['safeSourceMerge'](null, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/new.mp3', + primary_language: 'en', + }); + }); + + it('should flatten nested source properties', () => { + const existingSource = { + source: { + type: 'audio', + path: 'uploads/nested.mp3', + }, + duration: 180, + }; + + const updates = { + primary_language: 'fr', + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/nested.mp3', + duration: 180, + primary_language: 'fr', + }); + expect(result.source).toBeUndefined(); + }); + + it('should fix invalid type property', () => { + const existingSource = { + type: { invalid: 'object' }, + path: 'uploads/file.mp3', + }; + + const updates = { + duration: 240, + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/file.mp3', + duration: 240, + }); + }); + + it('should convert invalid path property to string', () => { + const existingSource = { + type: 'audio', + path: { invalid: 'object' }, + }; + + const updates = { + format: 'wav', + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: '[object Object]', + format: 'wav', + }); + }); + + it('should preserve additional_recordings array', () => { + const existingSource = { + type: 'audio', + path: 'uploads/main.mp3', + additional_recordings: [{ path: 'uploads/additional1.mp3', status: 'completed' }], + }; + + const updates = { + additional_recordings: [ + { path: 'uploads/additional1.mp3', status: 'completed' }, + { path: 'uploads/additional2.mp3', status: 'processing' }, + ], + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/main.mp3', + 
additional_recordings: [ + { path: 'uploads/additional1.mp3', status: 'completed' }, + { path: 'uploads/additional2.mp3', status: 'processing' }, + ], + }); + }); + + it('should handle complex nested structure with all properties', () => { + const existingSource = { + source: { + type: 'audio', + path: 'uploads/complex.mp3', + speakers: { '0': 'John' }, + }, + utterances: [{ speaker: '0', text: 'Hello' }], + additional_recordings: [], + }; + + const updates = { + primary_language: 'en', + languages: ['en'], + utterances: [ + { speaker: '0', text: 'Hello' }, + { speaker: '1', text: 'Hi there' }, + ], + speakers: { '0': 'John', '1': 'Jane' }, + }; + + const result = service['safeSourceMerge'](existingSource, updates); + + expect(result).toEqual({ + type: 'audio', + path: 'uploads/complex.mp3', + speakers: { '0': 'John', '1': 'Jane' }, + utterances: [ + { speaker: '0', text: 'Hello' }, + { speaker: '1', text: 'Hi there' }, + ], + additional_recordings: [], + primary_language: 'en', + languages: ['en'], + }); + }); + }); +}); diff --git a/apps/memoro/apps/backend/src/memoro/memoro.service.ts b/apps/memoro/apps/backend/src/memoro/memoro.service.ts new file mode 100644 index 000000000..6ee32c174 --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/memoro.service.ts @@ -0,0 +1,3045 @@ +import { + Injectable, + NotFoundException, + ForbiddenException, + BadRequestException, +} from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; +import { SpacesClientService } from '../spaces/spaces-client.service'; +import { SpaceSyncService } from '../spaces/space-sync.service'; +import { createClient, SupabaseClient } from '@supabase/supabase-js'; +import { + MemoroSpaceDto, + LinkMemoSpaceDto, + UnlinkMemoSpaceDto, +} from '../interfaces/memoro.interfaces'; +import { PendingInvitesResponseDto } from '../interfaces/spaces.interfaces'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { calculateTranscriptionCost } from 
'../credits/pricing.constants'; +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; +import { HeadlineService } from '../ai/headline/headline.service'; +import { randomUUID } from 'crypto'; +import { v4 as uuidv4 } from 'uuid'; + +@Injectable() +export class MemoroService { + private readonly MEMORO_APP_ID: string; + private memoroClient: SupabaseClient; + private memoroServiceClient: SupabaseClient; // Service role client for RLS bypass + private readonly memoroUrl: string; + private readonly memoroKey: string; + private readonly memoroServiceKey: string; + + constructor( + private configService: ConfigService, + private spacesService: SpacesClientService, + private spaceSyncService: SpaceSyncService, + private creditConsumptionService: CreditConsumptionService, + private headlineService: HeadlineService + ) { + this.MEMORO_APP_ID = this.configService.get( + 'MEMORO_APP_ID', + '973da0c1-b479-4dac-a1b0-ed09c72caca8' + ); + + // Initialize Memoro-specific clients + this.memoroUrl = this.configService.get('MEMORO_SUPABASE_URL'); + this.memoroKey = this.configService.get('MEMORO_SUPABASE_ANON_KEY'); + this.memoroServiceKey = this.configService.get('MEMORO_SUPABASE_SERVICE_KEY'); + + if (this.memoroUrl && this.memoroKey) { + console.log('Creating memoroClient with Memoro-specific credentials'); + this.memoroClient = createClient(this.memoroUrl, this.memoroKey); + + if (this.memoroServiceKey) { + console.log('Creating memoroServiceClient with service role credentials'); + this.memoroServiceClient = createClient(this.memoroUrl, this.memoroServiceKey); + } else { + console.warn('MEMORO_SUPABASE_SERVICE_KEY not provided, falling back to anon key'); + this.memoroServiceClient = this.memoroClient; + } + } else { + throw new Error('MEMORO_SUPABASE_URL or MEMORO_SUPABASE_ANON_KEY not provided'); + } + } + + // Getter methods for Supabase connection info (used for direct DB operations in emergency) + getSupabaseUrl(): string { + return 
this.memoroUrl; + } + + getSupabaseKey(): string { + return this.memoroKey; + } + + async getMemoroSpaces(userId: string, token: string): Promise<MemoroSpaceDto[]> { + try { + console.info('WE DONT GET SPACES YET, FUTURE IMPLEMENTATION'); + return []; + console.log(`[getMemoroSpaces] Starting request for userId: ${userId}`); + + // Get spaces accessible to this user using the SpacesService + console.log(`[getMemoroSpaces] Calling spacesService.getUserSpaces for userId: ${userId}`); + const spaces = await this.spacesService.getUserSpaces(userId, token); + + console.log(`[getMemoroSpaces] Successfully filtered spaces. Count: ${spaces?.length || 0}`); + if (spaces && spaces.length > 0) { + console.log('[getMemoroSpaces] First space sample:', JSON.stringify(spaces[0], null, 2)); + } + + // If we have spaces, get the memo counts for each + if (spaces && spaces.length > 0) { + const spaceIds = spaces.map((space) => space.id); + console.log( + `[getMemoroSpaces] Getting memo counts for spaceIds: ${JSON.stringify(spaceIds)}` + ); + + try { + // Get the Memoro-specific client with JWT authentication if available + let memoroClient; + if (token) { + console.log(`[getMemoroSpaces] Using authenticated Memoro client with JWT`); + memoroClient = this.getMemoroClientWithAuth(token); + } else { + console.log(`[getMemoroSpaces] Using unauthenticated Memoro client (no JWT provided)`); + memoroClient = this.memoroClient; + } + + // Check if the memo_spaces table exists first + console.log(`[getMemoroSpaces] Checking if memo_spaces table exists`); + const { error: tableCheckError } = await memoroClient + .from('memo_spaces') + .select('space_id') + .limit(1); + + if (tableCheckError && tableCheckError.code === '42P01') { + // Table doesn't exist, set all counts to 0 + console.log( + `[getMemoroSpaces] memo_spaces table doesn't exist, setting all counts to 0` + ); + spaces.forEach((space: MemoroSpaceDto) => { + space.memo_count = 0; + }); + } else { + // Table exists, try to get counts + // Set default 
counts to 0 for all spaces first + spaces.forEach((space: MemoroSpaceDto) => { + space.memo_count = 0; + }); + + // Try to get actual counts where available + for (const space of spaces) { + try { + const { count, error } = await memoroClient + .from('memo_spaces') + .select('*', { count: 'exact' }) + .eq('space_id', space.id); + + if (!error) { + space.memo_count = count || 0; + console.log(`[getMemoroSpaces] Space ${space.id} has ${space.memo_count} memos`); + } + } catch (countError) { + console.error( + `[getMemoroSpaces] Error counting memos for space ${space.id}:`, + countError + ); + // Count remains 0 as set above + } + } + } + } catch (error) { + console.error('[getMemoroSpaces] Exception in memo counts processing:', error); + // Set all counts to 0 if there was an error + spaces.forEach((space: MemoroSpaceDto) => { + space.memo_count = 0; + }); + } + } + + // Sanitize spaces data before returning to frontend + const sanitizedSpaces = this.sanitizeSpacesForFrontend(spaces || [], userId); + console.log(`[getMemoroSpaces] Returning ${sanitizedSpaces.length} sanitized spaces`); + + return sanitizedSpaces; + } catch (error) { + console.error('Unexpected error in getMemoroSpaces:', error); + const errorMessage = + error.message || (typeof error === 'object' ? 
JSON.stringify(error) : String(error)); + throw new Error(`Failed to get Memoro spaces: ${errorMessage}`); + } + } + + /** + * Sanitizes space data for frontend consumption by removing sensitive information + * @param spaces Array of space objects to sanitize + * @returns Array of sanitized space objects + */ + private sanitizeSpacesForFrontend(spaces: MemoroSpaceDto[], userId: string): MemoroSpaceDto[] { + return spaces.map((space) => { + // Check if the user is the owner + const isOwner = space.owner_id === userId || space.roles?.members?.[userId]?.role === 'owner'; + + // Only keep essential properties that the frontend needs + return { + id: space.id, + name: space.name, + owner_id: space.owner_id, + memo_count: space.memo_count || 0, + // Only include minimal role information if needed + roles: space.roles + ? { + members: space.roles.members ? Object.keys(space.roles.members) : [], + } + : { members: [] }, + created_at: space.created_at, + updated_at: space.updated_at, + isOwner, // Add the isOwner flag + } as MemoroSpaceDto; + }); + } + + async createMemoroSpace(userId: string, spaceName: string, token: string) { + try { + // Create the space in the middleware first + const space = await this.spacesService.createSpace(userId, spaceName, token); + + // Sync the owner to the space_members table for RLS access control + try { + await this.spaceSyncService.syncSpaceMembership( + space.id, + userId, + 'owner' // Owner role + ); + console.log(`Successfully synced owner ${userId} for new space ${space.id}`); + } catch (syncError) { + // Log but don't fail if sync fails + console.error(`Failed to sync space owner: ${syncError.message}`); + } + + return space; + } catch (error) { + console.error('Error creating Memoro space:', error); + throw new Error(`Failed to create Memoro space: ${error.message}`); + } + } + + async getMemoroSpaceDetails(userId: string, spaceId: string, token: string) { + try { + // Try to get the space details directly first + try { + // Get 
full space details using the spaces service + const spaceDetails = await this.spacesService.getSpaceDetails(spaceId, token); + return spaceDetails; + } catch (detailsError) { + // If this fails, log the error and try verification as a fallback + console.log( + `Initial space details fetch failed: ${detailsError.message}. Trying access verification...` + ); + + // Verify user has access to this Memoro space through the Spaces service + await this.verifyMemoroSpaceAccess(userId, spaceId, token); + + // If verification succeeds, try getting details again + return await this.spacesService.getSpaceDetails(spaceId, token); + } + } catch (error) { + console.error('Error fetching Memoro space details:', error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to get Memoro space details: ${error.message}`); + } + } + + /** + * Gets invites for a space + * @param spaceId Space ID to get invites for + * @param token JWT token for authorization + */ + async getSpaceInvites(spaceId: string, token: string) { + try { + console.log(`[getSpaceInvites] Getting invites for space ${spaceId}`); + // Proxy the request to the spaces service + const invitesResult = await this.spacesService.getSpaceInvites(spaceId, token); + console.log(`[getSpaceInvites] Successfully retrieved invites for space ${spaceId}`); + return invitesResult; + } catch (error) { + console.error(`[getSpaceInvites] Error getting invites for space ${spaceId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to get invites for space ${spaceId}: ${error.message}`); + } + } + + /** + * Invites a user to a space by email + * @param userId ID of the user sending the invitation + * @param spaceId Space ID to invite to + * @param email Email of the user 
to invite + * @param role Role to assign (owner, admin, editor, viewer) + * @param token JWT token for authorization + * @returns Result of the invitation operation + */ + async inviteUserToSpace( + userId: string, + spaceId: string, + email: string, + role: string, + token: string + ) { + try { + console.log( + `[inviteUserToSpace] User ${userId} inviting ${email} to space ${spaceId} with role ${role}` + ); + + // Validate input + if (!spaceId || !email || !role) { + throw new BadRequestException('Space ID, email, and role are required'); + } + + // Validate the role + const validRoles = ['owner', 'admin', 'editor', 'viewer']; + if (!validRoles.includes(role)) { + throw new BadRequestException(`Invalid role. Must be one of: ${validRoles.join(', ')}`); + } + + // Verify that the user has access to this space and is an owner/admin + try { + // First verify the user has access to the space + await this.verifyMemoroSpaceAccess(userId, spaceId, token); + + // Now proxy the invite request to the spaces service + const result = await this.spacesService.addSpaceMember(spaceId, email, role, token); + console.log(`[inviteUserToSpace] Successfully invited ${email} to space ${spaceId}`); + + // If the user already exists (has an ID), sync them to the space_members table + if (result.invitee_id) { + try { + await this.spaceSyncService.syncSpaceMembership( + spaceId, + result.invitee_id, + role, + userId // invited by current user + ); + console.log( + `[inviteUserToSpace] Synced space member ${result.invitee_id} to space ${spaceId}` + ); + } catch (syncError) { + // Log but don't fail if sync fails + console.error(`[inviteUserToSpace] Failed to sync space member: ${syncError.message}`); + } + } + + return result; + } catch (error) { + console.error(`[inviteUserToSpace] Error verifying access or sending invite:`, error); + throw error; + } + } catch (error) { + console.error(`[inviteUserToSpace] Error inviting user to space ${spaceId}:`, error); + + if ( + error instanceof 
NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to invite user to space: ${error.message}`); + } + } + + /** + * Resends an invitation to join a space + * @param userId ID of the user performing the action + * @param inviteId ID of the invitation to resend + * @param token JWT token for authorization + * @returns Success response + */ + async resendSpaceInvite(userId: string, inviteId: string, token: string) { + try { + console.log(`[resendSpaceInvite] User ${userId} resending invite ${inviteId}`); + + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + // Proxy the resend request to the spaces service + const result = await this.spacesService.resendInvite(inviteId, token); + console.log(`[resendSpaceInvite] Successfully resent invite ${inviteId}`); + return result; + } catch (error) { + console.error(`[resendSpaceInvite] Error resending invite ${inviteId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to resend invitation: ${error.message}`); + } + } + + /** + * Cancels an invitation to join a space + * @param userId ID of the user performing the action + * @param inviteId ID of the invitation to cancel + * @param token JWT token for authorization + * @returns Success response + */ + async cancelSpaceInvite(userId: string, inviteId: string, token: string) { + try { + console.log(`[cancelSpaceInvite] User ${userId} canceling invite ${inviteId}`); + + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + // Proxy the cancel request to the spaces service + const result = await this.spacesService.cancelInvite(inviteId, token); + console.log(`[cancelSpaceInvite] Successfully canceled invite ${inviteId}`); + return result; + } catch (error) { + 
console.error(`[cancelSpaceInvite] Error canceling invite ${inviteId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to cancel invitation: ${error.message}`); + } + } + + /** + * Verify if a user has access to a Memoro space + */ + private async verifyMemoroSpaceAccess(userId: string, spaceId: string, token: string) { + return this.spacesService.verifySpaceAccess(userId, spaceId, token); + } + + // The sanitizeSpacesForFrontend method is now updated with isOwner flag above + + /** + * Verify if a memo exists and the user has access to it + */ + private async verifyMemoAccess(userId: string, memoId: string, token?: string) { + console.log(`[verifyMemoAccess] Verifying access to memo ${memoId} for user ${userId}`); + + // Use the Memoro-specific client with JWT if available + const client = token ? this.getMemoroClientWithAuth(token) : this.memoroClient; + + try { + // Check if the memo exists and belongs to the user + console.log(`[verifyMemoAccess] Querying Memoro database for memo ${memoId}`); + const { data: memo, error } = await client + .from('memos') + .select('user_id') + .eq('id', memoId) + .single(); + + if (error) { + console.error(`[verifyMemoAccess] Database error:`, error); + throw new NotFoundException(`Memo not found: ${error.message}`); + } + + if (!memo) { + console.error(`[verifyMemoAccess] Memo ${memoId} not found in database`); + throw new NotFoundException('Memo not found: no data returned'); + } + + console.log(`[verifyMemoAccess] Memo found, belongs to user: ${memo.user_id}`); + + // In this implementation, we're assuming that the user can only link their own memos + if (memo.user_id !== userId) { + console.error( + `[verifyMemoAccess] User ${userId} does not have permission to access memo owned by ${memo.user_id}` + ); + throw new ForbiddenException('You do not have permission to access this memo'); 
+ } + + console.log(`[verifyMemoAccess] Access verified for memo ${memoId}`); + return true; + } catch (error) { + if (error instanceof NotFoundException || error instanceof ForbiddenException) { + throw error; + } + console.error(`[verifyMemoAccess] Unexpected error:`, error); + throw new NotFoundException(`Memo access verification failed: ${error.message}`); + } + } + + /** + * Create a client with JWT authentication + * @param jwt The JWT token for authentication + * @param useServiceRole Whether to use service role client (bypasses RLS) + * @returns A Supabase client with appropriate authentication + */ + private getMemoroClientWithAuth(jwt: string, useServiceRole: boolean = false): SupabaseClient { + // If we need to bypass RLS and we have a service role client, return it + if (useServiceRole && this.memoroServiceClient) { + console.log('Using service role client to bypass RLS'); + return this.memoroServiceClient; + } + + console.log('Creating authenticated Memoro client with JWT'); + + // Get the Memoro Supabase URL and key + const memoroUrl = this.configService.get('MEMORO_SUPABASE_URL'); + const memoroKey = this.configService.get('MEMORO_SUPABASE_ANON_KEY'); + + if (!memoroUrl || !memoroKey) { + throw new Error('MEMORO_SUPABASE_URL or MEMORO_SUPABASE_ANON_KEY not provided'); + } + + // Create a new client with the JWT token to avoid modifying the shared client + return createClient(memoroUrl, memoroKey, { + global: { + headers: { + Authorization: `Bearer ${jwt}`, + }, + }, + }); + } + + /** + * Link a memo to a space + */ + async linkMemoToSpace(userId: string, linkMemoSpaceDto: LinkMemoSpaceDto, token?: string) { + try { + const { memoId, spaceId } = linkMemoSpaceDto; + + if (!memoId || !spaceId) { + throw new BadRequestException('Memo ID and Space ID are required'); + } + + console.log( + `[linkMemoToSpace] Attempting to link memo ${memoId} to space ${spaceId} for user ${userId}` + ); + + // Verify the user has access to both the memo and the space + 
await this.verifyMemoAccess(userId, memoId, token); + await this.verifyMemoroSpaceAccess(userId, spaceId, token); + + // Get the Memoro-specific client with JWT authentication if available + const memoroClient = token ? this.getMemoroClientWithAuth(token) : this.memoroClient; + + // Check if the link already exists + console.log(`[linkMemoToSpace] Checking if link already exists`); + const { data: existingLink, error: checkError } = await memoroClient + .from('memo_spaces') + .select('*') + .eq('memo_id', memoId) + .eq('space_id', spaceId) + .maybeSingle(); + + if (checkError) { + console.error(`[linkMemoToSpace] Error checking for existing link:`, checkError); + } + + if (existingLink) { + console.log(`[linkMemoToSpace] Link already exists`); + // Link already exists, no need to create it again + return { success: true, message: 'Memo is already linked to this space' }; + } + + // Create the link + console.log( + `[linkMemoToSpace] Creating new link between memo ${memoId} and space ${spaceId}` + ); + const { error } = await memoroClient.from('memo_spaces').insert({ + memo_id: memoId, + space_id: spaceId, + created_at: new Date(), + }); + + if (error) { + console.error(`[linkMemoToSpace] Error creating link:`, error); + throw new Error(`Failed to link memo to space: ${error.message}`); + } + + console.log(`[linkMemoToSpace] Successfully linked memo ${memoId} to space ${spaceId}`); + return { success: true, message: 'Memo linked to space successfully' }; + } catch (error) { + console.error('Error linking memo to space:', error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to link memo to space: ${error.message}`); + } + } + + /** + * Unlink a memo from a space + */ + async unlinkMemoFromSpace( + userId: string, + unlinkMemoSpaceDto: UnlinkMemoSpaceDto, + token?: string + ) { + try { + const { memoId, spaceId } = 
unlinkMemoSpaceDto; + + if (!memoId || !spaceId) { + throw new BadRequestException('Memo ID and Space ID are required'); + } + + console.log( + `[unlinkMemoFromSpace] Attempting to unlink memo ${memoId} from space ${spaceId} for user ${userId}` + ); + + // Verify the user has access to both the memo and the space + await this.verifyMemoAccess(userId, memoId, token); + await this.verifyMemoroSpaceAccess(userId, spaceId, token); + + // Get the Memoro-specific client with JWT authentication if available + const memoroClient = token ? this.getMemoroClientWithAuth(token) : this.memoroClient; + + // Delete the link + console.log( + `[unlinkMemoFromSpace] Deleting link between memo ${memoId} and space ${spaceId}` + ); + const { error } = await memoroClient + .from('memo_spaces') + .delete() + .eq('memo_id', memoId) + .eq('space_id', spaceId); + + if (error) { + console.error(`[unlinkMemoFromSpace] Error deleting link:`, error); + throw new Error(`Failed to unlink memo from space: ${error.message}`); + } + + console.log( + `[unlinkMemoFromSpace] Successfully unlinked memo ${memoId} from space ${spaceId}` + ); + return { success: true, message: 'Memo unlinked from space successfully' }; + } catch (error) { + console.error('Error unlinking memo from space:', error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to unlink memo from space: ${error.message}`); + } + } + + /** + * Get all memos for a specific space + */ + async getSpaceMemos(userId: string, spaceId: string, token?: string) { + try { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + + // Try to verify the user has access to the space, but don't fail if the space doesn't exist in core service + try { + await this.verifyMemoroSpaceAccess(userId, spaceId, token); + } catch (verifyError) { + console.warn(`Space access verification error: 
${verifyError.message}`); + // If we can't verify access, but we have a record in our Supabase database, continue anyway + // This helps with cases where spaces might exist in Memoro but not fully synced with mana-core + } + + // Use the service role client after verifying authorization to bypass RLS + // This ensures we can see all memos in the space regardless of who created them + const memoroClient = token + ? this.getMemoroClientWithAuth(token, true) + : this.memoroServiceClient; + + // Get all memos linked to this space + const { data: memoSpaces, error: joinError } = await memoroClient + .from('memo_spaces') + .select('memo_id') + .eq('space_id', spaceId); + + if (joinError) { + throw new Error(`Failed to get memo-space relationships: ${joinError.message}`); + } + + if (!memoSpaces || memoSpaces.length === 0) { + return { memos: [] }; + } + + // Extract memo IDs + const memoIds = memoSpaces.map((ms) => ms.memo_id); + + // Get the memo details + const { data: memos, error: memosError } = await memoroClient + .from('memos') + .select( + ` + id, + title, + user_id, + source, + style, + is_pinned, + is_archived, + is_public, + metadata, + created_at, + updated_at + ` + ) + .in('id', memoIds); + + if (memosError) { + throw new Error(`Failed to get memos: ${memosError.message}`); + } + + return { memos: memos || [] }; + } catch (error) { + console.error('Error getting space memos:', error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to get space memos: ${error.message}`); + } + } + + /** + * Deletes a memoro space and cleans up associated memo connections + */ + async deleteMemoroSpace(userId: string, spaceId: string, token: string) { + try { + // First, clean up all memo_spaces entries for this spaceId + console.log(`Cleaning up memo_spaces entries for space ${spaceId}`); + + // Use the Memoro-specific client with JWT if 
available + const client = token ? this.getMemoroClientWithAuth(token) : this.memoroClient; + + // Delete all memo_spaces entries for this space ID + const { error: deleteError } = await client + .from('memo_spaces') + .delete() + .eq('space_id', spaceId); + + if (deleteError) { + console.error(`Error cleaning up memo_spaces for space ${spaceId}:`, deleteError); + // Continue with space deletion even if cleanup fails + } else { + console.log(`Successfully cleaned up memo_spaces for space ${spaceId}`); + } + + // Now call the spaces service to delete the space + const response = await this.spacesService.deleteSpace(userId, spaceId, token); + + // Return the response from the spaces service + return response; + } catch (error) { + console.error(`Error in deleteMemoroSpace:`, error); + // Rethrow the error to be handled by the controller + throw error; + } + } + + /** + * Allows a non-owner to leave a space + */ + async leaveSpace(userId: string, spaceId: string, token: string) { + try { + // First, clean up any memo_spaces entries created by this user for this spaceId + console.log(`Cleaning up user's memo_spaces entries for space ${spaceId}`); + + // Use the Memoro-specific client with JWT if available + const client = token ? 
this.getMemoroClientWithAuth(token) : this.memoroClient; + + // First get the user's memos + const { data: userMemos, error: memosError } = await client + .from('memos') + .select('id') + .eq('user_id', userId); + + if (memosError) { + console.error(`Error fetching user memos:`, memosError); + } else if (userMemos && userMemos.length > 0) { + // Get the IDs of user's memos + const memoIds = userMemos.map((memo) => memo.id); + + // Delete any memo_spaces links for this user's memos in this space + const { error: deleteError } = await client + .from('memo_spaces') + .delete() + .eq('space_id', spaceId) + .in('memo_id', memoIds); + + if (deleteError) { + console.error(`Error cleaning up user's memo_spaces:`, deleteError); + } else { + console.log(`Successfully cleaned up user's memo connections for space ${spaceId}`); + } + } + + // Now call the spaces service to remove the user from the space + const result = await this.spacesService.leaveSpace(userId, spaceId, token); + + // After successfully leaving the space, remove the user from the space_members table + try { + await this.spaceSyncService.removeSpaceMembership(spaceId, userId); + console.log(`Successfully removed user ${userId} from space_members for space ${spaceId}`); + } catch (syncError) { + // Log but don't fail if sync fails + console.error(`Failed to remove user from space_members: ${syncError.message}`); + } + + return result; + } catch (error) { + console.error(`Error in leaveSpace:`, error); + // Rethrow the error to be handled by the controller + throw error; + } + } + + /** + * Gets all pending invites for the user + * @param userId ID of the user + * @param token JWT token for authorization + * @returns Object containing pending invites + */ + async getUserPendingInvites(userId: string, token: string): Promise<any> { + try { + console.log(`[getUserPendingInvites] Getting pending invites for user ${userId}`); + + // Get all pending invites from spaces service + const invitesResult = await 
this.spacesService.getUserPendingInvites(token); + + console.log( + `[getUserPendingInvites] Successfully retrieved ${invitesResult?.invites?.length || 0} pending invites for user ${userId}` + ); + return invitesResult; + } catch (error) { + console.error( + `[getUserPendingInvites] Error getting pending invites for user ${userId}:`, + error + ); + + if (error instanceof NotFoundException) { + // Return empty invites array instead of throwing an error if not found + return { invites: [] }; + } else if (error instanceof ForbiddenException || error instanceof BadRequestException) { + throw error; + } else { + // For any other errors, return empty array + console.error(`[getUserPendingInvites] Error fetching pending invites: ${error.message}`); + return { invites: [] }; + } + } + } + + /** + * Accepts a space invitation + * @param userId ID of the user accepting the invitation + * @param inviteId ID of the invitation to accept + * @param token JWT token for authorization + * @returns Success response + */ + async acceptSpaceInvite(userId: string, inviteId: string, token: string): Promise<any> { + try { + console.log(`[acceptSpaceInvite] User ${userId} accepting invite ${inviteId}`); + + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + // Call the spaces service to accept the invitation + // Pass the userId explicitly since auth.uid() won't work with JWTs + const response = await fetch( + `${this.spacesService['spacesServiceUrl']}/spaces/invites/accept`, + { + method: 'POST', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + inviteId, + userId, // Add the userId explicitly + }), + } + ); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + const errorMessage = + errorData.message || + errorData.error || + `Error ${response.status}: ${response.statusText}`; + + if (response.status === 404) { 
+ throw new NotFoundException(`Invitation not found: ${errorMessage}`); + } else if (response.status === 403) { + throw new ForbiddenException(`Not authorized to accept this invitation: ${errorMessage}`); + } else { + throw new BadRequestException(`Failed to accept invitation: ${errorMessage}`); + } + } + + const data = await response.json(); + console.log(`[acceptSpaceInvite] Successfully accepted invite ${inviteId}`); + + // After successfully accepting the invite, sync the user to the space_members table + if (data?.space?.id && data?.role) { + try { + await this.spaceSyncService.syncSpaceMembership(data.space.id, userId, data.role); + console.log( + `[acceptSpaceInvite] Synced user ${userId} as ${data.role} to space ${data.space.id}` + ); + } catch (syncError) { + // Log but don't fail if sync fails + console.error(`[acceptSpaceInvite] Failed to sync space member: ${syncError.message}`); + } + } + + return data; + } catch (error) { + console.error(`[acceptSpaceInvite] Error accepting invite ${inviteId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to accept invitation: ${error.message}`); + } + } + + /** + * Declines a space invitation + * @param userId ID of the user declining the invitation + * @param inviteId ID of the invitation to decline + * @param token JWT token for authorization + * @returns Success response + */ + async declineSpaceInvite(userId: string, inviteId: string, token: string): Promise<any> { + try { + console.log(`[declineSpaceInvite] User ${userId} declining invite ${inviteId}`); + + if (!inviteId) { + throw new BadRequestException('Invite ID is required'); + } + + // Call the spaces service to decline the invitation + // Pass the userId explicitly since auth.uid() won't work with JWTs + const response = await fetch( + `${this.spacesService['spacesServiceUrl']}/spaces/invites/decline`, + { + method: 
'POST', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + inviteId, + userId, // Add the userId explicitly + }), + } + ); + + if (!response.ok) { + const errorData = await response.json().catch(() => ({})); + const errorMessage = + errorData.message || + errorData.error || + `Error ${response.status}: ${response.statusText}`; + + if (response.status === 404) { + throw new NotFoundException(`Invitation not found: ${errorMessage}`); + } else if (response.status === 403) { + throw new ForbiddenException( + `Not authorized to decline this invitation: ${errorMessage}` + ); + } else { + throw new BadRequestException(`Failed to decline invitation: ${errorMessage}`); + } + } + + const data = await response.json(); + console.log(`[declineSpaceInvite] Successfully declined invite ${inviteId}`); + return data; + } catch (error) { + console.error(`[declineSpaceInvite] Error declining invite ${inviteId}:`, error); + + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + + throw new Error(`Failed to decline invitation: ${error.message}`); + } + } + + /** + * Validates a memo for retry operations + * @param userId - User ID making the request + * @param memoId - Memo ID to validate + * @param token - Authentication token + * @returns Memo data if valid, null otherwise + */ + async validateMemoForRetry(userId: string, memoId: string, token: string): Promise<any> { + try { + console.log(`[validateMemoForRetry] Validating memo ${memoId} for user ${userId}`); + + // Create authenticated client + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get memo and verify ownership (include transcript, which retryHeadline checks) + const { data: memo, error } = await authClient + .from('memos') + .select('id, user_id, metadata, source, title, transcript') + .eq('id', memoId) + .eq('user_id', userId) + 
.single(); + + if (error) { + console.error(`[validateMemoForRetry] Error fetching memo ${memoId}:`, error); + return null; + } + + if (!memo) { + console.warn( + `[validateMemoForRetry] Memo ${memoId} not found or access denied for user ${userId}` + ); + return null; + } + + console.log(`[validateMemoForRetry] Memo ${memoId} validated for user ${userId}`); + return memo; + } catch (error) { + console.error(`[validateMemoForRetry] Error validating memo ${memoId}:`, error); + return null; + } + } + + /** + * Retries transcription for a failed memo + * @param userId - User ID making the request + * @param memoId - Memo ID to retry + * @param token - Authentication token + * @param retryAttempt - Current retry attempt number + */ + async retryTranscription( + userId: string, + memoId: string, + token: string, + retryAttempt: number + ): Promise<{ success: boolean }> { + try { + console.log( + `[retryTranscription] Retrying transcription for memo ${memoId}, attempt ${retryAttempt}` + ); + + // Get memo to extract audio path and space ID + const memo = await this.validateMemoForRetry(userId, memoId, token); + if (!memo) { + throw new NotFoundException('Memo not found'); + } + + const audioPath = memo.source?.audio_path || memo.source?.path; + const spaceId = memo.metadata?.spaceId; // If memo was associated with space + + if (!audioPath) { + throw new BadRequestException('No audio path found in memo'); + } + + // Update retry attempt in metadata first + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + const updatedMetadata = { + ...memo.metadata, + processing: { + ...memo.metadata?.processing, + transcription: { + ...memo.metadata?.processing?.transcription, + status: 'processing', + retryAttempts: retryAttempt, + lastRetryAt: new Date().toISOString(), + }, + }, + }; + + await authClient.from('memos').update({ metadata: updatedMetadata }).eq('id', memoId); + + // Call transcribe Edge Function (normal 
processing, will charge credits if successful) + const SUPABASE_URL = this.memoroUrl; + const response = await fetch(`${SUPABASE_URL}/functions/v1/transcribe`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify({ + audioPath, + memoId, + spaceId, + }), + }); + + if (!response.ok) { + const errorText = await response.text(); + console.error(`[retryTranscription] Edge Function error:`, errorText); + throw new BadRequestException(`Transcription retry failed: ${errorText}`); + } + + console.log(`[retryTranscription] Successfully initiated retry for memo ${memoId}`); + return { success: true }; + } catch (error) { + console.error(`[retryTranscription] Error retrying transcription for memo ${memoId}:`, error); + throw error; + } + } + + /** + * Retries headline generation for a failed memo + * @param userId - User ID making the request + * @param memoId - Memo ID to retry + * @param token - Authentication token + * @param retryAttempt - Current retry attempt number + */ + async retryHeadline( + userId: string, + memoId: string, + token: string, + retryAttempt: number + ): Promise<{ success: boolean }> { + try { + console.log( + `[retryHeadline] Retrying headline generation for memo ${memoId}, attempt ${retryAttempt}` + ); + + // Validate memo ownership + const memo = await this.validateMemoForRetry(userId, memoId, token); + if (!memo) { + throw new NotFoundException('Memo not found'); + } + + // Check if memo has transcript (now in separate column) + if (!memo.transcript && !memo.source?.transcript && !memo.source?.transcription) { + throw new BadRequestException('No transcript found in memo for headline generation'); + } + + // Update retry attempt in metadata first + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + const updatedMetadata = { + ...memo.metadata, + processing: { + ...memo.metadata?.processing, + 
headline_and_intro: { + ...memo.metadata?.processing?.headline_and_intro, + status: 'processing', + retryAttempts: retryAttempt, + lastRetryAt: new Date().toISOString(), + }, + }, + }; + + await authClient.from('memos').update({ metadata: updatedMetadata }).eq('id', memoId); + + // Generate headline via internal AI service (replaces Edge Function call) + const result = await this.headlineService.processHeadlineForMemo(memoId); + + console.log( + `[retryHeadline] Successfully generated headline for memo ${memoId}: "${result.headline}"` + ); + return { success: true }; + } catch (error) { + console.error( + `[retryHeadline] Error retrying headline generation for memo ${memoId}:`, + error + ); + throw error; + } + } + + /** + * Upload audio file to storage and create memo (without processing) + * This method only handles file upload and memo creation for the new upload flow. + * + * @param userId - User ID + * @param file - Audio file from multer + * @param duration - Audio duration in seconds + * @param spaceId - Optional space ID to associate with memo + * @param blueprintId - Optional blueprint ID + * @param memoId - Optional existing memo ID to update + * @param token - Authentication token + * @returns Object with memo ID and audio path + */ + async uploadAudioToStorage( + userId: string, + file: Express.Multer.File, + duration: number, + spaceId?: string, + blueprintId?: string | null, + memoId?: string, + token?: string + ): Promise<{ + memoId: string; + audioPath: string; + }> { + try { + console.log( + `[uploadAudioToStorage] Uploading audio for user ${userId}, duration: ${duration}s, filename: ${file.originalname}` + ); + + // Create authenticated client + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Upload the audio file to Supabase Storage + console.log(`[uploadAudioToStorage] Uploading audio file to storage...`); + + // Create unique file path with memoId folder 
structure + const timestamp = new Date().toISOString().replace(/[:.]/g, '-'); + const fileExtension = + file.originalname?.split('.').pop() || file.mimetype?.split('/')[1] || 'm4a'; + const uniqueFilename = `audio_${timestamp}.${fileExtension}`; + + // Generate a memo ID for the path if not provided + const pathMemoId = memoId || randomUUID(); + const audioPath = `${userId}/${pathMemoId}/${uniqueFilename}`; + + try { + // Upload to Supabase Storage using the file buffer + console.log(`[uploadAudioToStorage] Uploading to bucket: user-uploads, path: ${audioPath}`); + console.log(`[uploadAudioToStorage] File buffer size: ${file.buffer.length} bytes`); + console.log( + `[uploadAudioToStorage] Content type: ${file.mimetype || `audio/${fileExtension}`}` + ); + + // Test bucket access first + const { data: buckets, error: listError } = await authClient.storage.listBuckets(); + console.log( + `[uploadAudioToStorage] Available buckets:`, + buckets?.map((b) => b.name) || 'none' + ); + if (listError) console.log(`[uploadAudioToStorage] Bucket list error:`, listError); + + const { data: uploadData, error: uploadError } = await authClient.storage + .from('user-uploads') + .upload(audioPath, file.buffer, { + contentType: file.mimetype || `audio/${fileExtension}`, + cacheControl: '3600', + upsert: false, + }); + + if (uploadError) { + console.error(`[uploadAudioToStorage] Upload error details:`, uploadError); + throw new Error(`Upload failed: ${uploadError.message}`); + } + + console.log(`[uploadAudioToStorage] File uploaded successfully: ${uploadData.path}`); + console.log(`[uploadAudioToStorage] Upload response:`, uploadData); + } catch (uploadError) { + console.error(`[uploadAudioToStorage] Upload failed:`, uploadError); + throw new BadRequestException(`Failed to upload audio file: ${uploadError.message}`); + } + + // Create or update memo + const currentTimestamp = new Date().toISOString(); + + const sourceData = { + type: 'audio', + audio_path: audioPath, + format: 
fileExtension, + duration: duration, + original_filename: file.originalname, + }; + + const metadata = { + processing: { + transcription: { + status: 'pending', + timestamp: currentTimestamp, + }, + }, + blueprint_id: blueprintId || null, + spaceId: spaceId || null, + }; + + let finalMemoId: string; + + if (memoId) { + // Update existing memo + console.log(`[uploadAudioToStorage] Updating existing memo ${memoId}...`); + const { error: updateError } = await authClient + .from('memos') + .update({ + source: sourceData, + updated_at: currentTimestamp, + metadata, + }) + .eq('id', memoId) + .eq('user_id', userId); + + if (updateError) { + throw new Error(`Failed to update memo: ${updateError.message}`); + } + + finalMemoId = memoId; + console.log(`[uploadAudioToStorage] Updated memo with ID: ${memoId}`); + } else { + // Create new memo with pre-generated ID + console.log(`[uploadAudioToStorage] Creating new memo with ID: ${pathMemoId}...`); + const { error: createError } = await authClient.from('memos').insert({ + id: pathMemoId, + user_id: userId, + source: sourceData, + is_pinned: false, + is_archived: false, + is_public: false, + created_at: currentTimestamp, + updated_at: currentTimestamp, + metadata, + }); + + if (createError) { + throw new Error(`Failed to create memo: ${createError.message}`); + } + + finalMemoId = pathMemoId; + console.log(`[uploadAudioToStorage] Created memo with ID: ${finalMemoId}`); + } + + // Link memo to space if spaceId provided + if (spaceId && !memoId) { + // Only link if it's a new memo + try { + console.log(`[uploadAudioToStorage] Linking memo ${finalMemoId} to space ${spaceId}`); + const { error: linkError } = await authClient.from('memo_spaces').insert({ + memo_id: finalMemoId, + space_id: spaceId, + created_at: currentTimestamp, + }); + + if (linkError) { + console.error( + `[uploadAudioToStorage] Failed to link memo to space: ${linkError.message}` + ); + // Don't fail the entire process for space linking errors + } + } catch 
(linkError) { + console.error(`[uploadAudioToStorage] Error linking memo to space:`, linkError); + } + } + + return { + memoId: finalMemoId, + audioPath, + }; + } catch (error) { + console.error(`[uploadAudioToStorage] Error uploading audio to storage:`, error); + if (error instanceof BadRequestException) { + throw error; + } + throw new BadRequestException(`Failed to upload audio: ${error.message}`); + } + } + + /** + * Enhanced routing constants + */ + private readonly FAST_TIME_LIMIT = 115 * 60; // 115 minutes in seconds + private readonly FAST_SIZE_LIMIT = 300 * 1024 * 1024; // 300MB in bytes + private readonly COST_PER_MINUTE = 2; // 2 mana per minute + + /** + * Determines transcription route and validates credits + */ + private async determineTranscriptionRoute( + duration: number, + fileSize: number, + userId: string, + spaceId?: string, + token?: string + ): Promise<{ route: 'fast' | 'batch'; cost: number }> { + // Calculate cost upfront (round up to nearest minute) + const estimatedCost = Math.ceil(duration / 60) * this.COST_PER_MINUTE; + + console.log( + `[determineTranscriptionRoute] Duration: ${duration}s (${Math.ceil(duration / 60)}min), Size: ${Math.round(fileSize / 1024 / 1024)}MB, Cost: ${estimatedCost} mana` + ); + + // Pre-validate credits before any processing + try { + const creditValidationUrl = this.configService.get('MANA_SERVICE_URL'); + + if (!creditValidationUrl) { + console.error('[CRITICAL ERROR] MANA_SERVICE_URL is not configured'); + throw new Error('Missing required configuration: MANA_SERVICE_URL'); + } + + const creditCheckBody = { + userId, + amount: estimatedCost, + spaceId: spaceId || null, + operation: 'transcription', + durationMinutes: Math.ceil(duration / 60), + }; + + console.log(`[determineTranscriptionRoute] Validating credits:`, creditCheckBody); + + const creditResponse = await fetch(`${creditValidationUrl}/credits/validate`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer 
${token}`, + }, + body: JSON.stringify(creditCheckBody), + }); + + if (!creditResponse.ok) { + const errorText = await creditResponse.text(); + console.error( + `[determineTranscriptionRoute] Credit validation failed: ${creditResponse.status} - ${errorText}` + ); + // Try to extract available credits from error text + const availableMatch = errorText.match(/Available:\s*(\d+)/); + const availableCredits = availableMatch ? parseInt(availableMatch[1]) : 0; + + throw new InsufficientCreditsException({ + requiredCredits: estimatedCost, + availableCredits, + creditType: spaceId ? 'space' : 'user', + operation: 'transcription', + spaceId, + }); + } + + const creditResult = await creditResponse.json(); + console.log(`[determineTranscriptionRoute] Credit validation successful:`, creditResult); + } catch (error) { + if (error instanceof BadRequestException) throw error; + console.error(`[determineTranscriptionRoute] Credit validation error:`, error); + throw new BadRequestException(`Credit validation failed: ${error.message}`); + } + + // Determine route based on dual limits + if (duration <= this.FAST_TIME_LIMIT && fileSize <= this.FAST_SIZE_LIMIT) { + console.log( + `[determineTranscriptionRoute] Using FAST route: duration ${duration}s <= ${this.FAST_TIME_LIMIT}s AND size ${fileSize} <= ${this.FAST_SIZE_LIMIT}` + ); + return { route: 'fast', cost: estimatedCost }; + } else { + console.log( + `[determineTranscriptionRoute] Using BATCH route: duration ${duration}s > ${this.FAST_TIME_LIMIT}s OR size ${fileSize} > ${this.FAST_SIZE_LIMIT}` + ); + return { route: 'batch', cost: estimatedCost }; + } + } + + /** + * Uploads audio to storage, creates memo in processing state, and routes to appropriate transcription service + * @param userId - User ID making the request + * @param file - Uploaded file from multer + * @param duration - Audio duration in seconds + * @param spaceId - Optional space ID to associate with memo + * @param blueprintId - Optional blueprint ID + * @param 
recordingLanguages - Optional array of recording languages + * @param token - Authentication token + * @returns Object with memo ID, file path and processing route information + */ + async uploadAndProcessAudio( + userId: string, + file: Express.Multer.File, + duration: number, + spaceId?: string, + blueprintId?: string | null, + recordingLanguages?: string[], + token?: string + ): Promise<{ + memoId: string; + filePath: string; + processingRoute: 'fast' | 'batch'; + message: string; + estimatedCost: number; + }> { + try { + console.log( + `[uploadAndProcessAudio] Processing audio for user ${userId}, duration: ${duration}s, size: ${file.buffer.length} bytes, filename: ${file.originalname}` + ); + + // 1. Determine transcription route and validate credits FIRST + const { route, cost } = await this.determineTranscriptionRoute( + duration, + file.buffer.length, + userId, + spaceId, + token + ); + + console.log( + `[uploadAndProcessAudio] Route determined: ${route}, estimated cost: ${cost} mana` + ); + + // Create authenticated client + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Upload the audio file to Supabase Storage + console.log(`[uploadAndProcessAudio] Uploading audio file to storage...`); + + // Create unique file path with memoId folder structure + const timestamp = new Date().toISOString().replace(/[:.]/g, '-'); + const fileExtension = + file.originalname?.split('.').pop() || file.mimetype?.split('/')[1] || 'm4a'; + const uniqueFilename = `audio_${timestamp}.${fileExtension}`; + + // Generate a memo ID for the path + const generatedMemoId = randomUUID(); + const audioPath = `${userId}/${generatedMemoId}/${uniqueFilename}`; + + try { + // Upload to Supabase Storage using the file buffer + console.log( + `[uploadAndProcessAudio] Uploading to bucket: user-uploads, path: ${audioPath}` + ); + console.log(`[uploadAndProcessAudio] File buffer size: ${file.buffer.length} 
bytes`); + console.log( + `[uploadAndProcessAudio] Content type: ${file.mimetype || `audio/${fileExtension}`}` + ); + + const { data: uploadData, error: uploadError } = await authClient.storage + .from('user-uploads') + .upload(audioPath, file.buffer, { + contentType: file.mimetype || `audio/${fileExtension}`, + cacheControl: '3600', + upsert: false, + }); + + if (uploadError) { + throw new Error(`Upload failed: ${uploadError.message}`); + } + + console.log(`[uploadAndProcessAudio] File uploaded successfully: ${uploadData.path}`); + } catch (uploadError) { + console.error(`[uploadAndProcessAudio] Upload failed:`, uploadError); + throw new BadRequestException(`Failed to upload audio file: ${uploadError.message}`); + } + + // Create memo in processing state + const currentTimestamp = new Date().toISOString(); + + const sourceData = { + type: 'audio', + audio_path: audioPath, + format: fileExtension, + duration: duration, + original_filename: file.originalname, + }; + + const metadata = { + processing: { + transcription: { + status: 'processing', + timestamp: currentTimestamp, + }, + }, + blueprintId: blueprintId || null, + spaceId: spaceId || null, + recordingLanguages: recordingLanguages || null, + }; + + console.log(`[processAudioForTranscription] Creating memo with ID: ${generatedMemoId}...`); + const { error: createError } = await authClient.from('memos').insert({ + id: generatedMemoId, + user_id: userId, + source: sourceData, + is_pinned: false, + is_archived: false, + is_public: false, + created_at: currentTimestamp, + updated_at: currentTimestamp, + metadata, + }); + + if (createError) { + throw new Error(`Failed to create memo: ${createError.message}`); + } + + const memoId = generatedMemoId; + console.log(`[processAudioForTranscription] Created memo with ID: ${memoId}`); + + // Link memo to space if spaceId provided + if (spaceId) { + try { + console.log(`[processAudioForTranscription] Linking memo ${memoId} to space ${spaceId}`); + const { error: linkError 
} = await authClient.from('memo_spaces').insert({ + memo_id: memoId, + space_id: spaceId, + created_at: currentTimestamp, + }); + + if (linkError) { + console.error( + `[processAudioForTranscription] Failed to link memo to space: ${linkError.message}` + ); + // Don't fail the entire process for space linking errors + } + } catch (linkError) { + console.error(`[processAudioForTranscription] Error linking memo to space:`, linkError); + } + } + + // Route to appropriate transcription service using new architecture + const audioServiceUrl = + this.configService.get('AUDIO_MICROSERVICE_URL') || + 'https://audio-microservice-624477741877.europe-west3.run.app'; + + console.log(`[uploadAndProcessAudio] Routing to ${route} transcription service...`); + + if (route === 'fast') { + // Route to audio microservice fast transcription + console.log(`[uploadAndProcessAudio] Calling audio microservice fast transcription...`); + + const requestBody = { + audioPath, + memoId, + userId, + spaceId, + recordingLanguages: + recordingLanguages && recordingLanguages.length > 0 ? 
recordingLanguages : undefined, + }; + + try { + const response = await fetch(`${audioServiceUrl}/audio/transcribe-realtime`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify(requestBody), + }); + + if (!response.ok) { + const errorText = await response.text(); + console.error(`[uploadAndProcessAudio] Fast transcription service error:`, errorText); + + // Update memo status to error + await this.updateMemoProcessingStatus(authClient, memoId, 'transcription', 'error', { + reason: `Fast transcription failed: ${errorText}`, + route: 'fast', + estimatedCost: cost, + }); + + throw new BadRequestException(`Fast transcription failed: ${errorText}`); + } else { + console.log(`[uploadAndProcessAudio] Fast transcription service called successfully`); + } + } catch (transcribeError) { + console.error( + `[uploadAndProcessAudio] Error calling fast transcription:`, + transcribeError + ); + + // Update memo status to error + await this.updateMemoProcessingStatus(authClient, memoId, 'transcription', 'error', { + reason: `Fast transcription error: ${transcribeError.message}`, + route: 'fast', + estimatedCost: cost, + }); + + throw transcribeError; + } + } else { + // Route to audio microservice batch transcription + console.log(`[uploadAndProcessAudio] Calling audio microservice batch transcription...`); + + try { + const batchRequestBody = { + audioPath, + memoId, + userId, + spaceId, + recordingLanguages, + }; + + const response = await fetch(`${audioServiceUrl}/audio/transcribe-from-storage`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: JSON.stringify(batchRequestBody), + }); + + if (!response.ok) { + const errorText = await response.text(); + console.error(`[uploadAndProcessAudio] Batch transcription service error:`, errorText); + + // Update memo status to error + await this.updateMemoProcessingStatus(authClient, 
memoId, 'transcription', 'error', { + reason: `Batch transcription failed: ${errorText}`, + route: 'batch', + estimatedCost: cost, + }); + + throw new BadRequestException(`Batch transcription failed: ${errorText}`); + } else { + // Parse response to get jobId + const batchResponse = await response.json(); + const jobId = batchResponse.jobId; + + if (jobId) { + console.log( + `[uploadAndProcessAudio] Batch transcription started with jobId: ${jobId}` + ); + + // Update memo with jobId and processing status + await this.updateMemoProcessingStatus( + authClient, + memoId, + 'transcription', + 'processing', + { + jobId, + route: 'batch', + batchTranscription: true, + estimatedCost: cost, + } + ); + } else { + console.warn( + `[uploadAndProcessAudio] Batch service response missing jobId:`, + batchResponse + ); + + await this.updateMemoProcessingStatus(authClient, memoId, 'transcription', 'error', { + reason: 'Batch service response missing jobId', + route: 'batch', + estimatedCost: cost, + }); + } + } + } catch (batchError) { + console.error(`[uploadAndProcessAudio] Error calling batch transcription:`, batchError); + + // Update memo status to error + await this.updateMemoProcessingStatus(authClient, memoId, 'transcription', 'error', { + reason: `Batch transcription error: ${batchError.message}`, + route: 'batch', + estimatedCost: cost, + }); + + throw batchError; + } + } + + return { + memoId, + filePath: audioPath, + processingRoute: route, + message: `Audio uploaded and ${route} transcription initiated`, + estimatedCost: cost, + }; + } catch (error) { + console.error(`[uploadAndProcessAudio] Error uploading and processing audio:`, error); + throw error; + } + } + + /** + * Updates memo processing status in metadata + */ + private async updateMemoProcessingStatus( + client: any, + memoId: string, + processName: string, + status: 'processing' | 'completed' | 'completed_no_transcript' | 'error', + details?: any + ): Promise { + try { + const timestamp = new 
Date().toISOString(); + + // Get current metadata + const { data: currentMemo, error: fetchError } = await client + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + + if (fetchError) { + console.error(`Error fetching memo metadata: ${fetchError.message}`); + return; + } + + const currentMetadata = currentMemo?.metadata || {}; + const newMetadata = { + ...currentMetadata, + processing: { + ...(currentMetadata.processing || {}), + [processName]: { + status, + timestamp, + ...details, + }, + }, + }; + + const { error: updateError } = await client + .from('memos') + .update({ metadata: newMetadata }) + .eq('id', memoId); + + if (updateError) { + console.error(`Error updating memo processing status: ${updateError.message}`); + } else { + console.log(`Updated memo ${memoId} ${processName} status to ${status}`); + } + } catch (error) { + console.error(`Error in updateMemoProcessingStatus:`, error); + } + } + + /** + * Updates memo with batch transcription jobId + */ + async updateMemoWithJobId( + memoId: string, + jobId: string, + token: string, + userSelectedLanguages?: string[] + ): Promise { + try { + const authClient = createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get current metadata + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + + if (fetchError) { + console.error(`Error fetching memo for jobId update: ${fetchError.message}`); + return; + } + + const currentMetadata = currentMemo?.metadata || {}; + const newMetadata = { + ...currentMetadata, + processing: { + ...(currentMetadata.processing || {}), + transcription: { + ...(currentMetadata.processing?.transcription || {}), + jobId, + status: 'processing', + timestamp: new Date().toISOString(), + route: 'batch', + batchTranscription: true, + userSelectedLanguages: userSelectedLanguages || [], + }, + }, + }; + + const { 
error: updateError } = await authClient + .from('memos') + .update({ metadata: newMetadata }) + .eq('id', memoId); + + if (updateError) { + console.error(`Error updating memo with jobId: ${updateError.message}`); + } else { + console.log(`Successfully updated memo ${memoId} with jobId ${jobId}`); + } + } catch (error) { + console.error(`Error in updateMemoWithJobId:`, error); + } + } + + /** + * Update memo transcription status in metadata + */ + async updateMemoTranscriptionStatus( + memoId: string, + status: 'pending' | 'processing' | 'completed' | 'completed_no_transcript' | 'failed', + token: string, + additionalData?: any + ): Promise { + try { + const authClient = createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get current metadata + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + + if (fetchError) { + console.error(`Error fetching memo for status update: ${fetchError.message}`); + return; + } + + const currentMetadata = currentMemo?.metadata || {}; + const newMetadata = { + ...currentMetadata, + transcription_status: status, + transcription_updated_at: new Date().toISOString(), + ...(additionalData && { transcription_data: additionalData }), + }; + + const { error: updateError } = await authClient + .from('memos') + .update({ metadata: newMetadata }) + .eq('id', memoId); + + if (updateError) { + console.error(`Error updating memo transcription status: ${updateError.message}`); + } else { + console.log(`Successfully updated memo ${memoId} transcription status to ${status}`); + } + } catch (error) { + console.error(`Error in updateMemoTranscriptionStatus:`, error); + } + } + + /** + * Updates batch transcription metadata using memo ID (simpler and more reliable) + */ + async updateBatchMetadataByMemoId( + memoId: string, + jobId: string, + batchTranscription: boolean, + token: string, + 
userSelectedLanguages?: string[], + userId?: string + ): Promise<{ success: boolean; memoId?: string; jobId?: string; message: string }> { + try { + // When using service auth (token is null), we need to validate ownership + const isServiceAuth = !token; + + // Use service role client for this operation + const serviceClient = isServiceAuth + ? createClient(this.memoroUrl, this.memoroServiceKey) + : createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get current metadata by memo ID directly + const { data: memo, error: fetchError } = await serviceClient + .from('memos') + .select('id, metadata, user_id') + .eq('id', memoId) + .single(); + + if (fetchError) { + throw new Error(`Failed to find memo: ${fetchError.message}`); + } + + if (!memo) { + throw new Error(`No memo found with ID ${memoId}`); + } + + // Validate ownership when using service auth + if (isServiceAuth && userId && memo.user_id !== userId) { + console.error( + `[updateBatchMetadataByMemoId] Ownership validation failed: memo user_id=${memo.user_id}, provided userId=${userId}` + ); + throw new Error(`Unauthorized: User ${userId} does not own memo ${memoId}`); + } + + // Update metadata with batch job information + const currentMetadata = memo.metadata || {}; + const updatedMetadata = { + ...currentMetadata, + processing: { + ...(currentMetadata.processing || {}), + transcription: { + ...(currentMetadata.processing?.transcription || {}), + jobId, + batchTranscription, + batchJobCreated: new Date().toISOString(), + status: 'processing', + userSelectedLanguages: userSelectedLanguages || [], + }, + }, + }; + + const { error: updateError } = await serviceClient + .from('memos') + .update({ + metadata: updatedMetadata, + updated_at: new Date().toISOString(), + }) + .eq('id', memoId); + + if (updateError) { + throw new Error(`Failed to update memo metadata: ${updateError.message}`); + } + + console.log(`Updated batch metadata for 
memo ${memoId}, jobId: ${jobId}`); + + return { + success: true, + memoId, + jobId, + message: 'Batch metadata updated successfully', + }; + } catch (error) { + console.error('Error updating batch metadata by memo ID:', error); + throw new Error(`Failed to update batch metadata: ${error.message}`); + } + } + + /** + * Get memo for reprocessing - validates ownership and gets space association + * @param userId - User ID making the request + * @param memoId - Memo ID to reprocess + * @param token - Authentication token + * @returns Memo data with space information if valid, null otherwise + */ + async getMemoForReprocessing(userId: string, memoId: string, token: string): Promise { + try { + console.log( + `[getMemoForReprocessing] Getting memo ${memoId} for reprocessing by user ${userId}` + ); + + // Create authenticated client + const authClient = createClient(this.memoroUrl, this.memoroKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // First try to get memo directly if owned by user + const { data: memo, error: memoError } = await authClient + .from('memos') + .select( + ` + id, + user_id, + metadata, + source, + title, + created_at, + memo_spaces(space_id) + ` + ) + .eq('id', memoId) + .eq('user_id', userId) + .maybeSingle(); + + if (!memoError && memo) { + // Extract space_id from the joined result + const spaceId = memo.memo_spaces?.[0]?.space_id || null; + console.log( + `[getMemoForReprocessing] Found memo ${memoId} owned by user, space: ${spaceId}` + ); + return { ...memo, space_id: spaceId }; + } + + // If not directly owned, check if user has access through a space + const { data: spaceMemo, error: spaceError } = await authClient + .from('memo_spaces') + .select( + ` + space_id, + memos!inner( + id, + user_id, + metadata, + source, + title, + created_at + ), + memoro_spaces!inner( + id, + memoro_space_members!inner( + user_id, + role + ) + ) + ` + ) + .eq('memo_id', memoId) + .eq('memoro_spaces.memoro_space_members.user_id', 
userId) + .maybeSingle(); + + if (!spaceError && spaceMemo) { + const memoData = spaceMemo.memos; + console.log( + `[getMemoForReprocessing] Found memo ${memoId} through space ${spaceMemo.space_id}` + ); + return { ...memoData, space_id: spaceMemo.space_id }; + } + + console.warn( + `[getMemoForReprocessing] Memo ${memoId} not found or access denied for user ${userId}` + ); + return null; + } catch (error) { + console.error(`[getMemoForReprocessing] Error getting memo ${memoId}:`, error); + return null; + } + } + + /** + * Creates memo from pre-uploaded file (direct upload scenario) + */ + async createMemoFromUploadedFile( + userId: string, + filePath: string, + duration: number, + spaceId?: string, + blueprintId?: string, + memoId?: string, + token?: string, + recordingStartedAt?: string, + location?: any, + mediaType?: 'audio' | 'video', + videoMetadata?: any + ): Promise<{ memoId: string; audioPath: string; memo: any }> { + try { + const authClient = createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + const generatedMemoId = memoId || uuidv4(); + const currentTimestamp = new Date().toISOString(); + + // Use recording start time if provided, otherwise use current time + const createdAtTimestamp = recordingStartedAt || currentTimestamp; + + console.log( + `[createMemoFromUploadedFile] Creating/updating memo ${generatedMemoId} for pre-uploaded ${mediaType || 'audio'} file: ${filePath}` + ); + if (recordingStartedAt) { + console.log( + `[createMemoFromUploadedFile] Using provided recording start time: ${recordingStartedAt}` + ); + } + if (mediaType === 'video' && videoMetadata) { + console.log( + `[createMemoFromUploadedFile] Video details: ${videoMetadata.width}x${videoMetadata.height}, ${videoMetadata.fps}fps, codec: ${videoMetadata.videoCodec}` + ); + } + + // Create or update memo record using UPSERT pattern + const memoData: any = { + id: generatedMemoId, + user_id: userId, + source: { + 
audio_path: filePath, + file_path: filePath, // Also store as file_path for clarity + duration: duration, + media_type: mediaType || 'audio', + ...(mediaType === 'video' && + videoMetadata && { + video_metadata: { + width: videoMetadata.width, + height: videoMetadata.height, + fps: videoMetadata.fps, + video_codec: videoMetadata.videoCodec, + audio_codec: videoMetadata.audioCodec, + audio_channels: videoMetadata.audioChannels, + audio_sample_rate: videoMetadata.audioSampleRate, + file_size: videoMetadata.fileSize, + bitrate: videoMetadata.bitrate, + has_audio_track: videoMetadata.hasAudioTrack, + }, + }), + }, + metadata: { + processing: { + transcription: { + status: 'pending', + timestamp: currentTimestamp, + route: duration > 6900 ? 'batch' : 'fast', // 1h55m threshold + media_type: mediaType || 'audio', + }, + }, + upload: { + method: 'direct_upload', + timestamp: currentTimestamp, + media_type: mediaType || 'audio', + }, + // Store the blueprint_id to control which blueprint processing runs + blueprint_id: blueprintId || null, + // Store the recording start time in metadata for frontend use + ...(recordingStartedAt && { recordingStartedAt }), + // Store address information in metadata if available + ...(location?.address && { address: location.address }), + }, + created_at: createdAtTimestamp, + updated_at: currentTimestamp, + }; + + // Add location coordinates to PostGIS column if provided + if (location && location.latitude && location.longitude) { + // Use SRID 4326 (WGS84) for GPS coordinates + memoData.location = `POINT(${location.longitude} ${location.latitude})`; + } + + const { error: upsertError } = await authClient.from('memos').upsert(memoData, { + onConflict: 'id', + ignoreDuplicates: false, + }); + + if (upsertError) { + throw new Error(`Failed to create memo: ${upsertError.message}`); + } + + // Link memo to space if spaceId provided (using upsert to handle retries) + if (spaceId) { + try { + console.log( + `[createMemoFromUploadedFile] Linking 
memo ${generatedMemoId} to space ${spaceId}` + ); + const { error: linkError } = await authClient.from('memo_spaces').upsert( + { + memo_id: generatedMemoId, + space_id: spaceId, + created_at: createdAtTimestamp, + }, + { + onConflict: 'memo_id,space_id', + ignoreDuplicates: true, // Skip if link already exists + } + ); + + if (linkError) { + console.error( + `[createMemoFromUploadedFile] Failed to link memo to space: ${linkError.message}` + ); + } + } catch (linkError) { + console.error(`[createMemoFromUploadedFile] Error linking memo to space:`, linkError); + } + } + + console.log( + `[createMemoFromUploadedFile] Successfully created/updated memo ${generatedMemoId}` + ); + + // Fetch and return the complete memo object so the client has immediate access to all state + const { data: createdMemo, error: fetchError } = await authClient + .from('memos') + .select('*') + .eq('id', generatedMemoId) + .single(); + + if (fetchError) { + console.error(`[createMemoFromUploadedFile] Failed to fetch created memo:`, fetchError); + // Still return basic info if fetch fails + return { + memoId: generatedMemoId, + audioPath: filePath, + memo: null, + }; + } + + return { + memo: createdMemo, + memoId: generatedMemoId, + audioPath: filePath, + }; + } catch (error) { + console.error(`[createMemoFromUploadedFile] Error:`, error); + throw error; + } + } + + /** + * Handles transcription completion callback from audio microservice + */ + async handleTranscriptionCompleted( + memoId: string, + userId: string, + transcriptionResult?: any, + route?: 'fast' | 'batch', + success?: boolean, + error?: string, + token?: string + ): Promise<{ success: boolean; message: string }> { + try { + console.log( + `[handleTranscriptionCompleted] Processing callback for memo ${memoId}, success: ${success}, route: ${route}` + ); + + if (transcriptionResult) { + console.log( + `[handleTranscriptionCompleted] DEBUG - Text length: ${transcriptionResult.text?.length || 0}` + ); + } else { + 
console.log(`[handleTranscriptionCompleted] DEBUG - transcriptionResult is null/undefined`); + } + + // When using service auth (token is null), we need to validate ownership + const isServiceAuth = !token; + + // Create client with appropriate auth + const authClient = isServiceAuth + ? createClient(this.memoroUrl, this.memoroServiceKey) + : createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + if (success && transcriptionResult) { + // 1. Update memo with transcription results + console.log( + `[handleTranscriptionCompleted] Updating memo ${memoId} with transcription results` + ); + + // Get current memo metadata to preserve existing data + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('metadata, source, user_id') + .eq('id', memoId) + .single(); + + if (fetchError) { + console.error(`[handleTranscriptionCompleted] Error fetching memo:`, fetchError); + throw new Error(`Failed to fetch memo: ${fetchError.message}`); + } + + // Validate ownership when using service auth + if (isServiceAuth && currentMemo?.user_id !== userId) { + console.error( + `[handleTranscriptionCompleted] Ownership validation failed: memo user_id=${currentMemo?.user_id}, provided userId=${userId}` + ); + throw new Error(`Unauthorized: User ${userId} does not own memo ${memoId}`); + } + + // Calculate actual duration for credit consumption + const audioDurationSeconds = + currentMemo?.source?.duration || + transcriptionResult.estimatedDuration || + Math.ceil((transcriptionResult.text?.length / 150) * 60) || + 30; // Fallback estimation + + const durationMinutes = Math.ceil(audioDurationSeconds / 60); + const actualCost = durationMinutes * this.COST_PER_MINUTE; + + // Check if transcript is empty or too short + const transcriptText = transcriptionResult.text?.trim() || ''; + const isEmptyTranscript = transcriptText.length === 0 || transcriptText.length < 5; + + // Update memo 
source with transcription data (transcript moved to separate column) + // IMPORTANT: Preserve the audio path from the original source + // Handle both 'path' and 'audio_path' field names for compatibility + const audioPath = currentMemo.source?.audio_path || currentMemo.source?.path; + const updatedSource = this.safeSourceMerge(currentMemo.source, { + // Preserved: audio path and original metadata from existing source + + audio_path: audioPath, // Standard field name + type: currentMemo.source?.type || 'audio', + format: currentMemo.source?.format, + duration: currentMemo.source?.duration, + original_filename: currentMemo.source?.original_filename, + // New transcription data + primary_language: transcriptionResult.primary_language, + languages: transcriptionResult.languages, + utterances: transcriptionResult.utterances, + speakers: transcriptionResult.speakers, + // Removed: transcript (moved to separate column) + // Removed: speakerMap (computed client-side) + }); + + // Update memo metadata to mark transcription as completed + const updatedMetadata = { + ...(currentMemo.metadata || {}), + processing: { + ...(currentMemo.metadata?.processing || {}), + transcription: { + status: isEmptyTranscript ? 'completed_no_transcript' : 'completed', + timestamp: new Date().toISOString(), + route, + actualCost, + durationMinutes, + textLength: transcriptionResult.text?.length || 0, + speakerCount: transcriptionResult.speakers + ? 
Object.keys(transcriptionResult.speakers).length + : 0, + }, + }, + }; + + // If transcript is empty, also mark headline as completed with appropriate title + if (isEmptyTranscript) { + updatedMetadata.processing.headline_and_intro = { + status: 'completed_no_transcript', + timestamp: new Date().toISOString(), + details: { + headline: 'Aufnahme ohne Sprache', + intro: 'Diese Aufnahme enthält keinen erkennbaren gesprochenen Text.', + language: transcriptionResult.primary_language || 'de-DE', + }, + triggered_by: 'empty_transcript_handler', + }; + } + + // Prepare update data + const updateData: any = { + source: updatedSource, + transcript: transcriptionResult.text, // Store transcript in dedicated column + metadata: updatedMetadata, + updated_at: new Date().toISOString(), + }; + + // If transcript is empty, also set the title directly + if (isEmptyTranscript) { + updateData.title = 'Aufnahme ohne Sprache'; + updateData.style = { + intro: 'Diese Aufnahme enthält keinen erkennbaren gesprochenen Text.', + }; + + // Log audio path preservation for debugging + console.log( + `[handleTranscriptionCompleted] Empty transcript - preserving audio path: ${audioPath}` + ); + console.log( + `[handleTranscriptionCompleted] Source has audio_path: ${!!updatedSource.audio_path}, legacy path: ${!!updatedSource.path}` + ); + } + + // Validate source structure before database update + if (!this.validateSourceStructure(updateData.source)) { + console.error( + `[handleTranscriptionCompleted] Invalid source structure detected for memo ${memoId}` + ); + console.error('Source data:', JSON.stringify(updateData.source, null, 2)); + } + + // Update the memo in database + const { error: updateError } = await authClient + .from('memos') + .update(updateData) + .eq('id', memoId); + + if (updateError) { + console.error(`[handleTranscriptionCompleted] Error updating memo:`, updateError); + throw new Error(`Failed to update memo: ${updateError.message}`); + } + + console.log( + 
`[handleTranscriptionCompleted] Successfully updated memo ${memoId} with transcription results` + ); + + // 2. Consume credits for successful transcription using centralized service + try { + console.log( + `[handleTranscriptionCompleted] Consuming ${actualCost} credits for ${durationMinutes} minutes of transcription` + ); + + // Extract spaceId from memo metadata if available + const spaceId = currentMemo?.metadata?.spaceId; + + const creditResult = await this.creditConsumptionService.consumeTranscriptionCredits( + userId, + durationMinutes, + actualCost, + memoId, + route, + spaceId, + token + ); + + if (creditResult.success) { + console.log( + `[handleTranscriptionCompleted] Successfully consumed ${creditResult.creditsConsumed} ${creditResult.creditType} credits` + ); + } else { + console.error( + `[handleTranscriptionCompleted] Credit consumption failed: ${creditResult.error || creditResult.message}` + ); + // Don't fail the entire process if credit consumption fails + } + } catch (creditError) { + console.error(`[handleTranscriptionCompleted] Error consuming credits:`, creditError); + // Don't fail the entire process if credit consumption fails + } + + // 3. 
Trigger headline generation for non-empty transcripts + if (!isEmptyTranscript) { + this.headlineService.processHeadlineForMemo(memoId).catch((headlineError) => { + console.error( + `[handleTranscriptionCompleted] Headline generation failed for memo ${memoId}:`, + headlineError + ); + }); + console.log( + `[handleTranscriptionCompleted] Headline generation triggered for memo ${memoId}` + ); + } + + return { + success: true, + message: `Transcription completed successfully for memo ${memoId}`, + }; + } else { + // Handle transcription failure + console.log( + `[handleTranscriptionCompleted] Handling transcription failure for memo ${memoId}: ${error}` + ); + + // Update memo with error status + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + + if (!fetchError && currentMemo) { + const updatedMetadata = { + ...(currentMemo.metadata || {}), + processing: { + ...(currentMemo.metadata?.processing || {}), + transcription: { + status: 'error', + timestamp: new Date().toISOString(), + route, + error: error || 'Transcription failed', + retryable: true, + }, + }, + }; + + await authClient + .from('memos') + .update({ + metadata: updatedMetadata, + updated_at: new Date().toISOString(), + }) + .eq('id', memoId); + } + + return { + success: false, + message: `Transcription failed for memo ${memoId}: ${error}`, + }; + } + } catch (callbackError) { + console.error(`[handleTranscriptionCompleted] Error in callback handler:`, callbackError); + throw new Error(`Transcription callback failed: ${callbackError.message}`); + } + } + + /** + * Creates a Supabase client for file operations + */ + createSupabaseClient(token?: string) { + return createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + } + + /** + * Validates that a memo exists and belongs to the user for append operations + */ + async validateMemoForAppend(userId: 
string, memoId: string, token: string): Promise { + try { + const authClient = createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + const { data: memo, error } = await authClient + .from('memos') + .select('id, user_id, source, metadata') + .eq('id', memoId) + .single(); + + if (error || !memo) { + console.error(`Memo not found: ${memoId}`, error); + return null; + } + + // Check if user has access (owner or through space) + if (memo.user_id !== userId) { + // Check if user has access through space + const spaceId = memo.metadata?.spaceId; + if (spaceId) { + // Check if user is a member of the space + const { data: spaceMember, error: spaceError } = await authClient + .from('space_members') + .select('id') + .eq('space_id', spaceId) + .eq('user_id', userId) + .single(); + + if (spaceError || !spaceMember) { + console.error(`User ${userId} does not have access to memo ${memoId}`); + return null; + } + } else { + console.error(`User ${userId} does not own memo ${memoId}`); + return null; + } + } + + return memo; + } catch (error) { + console.error(`Error validating memo for append:`, error); + throw error; + } + } + + /** + * Updates the status of an append transcription in additional_recordings + */ + async updateAppendTranscriptionStatus( + memoId: string, + recordingIndex: number | undefined, + status: 'processing' | 'completed' | 'error', + token: string, + additionalData?: any + ): Promise { + try { + const authClient = createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get current memo data + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('source') + .eq('id', memoId) + .single(); + + if (fetchError || !currentMemo) { + console.error(`Error fetching memo for append status update: ${fetchError?.message}`); + return; + } + + const source = 
this.safeSourceMerge(currentMemo.source || {}, {}); + const additionalRecordings = source.additional_recordings || []; + + let targetIndex: number; + + // If a specific recordingIndex is provided, use it + if (recordingIndex !== undefined) { + targetIndex = recordingIndex; + } else if (status === 'processing') { + // For new processing status, always create a new recording + additionalRecordings.push({ + status: 'processing', + timestamp: new Date().toISOString(), + ...additionalData, + }); + targetIndex = additionalRecordings.length - 1; + } else { + // For other status updates, find the last recording that's in processing state + targetIndex = additionalRecordings.findIndex((rec: any) => rec.status === 'processing'); + if (targetIndex === -1) { + // If no processing recording found, this is an error case + console.error(`No processing recording found to update with status: ${status}`); + return; + } + } + + // Update the recording at the target index + if (targetIndex >= 0 && targetIndex < additionalRecordings.length) { + additionalRecordings[targetIndex] = { + ...additionalRecordings[targetIndex], + status, + updated_at: new Date().toISOString(), + ...additionalData, + }; + } + + // Prepare update with safe source merge + const updatedSource = this.safeSourceMerge(source, { + additional_recordings: additionalRecordings, + }); + + // Validate source structure before database update + if (!this.validateSourceStructure(updatedSource)) { + console.error( + `[updateAppendTranscriptionStatus] Invalid source structure detected for memo ${memoId}` + ); + console.error('Source data:', JSON.stringify(updatedSource, null, 2)); + } + + // Update the memo + const { error: updateError } = await authClient + .from('memos') + .update({ + source: updatedSource, + updated_at: new Date().toISOString(), + }) + .eq('id', memoId); + + if (updateError) { + console.error(`Error updating append transcription status: ${updateError.message}`); + } else { + console.log(`Updated append 
transcription status for memo ${memoId} to ${status}`); + } + } catch (error) { + console.error(`Error in updateAppendTranscriptionStatus:`, error); + } + } + + /** + * Handles append transcription completion and updates additional_recordings + */ + async handleAppendTranscriptionCompleted( + memoId: string, + userId: string, + transcriptionResult: any, + route: 'fast' | 'batch', + success: boolean, + error: string | null, + token: string + ): Promise<void> { + try { + // When using service auth (token is null), we need to validate ownership + const isServiceAuth = !token; + + // Create client with appropriate auth + const authClient = isServiceAuth + ? createClient(this.memoroUrl, this.memoroServiceKey) + : createClient(this.memoroUrl, this.memoroServiceKey, { + global: { headers: { Authorization: `Bearer ${token}` } }, + }); + + // Get current memo data + const { data: currentMemo, error: fetchError } = await authClient + .from('memos') + .select('source, metadata, user_id') + .eq('id', memoId) + .single(); + + if (fetchError || !currentMemo) { + console.error(`Error fetching memo for append completion: ${fetchError?.message}`); + throw new Error(`Failed to fetch memo: ${fetchError?.message}`); + } + + // Validate ownership when using service auth + if (isServiceAuth && currentMemo.user_id !== userId) { + console.error( + `[handleAppendTranscriptionCompleted] Ownership validation failed: memo user_id=${currentMemo.user_id}, provided userId=${userId}` + ); + throw new Error(`Unauthorized: User ${userId} does not own memo ${memoId}`); + } + + const source = this.safeSourceMerge(currentMemo.source || {}, {}); + const additionalRecordings = source.additional_recordings || []; + + if (success && transcriptionResult) { + // Find the recording that's currently processing + const targetIndex = additionalRecordings.findIndex( + (rec: any) => rec.status === 'processing' + ); + + if (targetIndex === -1) { + console.error(`No processing recording found for memo ${memoId}`); + 
throw new Error('No processing recording found to update'); + } + + // Prefix speaker IDs to avoid conflicts between recordings + const prefixedSpeakerData = this.prefixSpeakerIds( + transcriptionResult.speakers, + transcriptionResult.speakerMap, + transcriptionResult.utterances, + targetIndex + ); + + // Update the recording with transcription results + additionalRecordings[targetIndex] = { + ...additionalRecordings[targetIndex], + transcript: transcriptionResult.text || '', + languages: transcriptionResult.languages || [], + primary_language: transcriptionResult.primary_language || 'de-DE', + speakers: prefixedSpeakerData.speakers, + speakerMap: prefixedSpeakerData.speakerMap, + utterances: prefixedSpeakerData.utterances, + status: 'completed', + updated_at: new Date().toISOString(), + }; + + // Prepare update with safe source merge + const updatedSource = this.safeSourceMerge(source, { + additional_recordings: additionalRecordings, + }); + + // Validate source structure before database update + if (!this.validateSourceStructure(updatedSource)) { + console.error( + `[handleAppendTranscriptionCompleted] Invalid source structure detected for memo ${memoId}` + ); + console.error('Source data:', JSON.stringify(updatedSource, null, 2)); + } + + // Update the memo + const { error: updateError } = await authClient + .from('memos') + .update({ + source: updatedSource, + updated_at: new Date().toISOString(), + }) + .eq('id', memoId); + + if (updateError) { + console.error(`Error updating memo with append transcription: ${updateError.message}`); + throw new Error(`Failed to update memo: ${updateError.message}`); + } + + console.log( + `Successfully appended transcription to memo ${memoId} at index ${targetIndex}` + ); + + // Consume credits for successful transcription + try { + const duration = additionalRecordings[targetIndex].duration || 60; // Default to 1 minute if not specified + const durationMinutes = Math.ceil(duration / 60); + const actualCost = 
calculateTranscriptionCost(duration); + const spaceId = currentMemo.metadata?.spaceId; + + const creditResult = await this.creditConsumptionService.consumeTranscriptionCredits( + userId, + durationMinutes, + actualCost, + memoId, + route, + spaceId, + token + ); + + if (creditResult.success) { + console.log( + `Successfully consumed ${creditResult.creditsConsumed} credits for append transcription` + ); + } + } catch (creditError) { + console.error(`Error consuming credits for append transcription:`, creditError); + // Don't fail the entire process if credit consumption fails + } + } else { + // Handle error case - find the processing recording + const targetIndex = additionalRecordings.findIndex( + (rec: any) => rec.status === 'processing' + ); + if (targetIndex !== -1) { + await this.updateAppendTranscriptionStatus(memoId, targetIndex, 'error', token, { + error: error || 'Transcription failed', + route, + }); + } + } + } catch (error) { + console.error(`Error in handleAppendTranscriptionCompleted:`, error); + throw error; + } + } + + /** + * Prefixes speaker IDs with recording index to avoid conflicts between recordings + * @param speakers - Object mapping speaker IDs to names + * @param speakerMap - Object mapping utterance indices to speaker IDs + * @param utterances - Array of utterances with speaker IDs + * @param recordingIndex - Index of the recording to use as prefix + * @returns Object with prefixed speaker data + */ + private prefixSpeakerIds( + speakers: Record<string, string> | null, + speakerMap: Record<string, string> | null, + utterances: Array<{ + speakerId: string; + text: string; + offset: number; + duration: number; + }> | null, + recordingIndex: number + ): { + speakers: Record<string, string> | null; + speakerMap: Record<string, string> | null; + utterances: Array<{ + speakerId: string; + text: string; + offset: number; + duration: number; + }> | null; + } { + const prefix = `rec${recordingIndex}_`; + + // Prefix speakers object + const prefixedSpeakers = speakers + ? 
Object.entries(speakers).reduce( + (acc, [speakerId, speakerName]) => { + acc[`${prefix}${speakerId}`] = speakerName; + return acc; + }, + {} as Record<string, string> + ) + : null; + + // Prefix speakerMap + const prefixedSpeakerMap = speakerMap + ? Object.entries(speakerMap).reduce( + (acc, [utteranceIndex, speakerId]) => { + acc[utteranceIndex] = `${prefix}${speakerId}`; + return acc; + }, + {} as Record<string, string> + ) + : null; + + // Prefix utterances + const prefixedUtterances = utterances + ? utterances.map((utterance) => ({ + ...utterance, + speakerId: `${prefix}${utterance.speakerId}`, + })) + : null; + + return { + speakers: prefixedSpeakers, + speakerMap: prefixedSpeakerMap, + utterances: prefixedUtterances, + }; + } + + /** + * Safely merges source objects to prevent nested object serialization issues + * Ensures proper structure and prevents "obj obj" patterns in JSONB fields + */ + private safeSourceMerge( + existingSource: any, + updates: Partial<{ + type: string; + audio_path: string; + format: string; + duration: number; + original_filename: string; + primary_language: string; + languages: string[]; + utterances: any[]; + speakers: Record<string, string>; + additional_recordings: any[]; + }> + ): any { + // Start with a clean base object or existing source + const baseSource = + existingSource && typeof existingSource === 'object' ? 
{ ...existingSource } : {}; + + // Remove any nested source properties to prevent double nesting + if (baseSource.source) { + console.warn('[safeSourceMerge] Detected nested source property, flattening structure'); + const nestedSource = baseSource.source; + delete baseSource.source; + // Merge nested properties into base + Object.assign(baseSource, nestedSource); + } + + // Safely merge updates + const mergedSource = { + ...baseSource, + ...updates, + }; + + // Validate critical properties aren't objects when they should be primitives + if (mergedSource.type && typeof mergedSource.type === 'object') { + console.error('[safeSourceMerge] Invalid type property detected:', mergedSource.type); + mergedSource.type = 'audio'; // Default fallback + } + + if (mergedSource.audio_path && typeof mergedSource.audio_path === 'object') { + console.error( + '[safeSourceMerge] Invalid audio_path property detected:', + mergedSource.audio_path + ); + mergedSource.audio_path = String(mergedSource.audio_path); // Try to convert to string + } + + // Handle legacy path field conversion + if (mergedSource.path && !mergedSource.audio_path) { + mergedSource.audio_path = mergedSource.path; + delete mergedSource.path; + } + + // Log the final structure for debugging + console.log('[safeSourceMerge] Final source structure:', { + hasType: !!mergedSource.type, + hasAudioPath: !!mergedSource.audio_path, + hasLegacyPath: !!mergedSource.path, + hasSpeakers: !!mergedSource.speakers, + hasUtterances: !!mergedSource.utterances, + hasAdditionalRecordings: !!mergedSource.additional_recordings, + additionalRecordingsCount: mergedSource.additional_recordings?.length || 0, + }); + + return mergedSource; + } + + /** + * Validates source object structure before database operations + */ + private validateSourceStructure(source: any): boolean { + if (!source || typeof source !== 'object') { + return false; + } + + // Check for nested source properties (indicates corruption) + if (source.source) { + 
console.error('[validateSourceStructure] Nested source property detected'); + return false; + } + + // Validate expected primitive types + const primitiveFields = ['type', 'path', 'format', 'original_filename', 'primary_language']; + for (const field of primitiveFields) { + if (source[field] && typeof source[field] === 'object') { + console.error(`[validateSourceStructure] Field ${field} should not be an object`); + return false; + } + } + + return true; + } +} diff --git a/apps/memoro/apps/backend/src/memoro/question-memo.controller.ts b/apps/memoro/apps/backend/src/memoro/question-memo.controller.ts new file mode 100644 index 000000000..677aa10de --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/question-memo.controller.ts @@ -0,0 +1,96 @@ +import { Controller, Post, Body, UseGuards, BadRequestException } from '@nestjs/common'; +import { AuthGuard } from '../guards/auth.guard'; +import { User } from '../decorators/user.decorator'; +import { CreditConsumptionService } from '../credits/credit-consumption.service'; +import { OPERATION_COSTS } from '../credits/pricing.constants'; +import { InsufficientCreditsException } from '../errors/insufficient-credits.error'; +import { QuestionService } from '../ai/memory/question.service'; + +class QuestionMemoDto { + memo_id: string; + question: string; + spaceId?: string; +} + +@Controller('memoro/question-memo') +@UseGuards(AuthGuard) +export class QuestionMemoController { + constructor( + private readonly creditConsumptionService: CreditConsumptionService, + private readonly questionService: QuestionService + ) {} + + @Post() + async processQuestionMemo(@User() user: any, @Body() dto: QuestionMemoDto) { + console.log('QuestionMemoController - Request received:', { + memo_id: dto.memo_id, + question: dto.question?.substring(0, 50) + '...', + user_id: user.sub, + has_token: !!user.token, + }); + + if (!dto.memo_id || !dto.question?.trim()) { + throw new BadRequestException('memo_id and question are required'); + } + + 
// Extract token from request + const token = user.token; + const requiredCredits = OPERATION_COSTS.QUESTION_MEMO; + + console.log('QuestionMemoController - Starting credit check, required:', requiredCredits); + + try { + // Check and consume credits first using centralized service + console.log('QuestionMemoController - Calling creditConsumptionService...'); + const creditResult = await this.creditConsumptionService.consumeQuestionCredits( + user.sub, + dto.memo_id, + dto.question, + dto.spaceId, + token + ); + + if (!creditResult.success) { + throw new BadRequestException(creditResult.message || creditResult.error); + } + + console.log('QuestionMemoController - Credits consumed successfully:', creditResult); + + // Process question locally via QuestionService (replaces Supabase Edge Function) + console.log('QuestionMemoController - Processing question via QuestionService'); + + const result = await this.questionService.askQuestion(dto.memo_id, dto.question.trim()); + + console.log('QuestionMemoController - QuestionService result:', { + memoryId: result.memoryId, + }); + + return { + success: true, + memory_id: result.memoryId, + answer: result.answer, + question: result.question, + creditsConsumed: requiredCredits, + creditType: creditResult.creditType, + }; + } catch (error) { + console.error('QuestionMemoController - Error occurred:', error); + + if (error instanceof InsufficientCreditsException) { + throw error; // Let the exception propagate with 402 status + } + + if (error.message?.includes('insufficient credits')) { + // Fallback for any legacy insufficient credit errors + throw new InsufficientCreditsException({ + requiredCredits, + availableCredits: 0, + creditType: dto.spaceId ? 
'space' : 'user', + operation: 'question', + spaceId: dto.spaceId, + }); + } + throw new BadRequestException(error.message); + } + } +} diff --git a/apps/memoro/apps/backend/src/memoro/space-sync.controller.ts b/apps/memoro/apps/backend/src/memoro/space-sync.controller.ts new file mode 100644 index 000000000..51170aa7f --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/space-sync.controller.ts @@ -0,0 +1,77 @@ +import { + Controller, + Post, + Param, + Req, + UseGuards, + BadRequestException, + Logger, +} from '@nestjs/common'; +import { AuthGuard } from '../guards/auth.guard'; +import { User } from '../decorators/user.decorator'; +import { JwtPayload } from '../types/jwt-payload.interface'; +import { SyncSpaceMembersService } from './sync-space-members.service'; +import { SpaceSyncService } from '../spaces/space-sync.service'; + +@Controller('memoro/sync') +@UseGuards(AuthGuard) +export class SpaceSyncController { + private readonly logger = new Logger(SpaceSyncController.name); + + constructor( + private readonly syncSpaceMembersService: SyncSpaceMembersService, + private readonly spaceSyncService: SpaceSyncService + ) {} + + /** + * Synchronize members for a specific space + * This stores space membership information in Supabase to support RLS policies + */ + @Post('spaces/:id/members') + async syncSpaceMembers(@User() user: JwtPayload, @Param('id') spaceId: string, @Req() req) { + if (!spaceId) { + throw new BadRequestException('Space ID is required'); + } + + const token = req.token; + this.logger.log(`User ${user.sub} requested to sync members for space ${spaceId}`); + return this.syncSpaceMembersService.syncSpaceMembers(user.sub, spaceId, token); + } + + /** + * Synchronize all spaces the user has access to + * This stores space membership information in Supabase to support RLS policies + */ + @Post('spaces/all') + async syncAllSpaces(@User() user: JwtPayload, @Req() req) { + const token = req.token; + this.logger.log(`User ${user.sub} requested to sync 
all their spaces`); + return this.syncSpaceMembersService.syncAllUserSpaces(user.sub, token); + } + + /** + * Run the migration to create the space_members table and set up RLS policies + * This endpoint should be called once to set up the required database structure + */ + @Post('migration/setup') + async runMigration(@User() user: JwtPayload, @Req() req) { + const token = req.token; + this.logger.log(`User ${user.sub} requested to run space_members migration`); + + // Only allow admins to run this migration + // In a production environment, you might want to check if the user is an admin + // For now, we'll allow it in the development environment + + // Run the migration + const result = await this.spaceSyncService.runSpaceMembersMigration(); + + if (result.success) { + // If migration was successful, trigger a sync of all spaces for this user + await this.syncSpaceMembersService.syncAllUserSpaces(user.sub, token).catch((error) => { + this.logger.error(`Error syncing spaces after migration: ${error.message}`); + }); + } + + return result; + } +} diff --git a/apps/memoro/apps/backend/src/memoro/sync-space-members.service.ts b/apps/memoro/apps/backend/src/memoro/sync-space-members.service.ts new file mode 100644 index 000000000..d15b44852 --- /dev/null +++ b/apps/memoro/apps/backend/src/memoro/sync-space-members.service.ts @@ -0,0 +1,112 @@ +import { Injectable, Logger } from '@nestjs/common'; +import { MemoroService } from './memoro.service'; +import { SpaceSyncService } from '../spaces/space-sync.service'; + +/** + * Service for synchronizing space members from the core middleware to Supabase + * This handles the synchronization of space membership data to support RLS + */ +@Injectable() +export class SyncSpaceMembersService { + private readonly logger = new Logger(SyncSpaceMembersService.name); + + constructor( + private readonly memoroService: MemoroService, + private readonly spaceSyncService: SpaceSyncService + ) {} + + /** + * Sync members for a specific 
space + * @param userId ID of the requesting user + * @param spaceId ID of the space to sync + * @param token Auth token for API calls + */ + async syncSpaceMembers(userId: string, spaceId: string, token: string) { + try { + this.logger.log(`Syncing members for space ${spaceId}`); + + // Get the space details including members + const spaceDetails = await this.memoroService.getMemoroSpaceDetails(userId, spaceId, token); + + // Extract members from the space details + const members = []; + + if (spaceDetails?.space?.roles?.members) { + // Use memberId for the loop variable to avoid shadowing the userId parameter + for (const [memberId, memberInfo] of Object.entries(spaceDetails.space.roles.members)) { + // Type assertion for member info object which comes from the API + const typedMemberInfo = memberInfo as { role: string; added_by: string }; + + members.push({ + userId: memberId, + role: typedMemberInfo.role, + addedBy: typedMemberInfo.added_by, + }); + } + } + + this.logger.log(`Found ${members.length} members for space ${spaceId}`); + + // Sync all members to the space_members table + await this.spaceSyncService.syncAllSpaceMembers(spaceId, members); + + return { + success: true, + message: `Successfully synced ${members.length} members for space ${spaceId}`, + }; + } catch (error) { + this.logger.error(`Error syncing members for space ${spaceId}:`, error); + throw error; + } + } + + /** + * Sync all spaces the user has access to + * @param userId ID of the user + * @param token Auth token for API calls + */ + async syncAllUserSpaces(userId: string, token: string) { + try { + this.logger.log(`Syncing all spaces for user ${userId}`); + + // Get all spaces the user has access to + const spaces = await this.memoroService.getMemoroSpaces(userId, token); + + const results = []; + let successCount = 0; + let failCount = 0; + + // Sync each space + for (const space of spaces) { + try { + const result = await this.syncSpaceMembers(userId, space.id, token); + results.push({ + spaceId: space.id, + name: space.name, + success: true, + membersCount: 
result.message.match(/Successfully synced (\d+)/)?.[1] || 0, + }); + successCount++; + } catch (error) { + results.push({ + spaceId: space.id, + name: space.name, + success: false, + error: error.message, + }); + failCount++; + } + } + + return { + success: true, + spacesProcessed: spaces.length, + spacesSucceeded: successCount, + spacesFailed: failCount, + results, + }; + } catch (error) { + this.logger.error(`Error syncing all user spaces:`, error); + throw error; + } + } +} diff --git a/apps/memoro/apps/backend/src/migrations/create-space-members.sql b/apps/memoro/apps/backend/src/migrations/create-space-members.sql new file mode 100644 index 000000000..1125883ac --- /dev/null +++ b/apps/memoro/apps/backend/src/migrations/create-space-members.sql @@ -0,0 +1,51 @@ +-- Create the space_members table for synchronized space membership +CREATE TABLE IF NOT EXISTS space_members ( + id UUID PRIMARY KEY DEFAULT uuid_generate_v4(), + space_id UUID NOT NULL, + user_id UUID NOT NULL, + role TEXT NOT NULL, + added_at TIMESTAMP WITH TIME ZONE DEFAULT now(), + added_by UUID, + UNIQUE(space_id, user_id) +); + +-- Create indexes for better performance +CREATE INDEX IF NOT EXISTS idx_space_members_user_id ON space_members(user_id); +CREATE INDEX IF NOT EXISTS idx_space_members_space_id ON space_members(space_id); + +-- Enable RLS on the table +ALTER TABLE space_members ENABLE ROW LEVEL SECURITY; + +-- Create policies for space_members table +CREATE POLICY "Users can see space membership they are part of" +ON space_members FOR SELECT +USING ( + user_id = auth.uid() OR + space_id IN ( + SELECT space_id FROM space_members + WHERE user_id = auth.uid() + ) +); + +-- Update memo policies to allow access to memos in spaces user is member of +CREATE POLICY "Users can view memos in spaces they are members of" +ON memos FOR SELECT +USING ( + EXISTS ( + SELECT 1 FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = memos.id + AND sm.user_id = 
auth.uid() + ) +); + +-- Policy for memo_spaces table to allow viewing of memo-space relationships +CREATE POLICY "Users can see memo-space links for spaces they are members of" +ON memo_spaces FOR SELECT +USING ( + EXISTS ( + SELECT 1 FROM space_members + WHERE space_members.space_id = memo_spaces.space_id + AND space_members.user_id = auth.uid() + ) +); diff --git a/apps/memoro/apps/backend/src/scripts/init-space-members.ts b/apps/memoro/apps/backend/src/scripts/init-space-members.ts new file mode 100644 index 000000000..90e976564 --- /dev/null +++ b/apps/memoro/apps/backend/src/scripts/init-space-members.ts @@ -0,0 +1,253 @@ +/** + * Script to initialize the space_members table with existing space memberships + * + * This script: + * 1. Creates the space_members table if it doesn't exist + * 2. Synchronizes all existing spaces and their members + * + * Usage: + * npx ts-node src/scripts/init-space-members.ts + */ + +import { ConfigService } from '@nestjs/config'; +import { createClient } from '@supabase/supabase-js'; +import axios from 'axios'; +import * as dotenv from 'dotenv'; +import * as fs from 'fs'; +import * as path from 'path'; + +// Load environment variables +dotenv.config(); + +// Create a separate logger for the script +const logger = { + log: (message: string) => console.log(`[INFO] ${message}`), + error: (message: string, error?: any) => console.error(`[ERROR] ${message}`, error || ''), + warn: (message: string) => console.warn(`[WARN] ${message}`), + debug: (message: string) => console.debug(`[DEBUG] ${message}`), +}; + +// Configuration +const memoroUrl = process.env.MEMORO_SUPABASE_URL; +const memoroServiceKey = process.env.MEMORO_SUPABASE_SERVICE_KEY; +const middlewareUrl = process.env.MANA_CORE_URL || 'http://localhost:3000'; +const adminToken = process.env.ADMIN_TOKEN; // You'll need to provide this + +if (!memoroUrl || !memoroServiceKey) { + logger.error( + 'Missing required environment variables: MEMORO_SUPABASE_URL, 
MEMORO_SUPABASE_SERVICE_KEY' + ); + process.exit(1); +} + +if (!adminToken) { + logger.warn('No ADMIN_TOKEN provided - you will need to authenticate to access space data'); +} + +// Create Supabase client with service role +const supabase = createClient(memoroUrl, memoroServiceKey); + +/** + * Creates the space_members table and sets up RLS policies + */ +async function createSpaceMembersTable() { + logger.log('Checking if space_members table exists...'); + + try { + // Try to query the table to see if it exists + const { data, error } = await supabase.from('space_members').select('id').limit(1); + + if (error && error.code === '42P01') { + // Table doesn't exist, create it + logger.log('space_members table does not exist, creating...'); + + // Read the migration SQL from file + const migrationPath = path.join(__dirname, '..', 'migrations', 'create-space-members.sql'); + if (!fs.existsSync(migrationPath)) { + logger.error(`Migration file not found at ${migrationPath}`); + return false; + } + + const sql = fs.readFileSync(migrationPath, 'utf8'); + + // Execute the SQL + const { error: migrationError } = await supabase.rpc('pgmoon', { query: sql }); + + if (migrationError) { + logger.error('Error creating space_members table:', migrationError); + return false; + } + + logger.log('Successfully created space_members table and RLS policies'); + return true; + } else if (error) { + logger.error('Error checking if space_members table exists:', error); + return false; + } else { + logger.log('space_members table already exists'); + return true; + } + } catch (error) { + logger.error('Unexpected error creating space_members table:', error); + return false; + } +} + +/** + * Fetches all spaces from the middleware + */ +async function getAllSpaces() { + try { + logger.log('Fetching all spaces from middleware...'); + + const response = await axios.get(`${middlewareUrl}/spaces/all`, { + headers: { + Authorization: `Bearer ${adminToken}`, + 'Content-Type': 'application/json', + 
}, + }); + + if (response.data && response.data.spaces) { + logger.log(`Found ${response.data.spaces.length} spaces`); + return response.data.spaces; + } else { + logger.warn('No spaces found or unexpected response format'); + return []; + } + } catch (error) { + logger.error('Error fetching spaces:', error); + return []; + } +} + +/** + * Fetches space details including members + */ +async function getSpaceDetails(spaceId: string) { + try { + logger.log(`Fetching details for space ${spaceId}...`); + + const response = await axios.get(`${middlewareUrl}/spaces/${spaceId}`, { + headers: { + Authorization: `Bearer ${adminToken}`, + 'Content-Type': 'application/json', + }, + }); + + if (response.data && response.data.space) { + return response.data.space; + } else { + logger.warn(`No details found for space ${spaceId} or unexpected response format`); + return null; + } + } catch (error) { + logger.error(`Error fetching space details for ${spaceId}:`, error); + return null; + } +} + +/** + * Synchronizes members for a space + */ +async function syncSpaceMembers(spaceId: string, spaceDetails: any) { + try { + logger.log(`Syncing members for space ${spaceId}...`); + + if (!spaceDetails.roles || !spaceDetails.roles.members) { + logger.warn(`No members found for space ${spaceId}`); + return; + } + + const members = []; + + // Extract members from space details + for (const [userId, memberInfo] of Object.entries(spaceDetails.roles.members)) { + members.push({ + space_id: spaceId, + user_id: userId, + role: (memberInfo as any).role, + added_at: new Date(), + added_by: (memberInfo as any).added_by || userId, + }); + } + + logger.log(`Found ${members.length} members for space ${spaceId}`); + + // Clear existing members for this space + const { error: deleteError } = await supabase + .from('space_members') + .delete() + .eq('space_id', spaceId); + + if (deleteError) { + logger.error(`Error clearing existing members for space ${spaceId}:`, deleteError); + return; + } + + // 
Insert new members + if (members.length > 0) { + const { error: insertError } = await supabase.from('space_members').insert(members); + + if (insertError) { + logger.error(`Error inserting members for space ${spaceId}:`, insertError); + return; + } + } + + logger.log(`Successfully synced ${members.length} members for space ${spaceId}`); + } catch (error) { + logger.error(`Error syncing members for space ${spaceId}:`, error); + } +} + +/** + * Main function to run the script + */ +async function main() { + try { + logger.log('Starting space_members initialization...'); + + // Create the space_members table if it doesn't exist + const tableCreated = await createSpaceMembersTable(); + if (!tableCreated) { + logger.error('Failed to create space_members table, exiting'); + process.exit(1); + } + + // Get all spaces + const spaces = await getAllSpaces(); + if (spaces.length === 0) { + logger.warn('No spaces found, nothing to sync'); + process.exit(0); + } + + // Sync members for each space + let successCount = 0; + let failCount = 0; + + for (const space of spaces) { + try { + const spaceDetails = await getSpaceDetails(space.id); + if (spaceDetails) { + await syncSpaceMembers(space.id, spaceDetails); + successCount++; + } else { + logger.warn(`Skipping space ${space.id} due to missing details`); + failCount++; + } + } catch (error) { + logger.error(`Error processing space ${space.id}:`, error); + failCount++; + } + } + + logger.log(`Finished syncing space members: ${successCount} succeeded, ${failCount} failed`); + process.exit(0); + } catch (error) { + logger.error('Unexpected error in main function:', error); + process.exit(1); + } +} + +// Run the script +main(); diff --git a/apps/memoro/apps/backend/src/settings/settings-client.service.ts b/apps/memoro/apps/backend/src/settings/settings-client.service.ts new file mode 100644 index 000000000..24998eca2 --- /dev/null +++ b/apps/memoro/apps/backend/src/settings/settings-client.service.ts @@ -0,0 +1,118 @@ +import { 
Injectable, Logger } from '@nestjs/common'; +import { ConfigService } from '@nestjs/config'; + +@Injectable() +export class SettingsClientService { + private readonly logger = new Logger(SettingsClientService.name); + private readonly manaServiceUrl: string; + + constructor(private readonly configService: ConfigService) { + this.manaServiceUrl = this.configService.get<string>('MANA_SERVICE_URL'); + if (!this.manaServiceUrl) { + this.logger.warn('MANA_SERVICE_URL not configured'); + } + } + + async getUserSettings(token: string): Promise<any> { + try { + const response = await fetch(`${this.manaServiceUrl}/users/settings`, { + method: 'GET', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Failed to get user settings: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + return result.settings || {}; + } catch (error) { + this.logger.error(`Error getting user settings: ${error.message}`); + throw error; + } + } + + async updateMemoroSettings( + settings: { + dataUsageAcceptance?: boolean; + emailNewsletterOptIn?: boolean; + [key: string]: any; + }, + token: string + ): Promise<any> { + try { + const response = await fetch(`${this.manaServiceUrl}/users/settings/memoro`, { + method: 'PATCH', + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + body: JSON.stringify(settings), + }); + + if (!response.ok) { + const errorText = await response.text(); + throw new Error(`Failed to update Memoro settings: ${response.status} - ${errorText}`); + } + + const result = await response.json(); + return result.settings || {}; + } catch (error) { + this.logger.error(`Error updating Memoro settings: ${error.message}`); + throw error; + } + } + + async updateUserProfile( + profileData: { + firstName?: string; + lastName?: string; + avatarUrl?: string; + }, + token: string + ): Promise<any> { + try { + 
const response = await fetch(`${this.manaServiceUrl}/users/settings/profile`, {
+        method: 'PATCH',
+        headers: {
+          Authorization: `Bearer ${token}`,
+          'Content-Type': 'application/json',
+        },
+        body: JSON.stringify(profileData),
+      });
+
+      if (!response.ok) {
+        const errorText = await response.text();
+        throw new Error(`Failed to update user profile: ${response.status} - ${errorText}`);
+      }
+
+      const result = await response.json();
+      return result.user || {};
+    } catch (error) {
+      this.logger.error(`Error updating user profile: ${error.message}`);
+      throw error;
+    }
+  }
+
+  async getMemoroSettings(token: string): Promise<any> {
+    try {
+      const allSettings = await this.getUserSettings(token);
+      return allSettings.memoro || {};
+    } catch (error) {
+      this.logger.error(`Error getting Memoro settings: ${error.message}`);
+      throw error;
+    }
+  }
+
+  async updateDataUsageAcceptance(accepted: boolean, token: string): Promise<any> {
+    return this.updateMemoroSettings({ dataUsageAcceptance: accepted }, token);
+  }
+
+  async updateEmailNewsletterOptIn(optIn: boolean, token: string): Promise<any> {
+    return this.updateMemoroSettings({ emailNewsletterOptIn: optIn }, token);
+  }
+}
diff --git a/apps/memoro/apps/backend/src/settings/settings.controller.ts b/apps/memoro/apps/backend/src/settings/settings.controller.ts
new file mode 100644
index 000000000..35dbf9ef4
--- /dev/null
+++ b/apps/memoro/apps/backend/src/settings/settings.controller.ts
@@ -0,0 +1,135 @@
+import { Controller, Get, Patch, Body, UseGuards, Req, BadRequestException } from '@nestjs/common';
+import { AuthGuard } from '../guards/auth.guard';
+import { SettingsClientService } from './settings-client.service';
+
+@Controller('settings')
+@UseGuards(AuthGuard)
+export class SettingsController {
+  constructor(private readonly settingsClientService: SettingsClientService) {}
+
+  @Get()
+  async getSettings(@Req() req) {
+    const token = req.token;
+
+    try {
+      const settings = await this.settingsClientService.getUserSettings(token);
+
return { settings }; + } catch (error) { + throw new BadRequestException(`Failed to get settings: ${error.message}`); + } + } + + @Get('memoro') + async getMemoroSettings(@Req() req) { + const token = req.token; + + try { + const memoSettings = await this.settingsClientService.getMemoroSettings(token); + return { settings: memoSettings }; + } catch (error) { + throw new BadRequestException(`Failed to get Memoro settings: ${error.message}`); + } + } + + @Patch('memoro') + async updateMemoroSettings( + @Req() req, + @Body() + body: { + dataUsageAcceptance?: boolean; + emailNewsletterOptIn?: boolean; + [key: string]: any; + } + ) { + const token = req.token; + + if (Object.keys(body).length === 0) { + throw new BadRequestException('At least one setting field is required'); + } + + try { + const updatedSettings = await this.settingsClientService.updateMemoroSettings(body, token); + return { + success: true, + settings: updatedSettings, + message: 'Memoro settings updated successfully', + }; + } catch (error) { + throw new BadRequestException(`Failed to update Memoro settings: ${error.message}`); + } + } + + @Patch('memoro/data-usage') + async updateDataUsageAcceptance(@Req() req, @Body() body: { accepted: boolean }) { + const token = req.token; + + if (typeof body.accepted !== 'boolean') { + throw new BadRequestException('accepted field must be a boolean'); + } + + try { + const updatedSettings = await this.settingsClientService.updateDataUsageAcceptance( + body.accepted, + token + ); + return { + success: true, + settings: updatedSettings, + message: `Data usage ${body.accepted ? 
'accepted' : 'declined'} successfully`, + }; + } catch (error) { + throw new BadRequestException(`Failed to update data usage acceptance: ${error.message}`); + } + } + + @Patch('memoro/email-newsletter') + async updateEmailNewsletterOptIn(@Req() req, @Body() body: { optIn: boolean }) { + const token = req.token; + + if (typeof body.optIn !== 'boolean') { + throw new BadRequestException('optIn field must be a boolean'); + } + + try { + const updatedSettings = await this.settingsClientService.updateEmailNewsletterOptIn( + body.optIn, + token + ); + return { + success: true, + settings: updatedSettings, + message: `Email newsletter ${body.optIn ? 'opted in' : 'opted out'} successfully`, + }; + } catch (error) { + throw new BadRequestException(`Failed to update email newsletter opt-in: ${error.message}`); + } + } + + @Patch('profile') + async updateProfile( + @Req() req, + @Body() + body: { + firstName?: string; + lastName?: string; + avatarUrl?: string; + } + ) { + const token = req.token; + + if (Object.keys(body).length === 0) { + throw new BadRequestException('At least one profile field is required'); + } + + try { + const updatedUser = await this.settingsClientService.updateUserProfile(body, token); + return { + success: true, + user: updatedUser, + message: 'Profile updated successfully', + }; + } catch (error) { + throw new BadRequestException(`Failed to update profile: ${error.message}`); + } + } +} diff --git a/apps/memoro/apps/backend/src/settings/settings.module.ts b/apps/memoro/apps/backend/src/settings/settings.module.ts new file mode 100644 index 000000000..374d14f1d --- /dev/null +++ b/apps/memoro/apps/backend/src/settings/settings.module.ts @@ -0,0 +1,12 @@ +import { Module } from '@nestjs/common'; +import { SettingsController } from './settings.controller'; +import { SettingsClientService } from './settings-client.service'; +import { AuthModule } from '../auth/auth.module'; + +@Module({ + imports: [AuthModule], + controllers: [SettingsController], + 
providers: [SettingsClientService],
+  exports: [SettingsClientService],
+})
+export class SettingsModule {}
diff --git a/apps/memoro/apps/backend/src/spaces/space-sync.service.ts b/apps/memoro/apps/backend/src/spaces/space-sync.service.ts
new file mode 100644
index 000000000..bc1024a87
--- /dev/null
+++ b/apps/memoro/apps/backend/src/spaces/space-sync.service.ts
@@ -0,0 +1,283 @@
+import { Injectable, Logger } from '@nestjs/common';
+import { ConfigService } from '@nestjs/config';
+import { createClient, SupabaseClient } from '@supabase/supabase-js';
+
+@Injectable()
+export class SpaceSyncService {
+  private readonly supabaseServiceClient: SupabaseClient;
+  private readonly logger = new Logger(SpaceSyncService.name);
+
+  constructor(private configService: ConfigService) {
+    const supabaseUrl = this.configService.get<string>('MEMORO_SUPABASE_URL');
+    const supabaseServiceKey = this.configService.get<string>('MEMORO_SUPABASE_SERVICE_KEY');
+
+    if (!supabaseUrl || !supabaseServiceKey) {
+      throw new Error('Supabase configuration not provided');
+    }
+
+    this.supabaseServiceClient = createClient(supabaseUrl, supabaseServiceKey);
+    this.logger.log('SpaceSyncService initialized with Supabase service client');
+  }
+
+  /**
+   * Synchronizes a user's space membership to Supabase
+   * @param spaceId ID of the space
+   * @param userId ID of the user
+   * @param role Role of the user in the space
+   * @param addedBy ID of the user who added this member (optional)
+   */
+  async syncSpaceMembership(
+    spaceId: string,
+    userId: string,
+    role: string,
+    addedBy?: string
+  ): Promise<void> {
+    try {
+      this.logger.debug(
+        `Syncing membership for user ${userId} in space ${spaceId} with role ${role}`
+      );
+
+      const { error } = await this.supabaseServiceClient.from('space_members').upsert(
+        {
+          space_id: spaceId,
+          user_id: userId,
+          role: role,
+          added_at: new Date(),
+          added_by: addedBy || userId,
+        },
+        {
+          onConflict: 'space_id,user_id',
+        }
+      );
+
+      if (error) {
+        this.logger.error(`Failed to sync space membership: ${error.message}`, error);
+        throw new Error(`Failed to sync space membership: ${error.message}`);
+      }
+
+      this.logger.log(
+        `Successfully synced user ${userId} membership to space ${spaceId} with role ${role}`
+      );
+    } catch (error) {
+      this.logger.error('Error syncing space membership:', error);
+      throw error;
+    }
+  }
+
+  /**
+   * Removes a user's space membership in Supabase
+   * @param spaceId ID of the space
+   * @param userId ID of the user to remove
+   */
+  async removeSpaceMembership(spaceId: string, userId: string): Promise<void> {
+    try {
+      this.logger.debug(`Removing membership for user ${userId} from space ${spaceId}`);
+
+      const { error } = await this.supabaseServiceClient
+        .from('space_members')
+        .delete()
+        .eq('space_id', spaceId)
+        .eq('user_id', userId);
+
+      if (error) {
+        this.logger.error(`Failed to remove space membership: ${error.message}`, error);
+        throw new Error(`Failed to remove space membership: ${error.message}`);
+      }
+
+      this.logger.log(`Successfully removed user ${userId} from space ${spaceId}`);
+    } catch (error) {
+      this.logger.error('Error removing space membership:', error);
+      throw error;
+    }
+  }
+
+  /**
+   * Bulk synchronize all members for a space
+   * @param spaceId ID of the space
+   * @param members Array of member objects with userId, role, and optional addedBy
+   */
+  async syncAllSpaceMembers(
+    spaceId: string,
+    members: { userId: string; role: string; addedBy?: string }[]
+  ): Promise<void> {
+    try {
+      this.logger.debug(`Bulk syncing ${members.length} members for space ${spaceId}`);
+
+      // First, remove all existing members for this space to avoid stale entries
+      await this.clearAllSpaceMembers(spaceId);
+
+      // If there are no members to sync, we're done
+      if (members.length === 0) {
+        this.logger.log(`No members to sync for space ${spaceId}`);
+        return;
+      }
+
+      const memberRecords = members.map((member) => ({
+        space_id: spaceId,
+        user_id: member.userId,
+        role: member.role,
+        added_at: new Date(),
+        added_by: member.addedBy || member.userId,
+      }));
+
+      const { error } = await this.supabaseServiceClient
+        .from('space_members')
+        .upsert(memberRecords, {
+          onConflict: 'space_id,user_id',
+        });
+
+      if (error) {
+        this.logger.error(`Failed to bulk sync space members: ${error.message}`, error);
+        throw new Error(`Failed to bulk sync space members: ${error.message}`);
+      }
+
+      this.logger.log(`Successfully synced ${members.length} members to space ${spaceId}`);
+    } catch (error) {
+      this.logger.error('Error bulk syncing space members:', error);
+      throw error;
+    }
+  }
+
+  /**
+   * Clears all members for a space
+   * @param spaceId ID of the space
+   */
+  private async clearAllSpaceMembers(spaceId: string): Promise<void> {
+    try {
+      this.logger.debug(`Clearing all members for space ${spaceId}`);
+
+      const { error } = await this.supabaseServiceClient
+        .from('space_members')
+        .delete()
+        .eq('space_id', spaceId);
+
+      if (error) {
+        this.logger.error(`Failed to clear space members: ${error.message}`, error);
+        throw new Error(`Failed to clear space members: ${error.message}`);
+      }
+
+      this.logger.log(`Successfully cleared all members from space ${spaceId}`);
+    } catch (error) {
+      this.logger.error('Error clearing space members:', error);
+      throw error;
+    }
+  }
+
+  /**
+   * Check if the space_members table exists
+   * @returns Boolean indicating if the table exists
+   */
+  async checkSpaceMembersTableExists(): Promise<boolean> {
+    try {
+      // Try to query the table to see if it exists
+      const { data, error } = await this.supabaseServiceClient
+        .from('space_members')
+        .select('id')
+        .limit(1);
+
+      if (error && error.code === '42P01') {
+        // Table doesn't exist error
+        return false;
+      }
+
+      return true;
+    } catch (error) {
+      this.logger.error('Error checking space_members table existence:', error);
+      return false;
+    }
+  }
+
+  /**
+   * Run the migration to create the space_members table and RLS policies
+   * @param sqlContent SQL content to run (if not provided, uses default migration)
+   * @returns
Object with success status and message + */ + async runSpaceMembersMigration( + sqlContent?: string + ): Promise<{ success: boolean; message: string }> { + try { + // Check if table already exists + const tableExists = await this.checkSpaceMembersTableExists(); + if (tableExists) { + this.logger.log('space_members table already exists, skipping migration'); + return { success: true, message: 'space_members table already exists' }; + } + + this.logger.log('Running space_members table migration'); + + // Use the provided SQL content or a default migration + const sql = + sqlContent || + ` + -- Create the space_members table for synchronized space membership + CREATE TABLE IF NOT EXISTS space_members ( + id UUID PRIMARY KEY DEFAULT uuid_generate_v4(), + space_id UUID NOT NULL, + user_id UUID NOT NULL, + role TEXT NOT NULL, + added_at TIMESTAMP WITH TIME ZONE DEFAULT now(), + added_by UUID, + UNIQUE(space_id, user_id) + ); + + -- Create indexes for better performance + CREATE INDEX IF NOT EXISTS idx_space_members_user_id ON space_members(user_id); + CREATE INDEX IF NOT EXISTS idx_space_members_space_id ON space_members(space_id); + + -- Enable RLS on the table + ALTER TABLE space_members ENABLE ROW LEVEL SECURITY; + + -- Create policies for space_members table + CREATE POLICY "Users can see space membership they are part of" + ON space_members FOR SELECT + USING ( + user_id = auth.uid() OR + space_id IN ( + SELECT space_id FROM space_members + WHERE user_id = auth.uid() + ) + ); + + -- Update memo policies to allow access to memos in spaces user is member of + CREATE POLICY "Users can view memos in spaces they are members of" + ON memos FOR SELECT + USING ( + EXISTS ( + SELECT 1 FROM memo_spaces ms + JOIN space_members sm ON ms.space_id = sm.space_id + WHERE ms.memo_id = memos.id + AND sm.user_id = auth.uid() + ) + ); + + -- Policy for memo_spaces table to allow viewing of memo-space relationships + CREATE POLICY "Users can see memo-space links for spaces they are 
members of" + ON memo_spaces FOR SELECT + USING ( + EXISTS ( + SELECT 1 FROM space_members + WHERE space_members.space_id = memo_spaces.space_id + AND space_members.user_id = auth.uid() + ) + ); + `; + + // Execute the SQL migration using the service role client + const { error } = await this.supabaseServiceClient.rpc('pgmoon', { query: sql }); + + if (error) { + this.logger.error('Error running space_members migration:', error); + return { success: false, message: `Migration failed: ${error.message}` }; + } + + this.logger.log('Successfully ran space_members table migration'); + return { + success: true, + message: 'Successfully created space_members table and RLS policies', + }; + } catch (error) { + this.logger.error('Error running space_members migration:', error); + return { success: false, message: `Migration failed: ${error.message}` }; + } + } +} diff --git a/apps/memoro/apps/backend/src/spaces/spaces-client.service.ts b/apps/memoro/apps/backend/src/spaces/spaces-client.service.ts new file mode 100644 index 000000000..b67befcd2 --- /dev/null +++ b/apps/memoro/apps/backend/src/spaces/spaces-client.service.ts @@ -0,0 +1,536 @@ +import { + Injectable, + NotFoundException, + ForbiddenException, + BadRequestException, +} from '@nestjs/common'; +import { HttpService } from '@nestjs/axios'; +import { ConfigService } from '@nestjs/config'; +import { Observable, catchError, firstValueFrom, map, tap } from 'rxjs'; +import { AxiosError } from 'axios'; +import { + SpaceDto, + PendingInvitesResponseDto, + SpaceInviteDto, +} from '../interfaces/spaces.interfaces'; + +@Injectable() +export class SpacesClientService { + private spacesServiceUrl: string; + private memoroAppId: string; + + constructor( + private httpService: HttpService, + private configService: ConfigService + ) { + this.spacesServiceUrl = this.configService.get( + 'MANA_SERVICE_URL', + 'http://localhost:3000' + ); + this.memoroAppId = this.configService.get( + 'MEMORO_APP_ID', + 
'973da0c1-b479-4dac-a1b0-ed09c72caca8'
+    );
+  }
+
+  /**
+   * Gets spaces for a user by calling the Spaces service
+   */
+  async getUserSpaces(userId: string, token: string): Promise<SpaceDto[]> {
+    try {
+      console.log(`Calling spaces service at: ${this.spacesServiceUrl}/spaces`);
+      const response = await firstValueFrom(
+        this.httpService
+          .get(`${this.spacesServiceUrl}/spaces`, {
+            headers: {
+              Authorization: `Bearer ${token}`,
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Spaces not found');
+              }
+              throw new BadRequestException('Could not fetch spaces');
+            })
+          )
+      );
+
+      // Log the response for debugging
+      console.log('Spaces response received:', JSON.stringify(response, null, 2));
+
+      // Extract the spaces array from the response object
+      const spaces = response.spaces || [];
+      console.log(`Extracted ${spaces.length} spaces from response`);
+
+      return spaces;
+    } catch (error) {
+      if (error instanceof NotFoundException || error instanceof BadRequestException) {
+        throw error;
+      }
+      throw new BadRequestException('Could not fetch spaces');
+    }
+  }
+
+  /**
+   * Gets all invites for a space by calling the Spaces service
+   * @param spaceId The ID of the space to get invites for
+   * @param token Optional JWT token for authorization
+   * @returns Array of space invites
+   */
+  /**
+   * Invites a user to a space by email
+   * @param spaceId The ID of the space to invite to
+   * @param userEmail The email of the user to invite
+   * @param role The role to assign (owner, admin, editor, viewer)
+   * @param token JWT token for authorization
+   * @returns Object containing the inviteId
+   */
+  async addSpaceMember(
+    spaceId: string,
+    userEmail: string,
+    role: string,
+    token: string
+  ): Promise<any> {
+    try {
+      console.log(`Adding member to space ${spaceId}: ${userEmail} with role ${role}`);
+      const response = await
firstValueFrom(
+        this.httpService
+          .post(
+            `${this.spacesServiceUrl}/spaces/members`,
+            {
+              spaceId,
+              userEmail,
+              role,
+            },
+            {
+              headers: {
+                Authorization: `Bearer ${token}`,
+                'Content-Type': 'application/json',
+              },
+            }
+          )
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Space not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Not authorized to invite members to this space');
+              }
+              console.error('Error sending space invite:', error.message);
+              throw new BadRequestException(`Could not invite user to space: ${error.message}`);
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      console.error(`Error in addSpaceMember for space ${spaceId}:`, error);
+      throw error;
+    }
+  }
+
+  /**
+   * Resends an invitation to a user
+   * @param inviteId The ID of the invitation to resend
+   * @param token JWT token for authorization
+   * @returns Success status
+   */
+  async resendInvite(inviteId: string, token: string): Promise<any> {
+    try {
+      console.log(`Resending invite with ID: ${inviteId}`);
+      const response = await firstValueFrom(
+        this.httpService
+          .post(
+            `${this.spacesServiceUrl}/spaces/invites/${inviteId}/resend`,
+            {},
+            {
+              headers: {
+                Authorization: `Bearer ${token}`,
+                'Content-Type': 'application/json',
+              },
+            }
+          )
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Invitation not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Not authorized to resend this invitation');
+              }
+              console.error('Error resending invitation:', error.message);
+              throw new BadRequestException(`Could not resend invitation: ${error.message}`);
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      console.error(`Error in resendInvite for invite ${inviteId}:`, error);
+      throw error;
+    }
+  }
+
+  /**
* Cancels an invitation
+   * @param inviteId The ID of the invitation to cancel
+   * @param token JWT token for authorization
+   * @returns Success status
+   */
+  async cancelInvite(inviteId: string, token: string): Promise<any> {
+    try {
+      console.log(`Canceling invite with ID: ${inviteId}`);
+      const response = await firstValueFrom(
+        this.httpService
+          .delete(`${this.spacesServiceUrl}/spaces/invites/${inviteId}`, {
+            headers: {
+              Authorization: `Bearer ${token}`,
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Invitation not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Not authorized to cancel this invitation');
+              }
+              console.error('Error canceling invitation:', error.message);
+              throw new BadRequestException(`Could not cancel invitation: ${error.message}`);
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      console.error(`Error in cancelInvite for invite ${inviteId}:`, error);
+      throw error;
+    }
+  }
+
+  async getSpaceInvites(spaceId: string, token?: string): Promise<any> {
+    try {
+      // Special case: if 'user' is passed as spaceId, redirect to getUserPendingInvites
+      // This handles backward compatibility with frontend code that might be using
+      // the wrong endpoint
+      if (spaceId === 'user') {
+        console.log('Redirecting getSpaceInvites("user") to getUserPendingInvites()');
+        return this.getUserPendingInvites(token);
+      }
+
+      // Validate spaceId to ensure it's a valid value
+      if (!spaceId || spaceId.length < 5) {
+        throw new BadRequestException(`Invalid space ID: ${spaceId}`);
+      }
+
+      console.log(
+        `Getting space invites at: ${this.spacesServiceUrl}/spaces/space/${spaceId}/invites`
+      );
+      const response = await firstValueFrom(
+        this.httpService
+          .get(`${this.spacesServiceUrl}/spaces/space/${spaceId}/invites`, {
+            headers: {
+              Authorization: token ?
`Bearer ${token}` : '',
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException(`Invites for space ${spaceId} not found`);
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException(
+                  `Not authorized to access invites for space ${spaceId}`
+                );
+              }
+              console.error('Error fetching space invites:', error.message);
+              throw new BadRequestException(`Could not fetch invites for space ${spaceId}`);
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      console.error(`Error in getSpaceInvites for space ${spaceId}:`, error);
+      throw error;
+    }
+  }
+
+  /**
+   * Gets space details by calling the Spaces service
+   */
+  async getSpaceDetails(spaceId: string, token?: string): Promise<any> {
+    try {
+      console.log(`Getting space details at: ${this.spacesServiceUrl}/spaces/${spaceId}`);
+      const response = await firstValueFrom(
+        this.httpService
+          .get(`${this.spacesServiceUrl}/spaces/${spaceId}`, {
+            headers: {
+              Authorization: `Bearer ${token}`,
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Space not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Access denied');
+              }
+              throw new BadRequestException('Could not fetch space details');
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      if (
+        error instanceof NotFoundException ||
+        error instanceof ForbiddenException ||
+        error instanceof BadRequestException
+      ) {
+        throw error;
+      }
+      throw new BadRequestException('Could not fetch space details');
+    }
+  }
+
+  /**
+   * Creates a new space by calling the Spaces service
+   */
+  async createSpace(userId: string, spaceName: string, token: string) {
+    try {
+      console.log(`Creating space at: ${this.spacesServiceUrl}/spaces`);
+      // Hardcode the UUID to test if this resolves the issue
+      const appId = '973da0c1-b479-4dac-a1b0-ed09c72caca8';
+      console.log(`Using hardcoded app ID: ${appId}`);
+
+      const response = await firstValueFrom(
+        this.httpService
+          .post(
+            `${this.spacesServiceUrl}/spaces`,
+            {
+              name: spaceName,
+              appId, // Field name must match CreateSpaceDto in middleware
+            },
+            {
+              headers: {
+                Authorization: `Bearer ${token}`,
+                'Content-Type': 'application/json',
+              },
+            }
+          )
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              throw new BadRequestException('Could not create space');
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      throw new BadRequestException('Could not create space');
+    }
+  }
+
+  /**
+   * Verifies a user has access to a Memoro space and returns access details
+   */
+  async verifySpaceAccess(userId: string, spaceId: string, token: string): Promise<any> {
+    try {
+      console.log(`Verifying space access at: ${this.spacesServiceUrl}/spaces/${spaceId}/access`);
+      const response = await firstValueFrom(
+        this.httpService
+          .get(`${this.spacesServiceUrl}/spaces/${spaceId}/access`, {
+            headers: {
+              Authorization: `Bearer ${token}`,
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Space not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Access denied');
+              }
+              throw new BadRequestException('Could not verify space access');
+            })
+          )
+      );
+
+      // Verify this is a Memoro space
+      if (response.space.app_id !== this.memoroAppId) {
+        throw new ForbiddenException('This is not a Memoro
space');
+      }
+
+      // Return the full response which includes access information
+      return response;
+    } catch (error) {
+      if (error instanceof NotFoundException) {
+        return { access: { hasAccess: false, role: 'none' } };
+      } else if (error instanceof ForbiddenException) {
+        return { access: { hasAccess: false, role: 'none' } };
+      }
+      console.error(`Failed to verify space access: ${error.message}`);
+      return { access: { hasAccess: false, role: 'none' } };
+    }
+  }
+
+  /**
+   * Allow a non-owner to leave a space
+   */
+  async leaveSpace(userId: string, spaceId: string, token: string): Promise<any> {
+    try {
+      console.log(`Leaving space at: ${this.spacesServiceUrl}/spaces/${spaceId}/leave`);
+      const response = await firstValueFrom(
+        this.httpService
+          .post(
+            `${this.spacesServiceUrl}/spaces/${spaceId}/leave`,
+            {}, // Empty body
+            {
+              headers: {
+                Authorization: `Bearer ${token}`,
+                'Content-Type': 'application/json',
+              },
+            }
+          )
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Space not found');
+              } else if (error.response?.status === 403) {
+                // Safely access potential message in error response
+                const message =
+                  typeof error.response?.data === 'object' && error.response?.data
+                    ?
(error.response.data as any).message || 'Access denied'
+                    : 'Access denied';
+                throw new ForbiddenException(message);
+              }
+              throw new BadRequestException('Could not leave space');
+            })
+          )
+      );
+
+      return response;
+    } catch (error) {
+      if (error instanceof NotFoundException || error instanceof ForbiddenException) {
+        throw error;
+      }
+      throw new BadRequestException('Could not leave space');
+    }
+  }
+
+  /**
+   * Gets all pending invites for the current user
+   * @param token JWT token for authorization
+   * @returns Array of pending invites
+   */
+  async getUserPendingInvites(token: string): Promise<any> {
+    try {
+      console.log(
+        `Getting user pending invites at: ${this.spacesServiceUrl}/spaces/user/invites`
+      );
+      const response = await firstValueFrom(
+        this.httpService
+          .get(`${this.spacesServiceUrl}/spaces/user/invites`, {
+            headers: {
+              Authorization: `Bearer ${token}`,
+              'Content-Type': 'application/json',
+            },
+          })
+          .pipe(
+            map((response) => response.data),
+            catchError((error: AxiosError) => {
+              if (error.response?.status === 404) {
+                throw new NotFoundException('Pending invites not found');
+              } else if (error.response?.status === 403) {
+                throw new ForbiddenException('Not authorized to access pending invites');
+              }
+              console.error('Error fetching pending invites:', error.message);
+              throw new BadRequestException('Could not fetch pending invites');
+            })
+          )
+      );
+      return response;
+    } catch (error) {
+      console.error(`Error in getUserPendingInvites:`, error);
+      if (
+        error instanceof NotFoundException ||
+        error instanceof ForbiddenException ||
+        error instanceof BadRequestException
+      ) {
+        throw error;
+      }
+      throw new BadRequestException('Could not fetch pending invites');
+    }
+  }
+
+  /**
+   * Deletes a space by calling the Spaces service
+   */
+  async deleteSpace(userId: string, spaceId: string, token: string): Promise<any> {
+    try {
+      console.log(`Deleting space at: ${this.spacesServiceUrl}/spaces/${spaceId}`);
+      const
response = await firstValueFrom( + this.httpService + .delete(`${this.spacesServiceUrl}/spaces/${spaceId}`, { + headers: { + Authorization: `Bearer ${token}`, + 'Content-Type': 'application/json', + }, + }) + .pipe( + map((response) => response.data), + catchError((error: AxiosError) => { + if (error.response?.status === 404) { + throw new NotFoundException('Space not found'); + } else if (error.response?.status === 403) { + throw new ForbiddenException('Access denied'); + } + throw new BadRequestException('Could not delete space'); + }) + ) + ); + + return response; + } catch (error) { + if ( + error instanceof NotFoundException || + error instanceof ForbiddenException || + error instanceof BadRequestException + ) { + throw error; + } + throw new BadRequestException('Could not delete space'); + } + } +} diff --git a/apps/memoro/apps/backend/src/spaces/spaces.module.ts b/apps/memoro/apps/backend/src/spaces/spaces.module.ts new file mode 100644 index 000000000..433ec9468 --- /dev/null +++ b/apps/memoro/apps/backend/src/spaces/spaces.module.ts @@ -0,0 +1,12 @@ +import { Module } from '@nestjs/common'; +import { HttpModule } from '@nestjs/axios'; +import { ConfigModule } from '@nestjs/config'; +import { SpacesClientService } from './spaces-client.service'; +import { SpaceSyncService } from './space-sync.service'; + +@Module({ + imports: [HttpModule, ConfigModule], + providers: [SpacesClientService, SpaceSyncService], + exports: [SpacesClientService, SpaceSyncService], +}) +export class SpacesModule {} diff --git a/apps/memoro/apps/backend/src/types/jwt-payload.interface.ts b/apps/memoro/apps/backend/src/types/jwt-payload.interface.ts new file mode 100644 index 000000000..bac88bb29 --- /dev/null +++ b/apps/memoro/apps/backend/src/types/jwt-payload.interface.ts @@ -0,0 +1,9 @@ +export interface JwtPayload { + sub: string; // User ID + email?: string; // User email (optional) + role: string; // User role + app_id: string; // App ID + aud: string; // Audience (usually 
'authenticated')
+  iat?: number; // Issued at
+  exp?: number; // Expiration time
+}
diff --git a/apps/memoro/apps/backend/supabase/.gitignore b/apps/memoro/apps/backend/supabase/.gitignore
new file mode 100644
index 000000000..ad9264f0b
--- /dev/null
+++ b/apps/memoro/apps/backend/supabase/.gitignore
@@ -0,0 +1,8 @@
+# Supabase
+.branches
+.temp
+
+# dotenvx
+.env.keys
+.env.local
+.env.*.local
diff --git a/apps/memoro/apps/backend/supabase/config.toml b/apps/memoro/apps/backend/supabase/config.toml
new file mode 100644
index 000000000..d44022c1f
--- /dev/null
+++ b/apps/memoro/apps/backend/supabase/config.toml
@@ -0,0 +1,388 @@
+# For detailed configuration reference documentation, visit:
+# https://supabase.com/docs/guides/local-development/cli/config
+# A string used to distinguish different Supabase projects on the same host. Defaults to the
+# working directory name when running `supabase init`.
+project_id = "memoro_middleware"
+
+[api]
+enabled = true
+# Port to use for the API URL.
+port = 54321
+# Schemas to expose in your API. Tables, views and stored procedures in this schema will get API
+# endpoints. `public` and `graphql_public` schemas are included by default.
+schemas = ["public", "graphql_public"]
+# Extra schemas to add to the search_path of every request.
+extra_search_path = ["public", "extensions"]
+# The maximum number of rows returned from a view, table, or stored procedure. Limits payload size
+# for accidental or malicious requests.
+max_rows = 1000
+
+[api.tls]
+# Enable HTTPS endpoints locally using a self-signed certificate.
+enabled = false
+# Paths to self-signed certificate pair.
+# cert_path = "../certs/my-cert.pem"
+# key_path = "../certs/my-key.pem"
+
+[db]
+# Port to use for the local database URL.
+port = 54322
+# Port used by db diff command to initialize the shadow database.
+shadow_port = 54320
+# Maximum amount of time to wait for health check when starting the local database.
+health_timeout = "2m" +# The database major version to use. This has to be the same as your remote database's. Run `SHOW +# server_version;` on the remote database to check. +major_version = 17 + +[db.pooler] +enabled = false +# Port to use for the local connection pooler. +port = 54329 +# Specifies when a server connection can be reused by other clients. +# Configure one of the supported pooler modes: `transaction`, `session`. +pool_mode = "transaction" +# How many server connections to allow per user/database pair. +default_pool_size = 20 +# Maximum number of client connections allowed. +max_client_conn = 100 + +# [db.vault] +# secret_key = "env(SECRET_VALUE)" + +[db.migrations] +# If disabled, migrations will be skipped during a db push or reset. +enabled = true +# Specifies an ordered list of schema files that describe your database. +# Supports glob patterns relative to supabase directory: "./schemas/*.sql" +schema_paths = [] + +[db.seed] +# If enabled, seeds the database after migrations during a db reset. +enabled = true +# Specifies an ordered list of seed files to load during db reset. +# Supports glob patterns relative to supabase directory: "./seeds/*.sql" +sql_paths = ["./seed.sql"] + +[db.network_restrictions] +# Enable management of network restrictions. +enabled = false +# List of IPv4 CIDR blocks allowed to connect to the database. +# Defaults to allow all IPv4 connections. Set empty array to block all IPs. +allowed_cidrs = ["0.0.0.0/0"] +# List of IPv6 CIDR blocks allowed to connect to the database. +# Defaults to allow all IPv6 connections. Set empty array to block all IPs. +allowed_cidrs_v6 = ["::/0"] + +# Uncomment to reject non-secure connections to the database. +# [db.ssl_enforcement] +# enabled = true + +[realtime] +enabled = true +# Bind realtime via either IPv4 or IPv6. (default: IPv4) +# ip_version = "IPv6" +# The maximum length in bytes of HTTP request headers. 
(default: 4096) +# max_header_length = 4096 + +[studio] +enabled = true +# Port to use for Supabase Studio. +port = 54323 +# External URL of the API server that frontend connects to. +api_url = "http://127.0.0.1" +# OpenAI API Key to use for Supabase AI in the Supabase Studio. +openai_api_key = "env(OPENAI_API_KEY)" + +# Email testing server. Emails sent with the local dev setup are not actually sent - rather, they +# are monitored, and you can view the emails that would have been sent from the web interface. +[inbucket] +enabled = true +# Port to use for the email testing server web interface. +port = 54324 +# Uncomment to expose additional ports for testing user applications that send emails. +# smtp_port = 54325 +# pop3_port = 54326 +# admin_email = "admin@email.com" +# sender_name = "Admin" + +[storage] +enabled = true +# The maximum file size allowed (e.g. "5MB", "500KB"). +file_size_limit = "50MiB" + +# Uncomment to configure local storage buckets +# [storage.buckets.images] +# public = false +# file_size_limit = "50MiB" +# allowed_mime_types = ["image/png", "image/jpeg"] +# objects_path = "./images" + +# Allow connections via S3 compatible clients +[storage.s3_protocol] +enabled = true + +# Image transformation API is available to Supabase Pro plan. +# [storage.image_transformation] +# enabled = true + +# Store analytical data in S3 for running ETL jobs over Iceberg Catalog +# This feature is only available on the hosted platform. +[storage.analytics] +enabled = false +max_namespaces = 5 +max_tables = 10 +max_catalogs = 2 + +# Analytics Buckets is available to Supabase Pro plan. +# [storage.analytics.buckets.my-warehouse] + +# Store vector embeddings in S3 for large and durable datasets +# This feature is only available on the hosted platform. +[storage.vector] +enabled = false +max_buckets = 10 +max_indexes = 5 + +# Vector Buckets is available to Supabase Pro plan. 
+# [storage.vector.buckets.documents-openai] + +[auth] +enabled = true +# The base URL of your website. Used as an allow-list for redirects and for constructing URLs used +# in emails. +site_url = "http://127.0.0.1:3000" +# A list of *exact* URLs that auth providers are permitted to redirect to post authentication. +additional_redirect_urls = ["https://127.0.0.1:3000"] +# How long tokens are valid for, in seconds. Defaults to 3600 (1 hour), maximum 604,800 (1 week). +jwt_expiry = 3600 +# JWT issuer URL. If not set, defaults to the local API URL (http://127.0.0.1:/auth/v1). +# jwt_issuer = "" +# Path to JWT signing key. DO NOT commit your signing keys file to git. +# signing_keys_path = "./signing_keys.json" +# If disabled, the refresh token will never expire. +enable_refresh_token_rotation = true +# Allows refresh tokens to be reused after expiry, up to the specified interval in seconds. +# Requires enable_refresh_token_rotation = true. +refresh_token_reuse_interval = 10 +# Allow/disallow new user signups to your project. +enable_signup = true +# Allow/disallow anonymous sign-ins to your project. +enable_anonymous_sign_ins = false +# Allow/disallow testing manual linking of accounts +enable_manual_linking = false +# Passwords shorter than this value will be rejected as weak. Minimum 6, recommended 8 or more. +minimum_password_length = 6 +# Passwords that do not meet the following requirements will be rejected as weak. Supported values +# are: `letters_digits`, `lower_upper_letters_digits`, `lower_upper_letters_digits_symbols` +password_requirements = "" + +[auth.rate_limit] +# Number of emails that can be sent per hour. Requires auth.email.smtp to be enabled. +email_sent = 2 +# Number of SMS messages that can be sent per hour. Requires auth.sms to be enabled. +sms_sent = 30 +# Number of anonymous sign-ins that can be made per hour per IP address. Requires enable_anonymous_sign_ins = true. 
+anonymous_users = 30 +# Number of sessions that can be refreshed in a 5 minute interval per IP address. +token_refresh = 150 +# Number of sign up and sign-in requests that can be made in a 5 minute interval per IP address (excludes anonymous users). +sign_in_sign_ups = 30 +# Number of OTP / Magic link verifications that can be made in a 5 minute interval per IP address. +token_verifications = 30 +# Number of Web3 logins that can be made in a 5 minute interval per IP address. +web3 = 30 + +# Configure one of the supported captcha providers: `hcaptcha`, `turnstile`. +# [auth.captcha] +# enabled = true +# provider = "hcaptcha" +# secret = "" + +[auth.email] +# Allow/disallow new user signups via email to your project. +enable_signup = true +# If enabled, a user will be required to confirm any email change on both the old, and new email +# addresses. If disabled, only the new email is required to confirm. +double_confirm_changes = true +# If enabled, users need to confirm their email address before signing in. +enable_confirmations = false +# If enabled, users will need to reauthenticate or have logged in recently to change their password. +secure_password_change = false +# Controls the minimum amount of time that must pass before sending another signup confirmation or password reset email. +max_frequency = "1s" +# Number of characters used in the email OTP. +otp_length = 6 +# Number of seconds before the email OTP expires (defaults to 1 hour). 
+otp_expiry = 3600 + +# Use a production-ready SMTP server +# [auth.email.smtp] +# enabled = true +# host = "smtp.sendgrid.net" +# port = 587 +# user = "apikey" +# pass = "env(SENDGRID_API_KEY)" +# admin_email = "admin@email.com" +# sender_name = "Admin" + +# Uncomment to customize email template +# [auth.email.template.invite] +# subject = "You have been invited" +# content_path = "./supabase/templates/invite.html" + +# Uncomment to customize notification email template +# [auth.email.notification.password_changed] +# enabled = true +# subject = "Your password has been changed" +# content_path = "./templates/password_changed_notification.html" + +[auth.sms] +# Allow/disallow new user signups via SMS to your project. +enable_signup = false +# If enabled, users need to confirm their phone number before signing in. +enable_confirmations = false +# Template for sending OTP to users +template = "Your code is {{ .Code }}" +# Controls the minimum amount of time that must pass before sending another sms otp. +max_frequency = "5s" + +# Use pre-defined map of phone number to OTP for testing. +# [auth.sms.test_otp] +# 4152127777 = "123456" + +# Configure logged in session timeouts. +# [auth.sessions] +# Force log out after the specified duration. +# timebox = "24h" +# Force log out if the user has been inactive longer than the specified duration. +# inactivity_timeout = "8h" + +# This hook runs before a new user is created and allows developers to reject the request based on the incoming user object. +# [auth.hook.before_user_created] +# enabled = true +# uri = "pg-functions://postgres/auth/before-user-created-hook" + +# This hook runs before a token is issued and allows you to add additional claims based on the authentication method used. +# [auth.hook.custom_access_token] +# enabled = true +# uri = "pg-functions:////" + +# Configure one of the supported SMS providers: `twilio`, `twilio_verify`, `messagebird`, `textlocal`, `vonage`. 
+[auth.sms.twilio] +enabled = false +account_sid = "" +message_service_sid = "" +# DO NOT commit your Twilio auth token to git. Use environment variable substitution instead: +auth_token = "env(SUPABASE_AUTH_SMS_TWILIO_AUTH_TOKEN)" + +# Multi-factor authentication is available on the Supabase Pro plan. +[auth.mfa] +# Control how many MFA factors can be enrolled at once per user. +max_enrolled_factors = 10 + +# Control MFA via App Authenticator (TOTP) +[auth.mfa.totp] +enroll_enabled = false +verify_enabled = false + +# Configure MFA via Phone Messaging +[auth.mfa.phone] +enroll_enabled = false +verify_enabled = false +otp_length = 6 +template = "Your code is {{ .Code }}" +max_frequency = "5s" + +# Configure MFA via WebAuthn +# [auth.mfa.web_authn] +# enroll_enabled = true +# verify_enabled = true + +# Use an external OAuth provider. The full list of providers is: `apple`, `azure`, `bitbucket`, +# `discord`, `facebook`, `github`, `gitlab`, `google`, `keycloak`, `linkedin_oidc`, `notion`, `twitch`, +# `twitter`, `x`, `slack`, `spotify`, `workos`, `zoom`. +[auth.external.apple] +enabled = false +client_id = "" +# DO NOT commit your OAuth provider secret to git. Use environment variable substitution instead: +secret = "env(SUPABASE_AUTH_EXTERNAL_APPLE_SECRET)" +# Overrides the default auth redirectUrl. +redirect_uri = "" +# Overrides the default auth provider URL. Used to support self-hosted gitlab, single-tenant Azure, +# or any other third-party OIDC providers. +url = "" +# If enabled, the nonce check will be skipped. Required for local sign in with Google auth. +skip_nonce_check = false +# If enabled, it will allow the user to successfully authenticate when the provider does not return an email address. +email_optional = false + +# Allow Solana wallet holders to sign in to your project via the Sign in with Solana (SIWS, EIP-4361) standard. +# You can configure "web3" rate limit in the [auth.rate_limit] section and set up [auth.captcha] if self-hosting.
+[auth.web3.solana] +enabled = false + +# Use Firebase Auth as a third-party provider alongside Supabase Auth. +[auth.third_party.firebase] +enabled = false +# project_id = "my-firebase-project" + +# Use Auth0 as a third-party provider alongside Supabase Auth. +[auth.third_party.auth0] +enabled = false +# tenant = "my-auth0-tenant" +# tenant_region = "us" + +# Use AWS Cognito (Amplify) as a third-party provider alongside Supabase Auth. +[auth.third_party.aws_cognito] +enabled = false +# user_pool_id = "my-user-pool-id" +# user_pool_region = "us-east-1" + +# Use Clerk as a third-party provider alongside Supabase Auth. +[auth.third_party.clerk] +enabled = false +# Obtain from https://clerk.com/setup/supabase +# domain = "example.clerk.accounts.dev" + +# OAuth server configuration +[auth.oauth_server] +# Enable OAuth server functionality +enabled = false +# Path for OAuth consent flow UI +authorization_url_path = "/oauth/consent" +# Allow dynamic client registration +allow_dynamic_registration = false + +[edge_runtime] +enabled = true +# Supported request policies: `oneshot`, `per_worker`. +# `per_worker` (default) — enables hot reload during local development. +# `oneshot` — fallback mode if hot reload causes issues (e.g. in large repos or with symlinks). +policy = "per_worker" +# Port to attach the Chrome inspector for debugging edge functions. +inspector_port = 8083 +# The Deno major version to use. +deno_version = 2 + +# [edge_runtime.secrets] +# secret_key = "env(SECRET_VALUE)" + +[analytics] +enabled = true +port = 54327 +# Configure one of the supported backends: `postgres`, `bigquery`. +backend = "postgres" + +# Experimental features may be deprecated any time +[experimental] +# Configures Postgres storage engine to use OrioleDB (S3) +orioledb_version = "" +# Configures S3 bucket URL, eg. .s3-.amazonaws.com +s3_host = "env(S3_HOST)" +# Configures S3 bucket region, eg. 
us-east-1 +s3_region = "env(S3_REGION)" +# Configures AWS_ACCESS_KEY_ID for S3 bucket +s3_access_key = "env(S3_ACCESS_KEY)" +# Configures AWS_SECRET_ACCESS_KEY for S3 bucket +s3_secret_key = "env(S3_SECRET_KEY)" diff --git a/apps/memoro/apps/backend/supabase/functions/_shared/system-prompt.ts b/apps/memoro/apps/backend/supabase/functions/_shared/system-prompt.ts new file mode 100644 index 000000000..2f5eb38b1 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/_shared/system-prompt.ts @@ -0,0 +1,199 @@ +/** + * Root system prompts for all Edge Functions + * + * These prompts are used as the basis for all text analysis and processing functions. + * Each language has its own prompt that takes its specific requirements into account. + */ + +export const ROOT_SYSTEM_PROMPTS = { + PRE_PROMPT: { + // German + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte in Markdown mit einem schönen Format. Nutze keine Tabellen und keinen Code in Markdown. Antworte präzise, strukturiert und hilfreich.', + + // English + en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process conversation transcripts according to the given instructions. Respond in Markdown with a nice format. Do not use tables or code in Markdown. Respond precisely, structured, and helpfully.', + + // French + fr: "Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez en Markdown avec un beau format. N'utilisez pas de tableaux ou de code en Markdown. Répondez de manière précise, structurée et utile.", + + // Spanish + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde en Markdown con un formato atractivo.
No uses tablas o código en Markdown. Responde de manera precisa, estructurada y útil.', + + // Italian + it: 'Sei un assistente utile che analizza ed elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni fornite. Rispondi in Markdown con un bel formato. Non usare tabelle o codice in Markdown. Rispondi in modo preciso, strutturato e utile.', + + // Dutch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord in Markdown met een mooi formaat. Gebruik geen tabellen of code in Markdown. Antwoord precies, gestructureerd en behulpzaam.', + + // Portuguese + pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções fornecidas. Responda em Markdown com um formato bonito. Não use tabelas ou código em Markdown. Responda de forma precisa, estruturada e útil.', + + // Russian + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров в соответствии с данными инструкциями. Отвечайте в Markdown с красивым форматированием. Не используйте таблицы или код в Markdown. Отвечайте точно, структурированно и полезно.', + + // Japanese + ja: 'あなたはテキストを分析し処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の文字起こしを処理することです。Markdownで美しいフォーマットで回答してください。Markdownでテーブルやコードを使用しないでください。正確で、構造化され、役立つように回答してください。', + + // Korean + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화 녹취록을 처리하는 것입니다. 멋진 형식의 Markdown으로 응답하세요. Markdown에서 표나 코드를 사용하지 마세요. 정확하고 구조화되며 도움이 되도록 응답하세요.', + + // Chinese + zh: '你是一个有用的助手,分析和处理文本。你的任务是根据给定的指示处理对话记录。以优美的Markdown格式回复。不要在Markdown中使用表格或代码。回复要准确、有条理、有帮助。', + + // Arabic + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نصوص المحادثات وفقًا للتعليمات المعطاة. أجب بتنسيق Markdown جميل. لا تستخدم الجداول أو الكود في Markdown.
أجب بدقة وبشكل منظم ومفيد.', + + // Hindi + hi: 'आप एक सहायक सहायक हैं जो ग्रंथों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार वार्तालाप प्रतिलेखों को संसाधित करना है। एक अच्छे प्रारूप के साथ Markdown में उत्तर दें। Markdown में तालिकाओं या कोड का उपयोग न करें। सटीक, संरचित और सहायक रूप से उत्तर दें।', + + // Turkish + tr: "Metinleri analiz eden ve işleyen yardımcı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Güzel bir formatla Markdown'da yanıt verin. Markdown'da tablo veya kod kullanmayın. Kesin, yapılandırılmış ve yararlı bir şekilde yanıt verin.", + + // Polish + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj w Markdown z ładnym formatowaniem. Nie używaj tabel ani kodu w Markdown. Odpowiadaj precyzyjnie, strukturalnie i pomocnie.', + + // Danish + da: 'Du er en hjælpsom assistent, der analyserer og behandler tekster. Din opgave er at behandle samtaleudskrifter i henhold til de givne instruktioner. Svar i Markdown med et pænt format. Brug ikke tabeller eller kode i Markdown. Svar præcist, struktureret og hjælpsomt.', + + // Swedish + sv: 'Du är en hjälpsam assistent som analyserar och bearbetar texter. Din uppgift är att bearbeta samtalstranskriptioner enligt givna instruktioner. Svara i Markdown med ett snyggt format. Använd inte tabeller eller kod i Markdown. Svara exakt, strukturerat och hjälpsamt.', + + // Norwegian + nb: 'Du er en hjelpsom assistent som analyserer og behandler tekster. Din oppgave er å behandle samtaletranskripsjoner i henhold til gitte instruksjoner. Svar i Markdown med et pent format. Ikke bruk tabeller eller kode i Markdown. Svar presist, strukturert og hjelpsomt.', + + // Finnish + fi: 'Olet hyödyllinen avustaja, joka analysoi ja käsittelee tekstejä. Tehtäväsi on käsitellä keskustelulitterointeja annettujen ohjeiden mukaisesti.
Vastaa Markdownissa kauniilla muotoilulla. Älä käytä taulukoita tai koodia Markdownissa. Vastaa tarkasti, jäsennellysti ja avuliaasti.', + + // Czech + cs: 'Jste užitečný asistent, který analyzuje a zpracovává texty. Vaším úkolem je zpracovávat přepisy konverzací podle daných pokynů. Odpovězte v Markdownu s pěkným formátováním. Nepoužívejte tabulky nebo kód v Markdownu. Odpovězte přesně, strukturovaně a užitečně.', + + // Hungarian + hu: 'Ön egy hasznos asszisztens, aki szövegeket elemez és dolgoz fel. Az Ön feladata a beszélgetések átiratainak feldolgozása a megadott utasítások szerint. Válaszoljon Markdownban szép formázással. Ne használjon táblázatokat vagy kódot a Markdownban. Válaszoljon pontosan, strukturáltan és hasznossan.', + + // Greek + el: 'Είστε ένας χρήσιμος βοηθός που αναλύει και επεξεργάζεται κείμενα. Το καθήκον σας είναι να επεξεργάζεστε μεταγραφές συνομιλιών σύμφωνα με τις δοθείσες οδηγίες. Απαντήστε σε Markdown με όμορφη μορφοποίηση. Μην χρησιμοποιείτε πίνακες ή κώδικα στο Markdown. Απαντήστε με ακρίβεια, δομημένα και χρήσιμα.', + + // Hebrew + he: 'אתה עוזר מועיל שמנתח ומעבד טקסטים. המשימה שלך היא לעבד תמלילי שיחות בהתאם להוראות שניתנו. הגב ב-Markdown עם עיצוב יפה. אל תשתמש בטבלאות או קוד ב-Markdown. הגב בצורה מדויקת, מובנית ומועילה.', + + // Indonesian + id: 'Anda adalah asisten yang membantu menganalisis dan memproses teks. Tugas Anda adalah memproses transkrip percakapan sesuai dengan instruksi yang diberikan. Tanggapi dalam Markdown dengan format yang bagus. Jangan gunakan tabel atau kode dalam Markdown. Tanggapi dengan tepat, terstruktur, dan bermanfaat.', + + // Thai + th: 'คุณเป็นผู้ช่วยที่มีประโยชน์ที่วิเคราะห์และประมวลผลข้อความ งานของคุณคือประมวลผลบทสนทนาตามคำแนะนำที่กำหนด ตอบกลับใน Markdown ด้วยรูปแบบที่สวยงาม อย่าใช้ตารางหรือโค้ดใน Markdown ตอบกลับอย่างแม่นยำ มีโครงสร้าง และเป็นประโยชน์', + + // Vietnamese + vi: 'Bạn là một trợ lý hữu ích phân tích và xử lý văn bản.
Nhiệm vụ của bạn là xử lý bản ghi cuộc trò chuyện theo hướng dẫn đã cho. Trả lời bằng Markdown với định dạng đẹp. Không sử dụng bảng hoặc mã trong Markdown. Trả lời chính xác, có cấu trúc và hữu ích.', + + // Ukrainian + uk: 'Ви корисний помічник, який аналізує та обробляє тексти. Ваше завдання - обробляти розшифровки розмов відповідно до наданих інструкцій. Відповідайте в Markdown з гарним форматуванням. Не використовуйте таблиці або код у Markdown. Відповідайте точно, структуровано та корисно.', + + // Romanian + ro: 'Sunteți un asistent util care analizează și procesează texte. Sarcina dvs. este să procesați transcrierile conversațiilor conform instrucțiunilor date. Răspundeți în Markdown cu un format frumos. Nu utilizați tabele sau cod în Markdown. Răspundeți precis, structurat și util.', + + // Bulgarian + bg: 'Вие сте полезен асистент, който анализира и обработва текстове. Вашата задача е да обработвате транскрипции на разговори според дадените инструкции. Отговорете в Markdown с красив формат. Не използвайте таблици или код в Markdown. Отговорете точно, структурирано и полезно.', + + // Catalan + ca: 'Ets un assistent útil que analitza i processa textos. La teva tasca és processar transcripcions de converses segons les instruccions donades. Respon en Markdown amb un format bonic. No utilitzis taules o codi en Markdown. Respon de manera precisa, estructurada i útil.', + + // Croatian + hr: 'Vi ste korisni asistent koji analizira i obrađuje tekstove. Vaš zadatak je obraditi transkripcije razgovora prema danim uputama. Odgovorite u Markdownu s lijepim formatom. Ne koristite tablice ili kod u Markdownu. Odgovorite precizno, strukturirano i korisno.', + + // Slovak + sk: 'Ste užitočný asistent, ktorý analyzuje a spracováva texty. Vašou úlohou je spracovávať prepisy konverzácií podľa daných pokynov. Odpovedzte v Markdowne s pekným formátovaním. Nepoužívajte tabuľky alebo kód v Markdowne.
Odpovedzte presne, štruktúrovane a užitočne.', + + // Estonian + et: 'Olete kasulik assistent, kes analüüsib ja töötleb tekste. Teie ülesanne on töödelda vestluste ärakirju vastavalt antud juhistele. Vastake Markdownis ilusa vorminguga. Ärge kasutage Markdownis tabeleid ega koodi. Vastake täpselt, struktureeritult ja kasulikult.', + + // Latvian + lv: 'Jūs esat noderīgs asistents, kas analizē un apstrādā tekstus. Jūsu uzdevums ir apstrādāt sarunu atšifrējumus saskaņā ar dotajiem norādījumiem. Atbildiet Markdown ar skaistu formatējumu. Neizmantojiet tabulas vai kodu Markdown. Atbildiet precīzi, strukturēti un noderīgi.', + + // Lithuanian + lt: 'Esate naudingas asistentas, kuris analizuoja ir apdoroja tekstus. Jūsų užduotis yra apdoroti pokalbių stenogramas pagal pateiktas instrukcijas. Atsakykite Markdown su gražiu formatavimu. Nenaudokite lentelių ar kodo Markdown. Atsakykite tiksliai, struktūrizuotai ir naudingai.', + + // Bengali + bn: 'আপনি একজন সহায়ক সহকারী যিনি পাঠ্য বিশ্লেষণ এবং প্রক্রিয়া করেন। আপনার কাজ হল প্রদত্ত নির্দেশাবলী অনুসারে কথোপকথনের প্রতিলিপি প্রক্রিয়া করা। সুন্দর বিন্যাসের সাথে Markdown-এ উত্তর দিন। Markdown-এ টেবিল বা কোড ব্যবহার করবেন না। সুনির্দিষ্ট, কাঠামোগত এবং সহায়কভাবে উত্তর দিন।', + + // Malay + ms: 'Anda adalah pembantu berguna yang menganalisis dan memproses teks. Tugas anda adalah memproses transkrip perbualan mengikut arahan yang diberikan. Balas dalam Markdown dengan format yang cantik. Jangan gunakan jadual atau kod dalam Markdown. Balas dengan tepat, berstruktur dan berguna.', + + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து செயலாக்கும் பயனுள்ள உதவியாளர். கொடுக்கப்பட்ட அறிவுறுத்தல்களின்படி உரையாடல் படியெடுப்புகளை செயலாக்குவது உங்கள் பணி. அழகான வடிவத்துடன் Markdown இல் பதிலளிக்கவும். Markdown இல் அட்டவணைகள் அல்லது குறியீட்டைப் பயன்படுத்த வேண்டாம். துல்லியமாக, கட்டமைக்கப்பட்ட மற்றும் பயனுள்ள வகையில் பதிலளிக்கவும்.', + + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి ప్రాసెస్ చేసే సహాయక అసిస్టెంట్.
ఇచ్చిన సూచనల ప్రకారం సంభాషణ ట్రాన్స్‌క్రిప్ట్‌లను ప్రాసెస్ చేయడం మీ పని. అందమైన ఫార్మాట్‌తో Markdown లో స్పందించండి. Markdown లో పట్టికలు లేదా కోడ్ ఉపయోగించవద్దు. ఖచ్చితంగా, నిర్మాణాత్మకంగా మరియు సహాయకరంగా స్పందించండి.', + + // Urdu + ur: 'آپ ایک مددگار معاون ہیں جو متن کا تجزیہ اور عمل کرتے ہیں۔ آپ کا کام دی گئی ہدایات کے مطابق گفتگو کی نقلیں پروسیس کرنا ہے۔ خوبصورت فارمیٹ کے ساتھ Markdown میں جواب دیں۔ Markdown میں ٹیبلز یا کوڈ استعمال نہ کریں۔ درست، منظم اور مددگار طریقے سے جواب دیں۔', + + // Marathi + mr: 'तुम्ही एक उपयुक्त सहाय्यक आहात जो मजकूरांचे विश्लेषण आणि प्रक्रिया करतो. दिलेल्या सूचनांनुसार संभाषण प्रतिलेखनांवर प्रक्रिया करणे हे तुमचे कार्य आहे. सुंदर स्वरूपासह Markdown मध्ये उत्तर द्या. Markdown मध्ये सारण्या किंवा कोड वापरू नका. अचूक, संरचित आणि उपयुक्त पद्धतीने उत्तर द्या.', + + // Gujarati + gu: 'તમે એક મદદરૂપ સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને પ્રક્રિયા કરે છે. આપેલી સૂચનાઓ અનુસાર વાતચીતની ટ્રાન્સક્રિપ્ટ્સ પર પ્રક્રિયા કરવી એ તમારું કામ છે. સુંદર ફોર્મેટ સાથે Markdown માં જવાબ આપો. Markdown માં કોષ્ટકો અથવા કોડનો ઉપયોગ કરશો નહીં. ચોક્કસ, સંરચિત અને મદદરૂપ રીતે જવાબ આપો.', + + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും പ്രോസസ്സ് ചെയ്യുകയും ചെയ്യുന്ന സഹായകരമായ സഹായിയാണ്. നൽകിയിരിക്കുന്ന നിർദ്ദേശങ്ങൾ അനുസരിച്ച് സംഭാഷണ ട്രാൻസ്ക്രിപ്റ്റുകൾ പ്രോസസ്സ് ചെയ്യുക എന്നതാണ് നിങ്ങളുടെ ജോലി. മനോഹരമായ ഫോർമാറ്റിൽ Markdown ൽ പ്രതികരിക്കുക. Markdown ൽ ടേബിളുകളോ കോഡോ ഉപയോഗിക്കരുത്. കൃത്യമായും ഘടനാപരമായും സഹായകരമായും പ്രതികരിക്കുക.', + + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವ ಸಹಾಯಕ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ನೀಡಿದ ಸೂಚನೆಗಳ ಪ್ರಕಾರ ಸಂಭಾಷಣೆ ಪ್ರತಿಲಿಪಿಗಳನ್ನು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ. ಸುಂದರ ಸ್ವರೂಪದೊಂದಿಗೆ Markdown ನಲ್ಲಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ. Markdown ನಲ್ಲಿ ಕೋಷ್ಟಕಗಳು ಅಥವಾ ಕೋಡ್ ಬಳಸಬೇಡಿ. 
ನಿಖರವಾಗಿ, ರಚನಾತ್ಮಕವಾಗಿ ಮತ್ತು ಸಹಾಯಕವಾಗಿ ಪ್ರತಿಕ್ರಿಯಿಸಿ.', + + // Punjabi + pa: 'ਤੁਸੀਂ ਇੱਕ ਮਦਦਗਾਰ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਪ੍ਰਕਿਰਿਆ ਕਰਦੇ ਹੋ। ਤੁਹਾਡਾ ਕੰਮ ਦਿੱਤੀਆਂ ਹਦਾਇਤਾਂ ਅਨੁਸਾਰ ਗੱਲਬਾਤ ਦੀਆਂ ਨਕਲਾਂ ਨੂੰ ਪ੍ਰਕਿਰਿਆ ਕਰਨਾ ਹੈ। ਸੁੰਦਰ ਫਾਰਮੈਟ ਨਾਲ Markdown ਵਿੱਚ ਜਵਾਬ ਦਿਓ। Markdown ਵਿੱਚ ਸਾਰਣੀਆਂ ਜਾਂ ਕੋਡ ਦੀ ਵਰਤੋਂ ਨਾ ਕਰੋ। ਸਟੀਕ, ਢਾਂਚਾਗਤ ਅਤੇ ਮਦਦਗਾਰ ਢੰਗ ਨਾਲ ਜਵਾਬ ਦਿਓ।', + + // Afrikaans + af: "Jy is 'n nuttige assistent wat tekste ontleed en verwerk. Jou taak is om gespreksafskrifte te verwerk volgens die gegewe instruksies. Antwoord in Markdown met 'n mooi formaat. Moenie tabelle of kode in Markdown gebruik nie. Antwoord presies, gestruktureerd en nuttig.", + + // Persian + fa: 'شما یک دستیار مفید هستید که متون را تحلیل و پردازش می‌کند. وظیفه شما پردازش رونوشت‌های مکالمات طبق دستورالعمل‌های داده شده است. با فرمت زیبا در Markdown پاسخ دهید. از جداول یا کد در Markdown استفاده نکنید. به طور دقیق، ساختاریافته و مفید پاسخ دهید.', + + // Georgian + ka: 'თქვენ ხართ სასარგებლო ასისტენტი, რომელიც აანალიზებს და ამუშავებს ტექსტებს. თქვენი ამოცანაა საუბრების ჩანაწერების დამუშავება მოცემული ინსტრუქციების შესაბამისად. უპასუხეთ Markdown-ში ლამაზი ფორმატით. არ გამოიყენოთ ცხრილები ან კოდი Markdown-ში. უპასუხეთ ზუსტად, სტრუქტურირებულად და სასარგებლოდ.', + + // Icelandic + is: 'Þú ert gagnlegur aðstoðarmaður sem greinir og vinnur úr textum. Verkefni þitt er að vinna úr samtalsskrám samkvæmt gefnum leiðbeiningum. Svaraðu í Markdown með fallegu sniði. Notaðu ekki töflur eða kóða í Markdown. Svaraðu nákvæmlega, skipulega og gagnlega.', + + // Albanian + sq: 'Ju jeni një asistent i dobishëm që analizon dhe përpunon tekste. Detyra juaj është të përpunoni transkriptet e bisedave sipas udhëzimeve të dhëna. Përgjigjuni në Markdown me një format të bukur. Mos përdorni tabela ose kod në Markdown. Përgjigjuni saktësisht, të strukturuar dhe të dobishëm.', + + // Azerbaijani + az: 'Siz mətnləri təhlil edən və emal edən faydalı köməkçisiniz.
Sizin vəzifəniz verilmiş təlimatlara uyğun olaraq söhbət transkriptlərini emal etməkdir. Gözəl formatla Markdown-da cavab verin. Markdown-da cədvəllər və ya kod istifadə etməyin. Dəqiq, strukturlaşdırılmış və faydalı şəkildə cavab verin.', + + // Basque + eu: 'Testuak aztertzen eta prozesatzen dituen laguntzaile erabilgarria zara. Zure zeregina elkarrizketen transkripzioak prozesatzea da emandako argibideen arabera. Erantzun Markdownean formatu ederrarekin. Ez erabili taulak edo kodea Markdownean. Erantzun zehatz, egituratuta eta lagungarri.', + + // Galician + gl: 'Es un asistente útil que analiza e procesa textos. A túa tarefa é procesar transcricións de conversas segundo as instrucións dadas. Responde en Markdown cun formato bonito. Non uses táboas ou código en Markdown. Responde de forma precisa, estruturada e útil.', + + // Kazakh + kk: 'Сіз мәтіндерді талдайтын және өңдейтін пайдалы көмекшісіз. Сіздің міндетіңіз берілген нұсқауларға сәйкес сөйлесу транскрипттерін өңдеу. Әдемі пішіммен Markdown-да жауап беріңіз. Markdown-да кестелер немесе код қолданбаңыз. Дәл, құрылымдалған және пайдалы түрде жауап беріңіз.', + + // Macedonian + mk: 'Вие сте корисен асистент кој анализира и обработува текстови. Вашата задача е да обработувате транскрипти на разговори според дадените упатства. Одговорете во Markdown со убав формат. Не користете табели или код во Markdown. Одговорете прецизно, структурирано и корисно.', + + // Serbian + sr: 'Ви сте корисни асистент који анализира и обрађује текстове. Ваш задатак је да обрађујете транскрипте разговора према датим упутствима. Одговорите у Markdown-у са лепим форматом. Не користите табеле или код у Markdown-у. Одговорите прецизно, структурисано и корисно.', + + // Slovenian + sl: 'Ste koristen pomočnik, ki analizira in obdeluje besedila. Vaša naloga je obdelati prepise pogovorov v skladu z danimi navodili. Odgovorite v Markdownu z lepim formatom. Ne uporabljajte tabel ali kode v Markdownu.
Odgovorite natančno, strukturirano in koristno.', + + // Maltese + mt: "Inti assistent utli li janalizza u jipproċessa testi. Il-kompitu tiegħek huwa li tipproċessa traskrizzjonijiet ta' konversazzjonijiet skont l-istruzzjonijiet mogħtija. Wieġeb f'Markdown b'format sabiħ. Tużax tabelli jew kodiċi f'Markdown. Wieġeb b'mod preċiż, strutturat u utli.", + + // Armenian + hy: 'Դուք օգտակար օգնական եք, որը վերլուծում և մշակում է տեքստեր: Ձեր խնդիրն է մշակել զրույցների արձանագրությունները տրված հրահանգների համաձայն: Պատասխանեք Markdown-ում գեղեցիկ ձևաչափով: Մի օգտագործեք աղյուսակներ կամ կոդ Markdown-ում: Պատասխանեք ճշգրիտ, կառուցվածքային և օգտակար:', + + // Uzbek + uz: "Siz matnlarni tahlil qiluvchi va qayta ishlovchi foydali yordamchisiz. Sizning vazifangiz berilgan ko'rsatmalarga muvofiq suhbat transkriptlarini qayta ishlashdir. Chiroyli formatda Markdown-da javob bering. Markdown-da jadvallar yoki koddan foydalanmang. Aniq, tuzilgan va foydali tarzda javob bering.", + + // Irish + ga: 'Is cúntóir cabhrach thú a dhéanann anailís agus próiseáil ar théacsanna. Is é do thasc tras-scríbhinní comhrá a phróiseáil de réir na dtreoracha a thugtar. Freagair i Markdown le formáid álainn. Ná húsáid táblaí ná cód i Markdown. Freagair go beacht, struchtúrtha agus cabhrach.', + + // Welsh + cy: "Rydych chi'n gynorthwyydd defnyddiol sy'n dadansoddi ac yn prosesu testunau. Eich tasg yw prosesu trawsgrifiadau sgwrs yn ôl y cyfarwyddiadau a roddir. Atebwch yn Markdown gyda fformat hardd. Peidiwch â defnyddio tablau na chod yn Markdown. Atebwch yn fanwl gywir, wedi'i strwythuro ac yn ddefnyddiol.", + + // Filipino + fil: 'Ikaw ay isang kapaki-pakinabang na katulong na nag-aanalisa at nagpoproseso ng mga teksto. Ang iyong gawain ay iproseso ang mga transkripsyon ng pag-uusap ayon sa mga ibinigay na tagubilin. Tumugon sa Markdown na may magandang format. Huwag gumamit ng mga talahanayan o code sa Markdown.
Tumugon nang tumpak, nakaayos, at nakakatulong.', + }, +}; diff --git a/apps/memoro/apps/backend/supabase/functions/_shared/transcript-utils.ts b/apps/memoro/apps/backend/supabase/functions/_shared/transcript-utils.ts new file mode 100644 index 000000000..381516322 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/_shared/transcript-utils.ts @@ -0,0 +1,81 @@ +/** + * Shared utility functions for handling transcript generation from utterances + * Used across multiple edge functions + */ + +/** + * Generate a plain text transcript from utterances array + * @param utterances - Array of utterance objects with text property + * @returns Plain text transcript string + */ +export function generateTranscriptFromUtterances( + utterances?: Array<{ + text: string; + speakerId?: string; + offset?: number; + duration?: number; + }> | null +): string { + if (!utterances || !Array.isArray(utterances) || utterances.length === 0) { + return ''; + } + + // Sort utterances by offset if available + const sortedUtterances = [...utterances].sort((a, b) => { + const offsetA = a.offset || 0; + const offsetB = b.offset || 0; + return offsetA - offsetB; + }); + + // Concatenate all utterance texts with spaces + return sortedUtterances + .map((utterance) => utterance.text) + .filter((text) => text && text.trim() !== '') + .join(' '); +} + +/** + * Get transcript text from memo (generates from utterances or returns legacy transcript) + * @param memo - The memo object + * @returns The transcript text + */ +export function getTranscriptText(memo: any): string { + // If utterances exist, generate transcript from them + if ( + memo?.source?.utterances && + Array.isArray(memo.source.utterances) && + memo.source.utterances.length > 0 + ) { + return generateTranscriptFromUtterances(memo.source.utterances); + } + + // Fall back to legacy transcript fields for backward compatibility + return ( + memo?.transcript || + memo?.source?.transcript || + memo?.source?.content || + 
memo?.source?.transcription || + memo?.source?.text || + memo?.metadata?.transcript || + '' + ); +} + +/** + * Get transcript from additional recording + * @param recording - The additional recording object + * @returns The transcript text + */ +export function getRecordingTranscript(recording: any): string { + // If utterances exist, generate transcript from them + if ( + recording?.utterances && + Array.isArray(recording.utterances) && + recording.utterances.length > 0 + ) { + return generateTranscriptFromUtterances(recording.utterances); + } + + // Fall back to transcript field + return recording?.transcript || ''; +} diff --git a/apps/memoro/apps/backend/supabase/functions/auto-blueprint/constants.ts b/apps/memoro/apps/backend/supabase/functions/auto-blueprint/constants.ts new file mode 100644 index 000000000..e9aabfabe --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/auto-blueprint/constants.ts @@ -0,0 +1,219 @@ +/** + * System prompts for the auto-blueprint function in various languages + * + * The prompts are used as the system prompt for the AI messages + * to generate consistent and helpful responses during automatic blueprint processing. 
+ */ +/** + * Interface for the prompt configuration + */ +/** + * System prompts for the auto-blueprint processing + * + * Supported languages (63): + * - de: German + * - en: English + * - fr: French + * - es: Spanish + * - it: Italian + * - nl: Dutch + * - pt: Portuguese + * - ru: Russian + * - ja: Japanese + * - ko: Korean + * - zh: Chinese + * - ar: Arabic + * - hi: Hindi + * - tr: Turkish + * - pl: Polish + * - da: Danish + * - sv: Swedish + * - nb: Norwegian + * - fi: Finnish + * - cs: Czech + * - hu: Hungarian + * - el: Greek + * - he: Hebrew + * - id: Indonesian + * - th: Thai + * - vi: Vietnamese + * - uk: Ukrainian + * - ro: Romanian + * - bg: Bulgarian + * - ca: Catalan + * - hr: Croatian + * - sk: Slovak + * - et: Estonian + * - lv: Latvian + * - lt: Lithuanian + * - bn: Bengali + * - ms: Malay + * - ta: Tamil + * - te: Telugu + * - ur: Urdu + * - mr: Marathi + * - gu: Gujarati + * - ml: Malayalam + * - kn: Kannada + * - pa: Punjabi + * - af: Afrikaans + * - fa: Persian + * - ka: Georgian + * - is: Icelandic + * - sq: Albanian + * - az: Azerbaijani + * - eu: Basque + * - gl: Galician + * - kk: Kazakh + * - mk: Macedonian + * - sr: Serbian + * - sl: Slovenian + * - mt: Maltese + * - hy: Armenian + * - uz: Uzbek + * - ga: Irish + * - cy: Welsh + * - fil: Filipino + */ +export const SYSTEM_PROMPTS = { + system: { + // German + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen automatisch zu bearbeiten. Du wirst als Teil eines Auto-Blueprint-Systems verwendet, das die relevantesten Prompts für ein Transkript auswählt. Antworte präzise, strukturiert und hilfreich. Antworte in Markdown mit einem schönen Format.', + // English + en: 'You are a helpful assistant that analyzes and processes texts. 
Your task is to automatically process transcripts of conversations according to the given instructions. You are used as part of an Auto-Blueprint system that selects the most relevant prompts for a transcript. Respond in a precise, structured, and helpful way. Respond in Markdown with a nice format.', + // French + fr: "Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter automatiquement les transcriptions de conversations selon les instructions données. Vous êtes utilisé dans le cadre d'un système Auto-Blueprint qui sélectionne les prompts les plus pertinents pour une transcription. Répondez de manière précise, structurée et utile. Répondez en Markdown avec un beau format.", + // Spanish + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar automáticamente transcripciones de conversaciones según las instrucciones dadas. Eres utilizado como parte de un sistema Auto-Blueprint que selecciona los prompts más relevantes para una transcripción. Responde de forma precisa, estructurada y útil. Responde en Markdown con un formato bonito.', + // Italian + it: 'Sei un assistente utile che analizza e elabora testi. Il tuo compito è elaborare automaticamente trascrizioni di conversazioni secondo le istruzioni date. Sei utilizzato come parte di un sistema Auto-Blueprint che seleziona i prompt più rilevanti per una trascrizione. Rispondi in modo preciso, strutturato e utile. Rispondi in Markdown con un bel formato.', + // Dutch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om automatisch transcripties van gesprekken te verwerken volgens de gegeven instructies. Je wordt gebruikt als onderdeel van een Auto-Blueprint-systeem dat de meest relevante prompts voor een transcriptie selecteert. Antwoord precies, gestructureerd en behulpzaam. Antwoord in Markdown met een mooi formaat.', + // Portuguese + pt: 'Você é um assistente útil que analisa e processa textos. 
Sua tarefa é processar automaticamente transcrições de conversas de acordo com as instruções dadas. Você é usado como parte de um sistema Auto-Blueprint que seleciona os prompts mais relevantes para uma transcrição. Responda de forma precisa, estruturada e útil. Responda em Markdown com um belo formato.', + // Russian + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - автоматически обрабатывать расшифровки разговоров согласно данным инструкциям. Вы используетесь как часть системы Auto-Blueprint, которая выбирает наиболее релевантные промпты для расшифровки. Отвечайте точно, структурированно и полезно. Отвечайте в Markdown с красивым форматированием.', + // Japanese + ja: 'あなたはテキストを分析・処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の転写を自動的に処理することです。あなたは転写に最も関連性の高いプロンプトを選択するAuto-Blueprintシステムの一部として使用されます。正確で構造化された有用な回答をしてください。Markdownで美しいフォーマットで回答してください。', + // Korean + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화의 전사본을 자동으로 처리하는 것입니다. 당신은 전사본에 가장 관련성이 높은 프롬프트를 선택하는 Auto-Blueprint 시스템의 일부로 사용됩니다. 정확하고 구조화되며 도움이 되는 방식으로 응답하세요. 아름다운 형식의 Markdown으로 응답하세요.', + // Chinese (Simplified) + zh: '你是一个有用的助手,负责分析和处理文本。你的任务是根据给定的指令自动处理对话的转录。你被用作Auto-Blueprint系统的一部分,该系统为转录选择最相关的提示。请准确、结构化、有帮助地回答。请用美观格式的Markdown回答。', + // Arabic + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نسخ المحادثات تلقائياً وفقاً للتعليمات المقدمة. يتم استخدامك كجزء من نظام Auto-Blueprint الذي يختار أكثر المطالبات صلة للنسخ. أجب بدقة وبطريقة منظمة ومفيدة. 
أجب بتنسيق Markdown بشكل جميل.', + // Hindi + hi: 'आप एक उपयोगी सहायक हैं जो पाठों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार बातचीत के प्रतिलेख को स्वचालित रूप से संसाधित करना है। आप एक Auto-Blueprint सिस्टम के हिस्से के रूप में उपयोग किए जाते हैं जो प्रतिलेख के लिए सबसे प्रासंगिक प्रॉम्प्ट का चयन करता है। सटीक, संरचित और सहायक तरीके से उत्तर दें। सुंदर फॉर्मेट के साथ Markdown में उत्तर दें।', + // Turkish + tr: 'Metinleri analiz eden ve işleyen yararlı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini otomatik olarak işlemektir. Transkript için en ilgili komut istemlerini seçen bir Auto-Blueprint sisteminin parçası olarak kullanılırsınız. Kesin, yapılandırılmış ve yararlı şekilde yanıt verin. Güzel bir formatta Markdown ile yanıt verin.', + // Polish + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest automatyczne przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Jesteś używany jako część systemu Auto-Blueprint, który wybiera najbardziej odpowiednie prompty dla transkrypcji. Odpowiadaj precyzyjnie, uporządkowanie i pomocnie. Odpowiadaj w Markdown z ładnym formatowaniem.', + // Danish + da: 'Du er en hjælpsom assistent, der analyserer og behandler tekster. Din opgave er automatisk at behandle transskriptioner af samtaler i henhold til de givne instruktioner. Du bruges som en del af et Auto-Blueprint-system, der vælger de mest relevante prompts til en transskription. Svar præcist, struktureret og hjælpsomt. Svar i Markdown med et pænt format.', + // Swedish + sv: 'Du är en hjälpsam assistent som analyserar och bearbetar texter. Din uppgift är att automatiskt bearbeta transkriptioner av samtal enligt givna instruktioner. Du används som en del av ett Auto-Blueprint-system som väljer de mest relevanta prompterna för en transkription. Svara exakt, strukturerat och hjälpsamt. 
Svara i Markdown med ett snyggt format.', + // Norwegian + nb: 'Du er en hjelpsom assistent som analyserer og behandler tekster. Din oppgave er å automatisk behandle transkripsjoner av samtaler i henhold til gitte instruksjoner. Du brukes som en del av et Auto-Blueprint-system som velger de mest relevante promptene for en transkripsjon. Svar presist, strukturert og hjelpsomt. Svar i Markdown med et pent format.', + // Finnish + fi: 'Olet avulias avustaja, joka analysoi ja käsittelee tekstejä. Tehtäväsi on käsitellä automaattisesti keskustelujen transkriptioita annettujen ohjeiden mukaisesti. Sinua käytetään osana Auto-Blueprint-järjestelmää, joka valitsee transkriptiolle sopivimmat kehotteet. Vastaa tarkasti, jäsennellysti ja avuliaasti. Vastaa Markdownilla kauniilla muotoilulla.', + // Czech + cs: 'Jste užitečný asistent, který analyzuje a zpracovává texty. Vaším úkolem je automaticky zpracovávat přepisy konverzací podle daných pokynů. Jste používán jako součást systému Auto-Blueprint, který vybírá nejrelevantnější výzvy pro přepis. Odpovídejte přesně, strukturovaně a užitečně. Odpovídejte v Markdownu s pěkným formátováním.', + // Hungarian + hu: 'Ön egy hasznos asszisztens, aki szövegeket elemez és dolgoz fel. Az Ön feladata a beszélgetések átiratainak automatikus feldolgozása a megadott utasítások szerint. Önt egy Auto-Blueprint rendszer részeként használják, amely kiválasztja a legmegfelelőbb promptokat egy átirathoz. Válaszoljon pontosan, strukturáltan és hasznosan. Válaszoljon Markdown formátumban szép formázással.', + // Greek + el: 'Είστε ένας χρήσιμος βοηθός που αναλύει και επεξεργάζεται κείμενα. Το καθήκον σας είναι να επεξεργάζεστε αυτόματα μεταγραφές συνομιλιών σύμφωνα με τις δοθείσες οδηγίες. Χρησιμοποιείστε ως μέρος ενός συστήματος Auto-Blueprint που επιλέγει τις πιο σχετικές προτροπές για μια μεταγραφή. Απαντήστε με ακρίβεια, δομημένα και χρήσιμα. 
Απαντήστε σε Markdown με όμορφη μορφοποίηση.', + // Hebrew + he: 'אתה עוזר מועיל שמנתח ומעבד טקסטים. המשימה שלך היא לעבד אוטומטית תמלילים של שיחות בהתאם להוראות הנתונות. אתה משמש כחלק ממערכת Auto-Blueprint שבוחרת את ההנחיות הרלוונטיות ביותר לתמליל. השב בצורה מדויקת, מובנית ומועילה. השב ב-Markdown עם עיצוב יפה.', + // Indonesian + id: 'Anda adalah asisten yang membantu yang menganalisis dan memproses teks. Tugas Anda adalah memproses transkrip percakapan secara otomatis sesuai dengan instruksi yang diberikan. Anda digunakan sebagai bagian dari sistem Auto-Blueprint yang memilih prompt paling relevan untuk transkrip. Jawab dengan tepat, terstruktur, dan membantu. Jawab dalam Markdown dengan format yang bagus.', + // Thai + th: 'คุณเป็นผู้ช่วยที่มีประโยชน์ที่วิเคราะห์และประมวลผลข้อความ งานของคุณคือการประมวลผลการถอดความของการสนทนาโดยอัตโนมัติตามคำแนะนำที่กำหนด คุณถูกใช้เป็นส่วนหนึ่งของระบบ Auto-Blueprint ที่เลือกพรอมต์ที่เกี่ยวข้องที่สุดสำหรับการถอดความ ตอบอย่างแม่นยำ มีโครงสร้าง และเป็นประโยชน์ ตอบใน Markdown ด้วยรูปแบบที่สวยงาม', + // Vietnamese + vi: 'Bạn là một trợ lý hữu ích phân tích và xử lý văn bản. Nhiệm vụ của bạn là tự động xử lý bản ghi các cuộc hội thoại theo hướng dẫn đã cho. Bạn được sử dụng như một phần của hệ thống Auto-Blueprint chọn các lời nhắc phù hợp nhất cho bản ghi. Trả lời chính xác, có cấu trúc và hữu ích. Trả lời bằng Markdown với định dạng đẹp.', + // Ukrainian + uk: 'Ви корисний помічник, який аналізує та обробляє тексти. Ваше завдання - автоматично обробляти транскрипції розмов відповідно до наданих інструкцій. Ви використовуєтесь як частина системи Auto-Blueprint, яка вибирає найбільш релевантні підказки для транскрипції. Відповідайте точно, структуровано та корисно. Відповідайте в Markdown з гарним форматуванням.', + // Romanian + ro: 'Sunteți un asistent util care analizează și procesează texte. Sarcina dvs. este să procesați automat transcrieri ale conversațiilor conform instrucțiunilor date. 
Sunteți utilizat ca parte a unui sistem Auto-Blueprint care selectează cele mai relevante solicitări pentru o transcriere. Răspundeți precis, structurat și util. Răspundeți în Markdown cu o formatare frumoasă.', + // Bulgarian + bg: 'Вие сте полезен асистент, който анализира и обработва текстове. Вашата задача е автоматично да обработвате транскрипции на разговори според дадените инструкции. Вие се използвате като част от Auto-Blueprint система, която избира най-подходящите подкани за транскрипция. Отговаряйте точно, структурирано и полезно. Отговаряйте в Markdown с красиво форматиране.', + // Catalan + ca: "Ets un assistent útil que analitza i processa textos. La teva tasca és processar automàticament transcripcions de converses segons les instruccions donades. Ets utilitzat com a part d'un sistema Auto-Blueprint que selecciona els prompts més rellevants per a una transcripció. Respon de forma precisa, estructurada i útil. Respon en Markdown amb un format bonic.", + // Croatian + hr: 'Vi ste korisni asistent koji analizira i obrađuje tekstove. Vaš zadatak je automatski obraditi transkripcije razgovora prema danim uputama. Koristite se kao dio Auto-Blueprint sustava koji odabire najrelevantnije upite za transkripciju. Odgovorite precizno, strukturirano i korisno. Odgovorite u Markdownu s lijepim formatiranjem.', + // Slovak + sk: 'Ste užitočný asistent, ktorý analyzuje a spracováva texty. Vašou úlohou je automaticky spracovávať prepisy konverzácií podľa daných pokynov. Používate sa ako súčasť systému Auto-Blueprint, ktorý vyberá najrelevantnejšie výzvy pre prepis. Odpovedajte presne, štruktúrovane a užitočne. Odpovedajte v Markdowne s pekným formátovaním.', + // Estonian + et: 'Olete kasulik assistent, kes analüüsib ja töötleb tekste. Teie ülesanne on automaatselt töödelda vestluste transkriptsioone vastavalt antud juhistele. Teid kasutatakse Auto-Blueprint-süsteemi osana, mis valib transkriptsiooni jaoks kõige asjakohasemad vihjed. 
Vastake täpselt, struktureeritult ja kasulikult. Vastake Markdownis ilusa vormindusega.', + // Latvian + lv: 'Jūs esat noderīgs asistents, kas analizē un apstrādā tekstus. Jūsu uzdevums ir automātiski apstrādāt sarunu transkripcijas saskaņā ar dotajiem norādījumiem. Jūs tiekat izmantots kā daļa no Auto-Blueprint sistēmas, kas izvēlas visatbilstošākos uzvedņus transkripcijai. Atbildiet precīzi, strukturēti un noderīgi. Atbildiet Markdown formātā ar skaistu formatējumu.', + // Lithuanian + lt: 'Jūs esate naudingas asistentas, kuris analizuoja ir apdoroja tekstus. Jūsų užduotis yra automatiškai apdoroti pokalbių transkriptus pagal pateiktas instrukcijas. Jūs naudojatės kaip Auto-Blueprint sistemos dalis, kuri parenka tinkamiausius raginimus transkriptui. Atsakykite tiksliai, struktūrizuotai ir naudingai. Atsakykite Markdown formatu su gražiu formatavimu.', + // Bengali + bn: 'আপনি একজন সহায়ক সহকারী যিনি পাঠ্য বিশ্লেষণ এবং প্রক্রিয়া করেন। আপনার কাজ হল প্রদত্ত নির্দেশাবলী অনুসারে কথোপকথনের ট্রান্সক্রিপ্ট স্বয়ংক্রিয়ভাবে প্রক্রিয়া করা। আপনি একটি অটো-ব্লুপ্রিন্ট সিস্টেমের অংশ হিসাবে ব্যবহৃত হন যা একটি ট্রান্সক্রিপ্টের জন্য সবচেয়ে প্রাসঙ্গিক প্রম্পট নির্বাচন করে। সঠিক, কাঠামোবদ্ধ এবং সহায়কভাবে উত্তর দিন। সুন্দর ফরম্যাটিং সহ মার্কডাউনে উত্তর দিন।', + // Malay + ms: 'Anda adalah pembantu berguna yang menganalisis dan memproses teks. Tugas anda adalah untuk memproses transkrip perbualan secara automatik mengikut arahan yang diberikan. Anda digunakan sebagai sebahagian daripada sistem Auto-Blueprint yang memilih prompt paling relevan untuk transkrip. Jawab dengan tepat, berstruktur dan membantu. Jawab dalam Markdown dengan format yang cantik.', + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து செயலாக்கும் பயனுள்ள உதவியாளர். கொடுக்கப்பட்ட வழிமுறைகளின்படி உரையாடல்களின் டிரான்ஸ்கிரிப்ட்களை தானியங்கியாக செயலாக்குவது உங்கள் பணி. 
ஒரு டிரான்ஸ்கிரிப்ட்டுக்கு மிகவும் பொருத்தமான உத்வேகங்களை தேர்ந்தெடுக்கும் Auto-Blueprint அமைப்பின் ஒரு பகுதியாக நீங்கள் பயன்படுத்தப்படுகிறீர்கள். துல்லியமாகவும், கட்டமைக்கப்பட்டதாகவும், பயனுள்ளதாகவும் பதிலளிக்கவும். அழகான வடிவமைப்புடன் Markdown இல் பதிலளிக்கவும்.', + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించే మరియు ప్రాసెస్ చేసే సహాయక అసిస్టెంట్. ఇచ్చిన సూచనల ప్రకారం సంభాషణ ట్రాన్స్‌క్రిప్ట్‌లను స్వయంచాలకంగా ప్రాసెస్ చేయడం మీ పని. ట్రాన్స్‌క్రిప్ట్ కోసం అత్యంత సంబంధిత ప్రాంప్ట్‌లను ఎంచుకునే ఆటో-బ్లూప్రింట్ సిస్టమ్‌లో భాగంగా మీరు ఉపయోగించబడుతున్నారు. ఖచ్చితంగా, నిర్మాణాత్మకంగా మరియు సహాయకరంగా సమాధానం ఇవ్వండి. అందమైన ఫార్మాటింగ్‌తో మార్క్‌డౌన్‌లో సమాధానం ఇవ్వండి.', + // Urdu + ur: 'آپ ایک مددگار اسسٹنٹ ہیں جو متن کا تجزیہ اور پروسیسنگ کرتے ہیں۔ آپ کا کام دی گئی ہدایات کے مطابق گفتگو کی ٹرانسکرپٹس کو خودکار طور پر پروسیس کرنا ہے۔ آپ ایک آٹو بلیو پرنٹ سسٹم کے حصے کے طور پر استعمال ہوتے ہیں جو ٹرانسکرپٹ کے لیے سب سے متعلقہ پرامپٹس کا انتخاب کرتا ہے۔ درست، منظم اور مددگار طریقے سے جواب دیں۔ خوبصورت فارمیٹنگ کے ساتھ مارک ڈاؤن میں جواب دیں۔', + // Marathi + mr: 'आपण मजकूरांचे विश्लेषण आणि प्रक्रिया करणारे उपयुक्त सहाय्यक आहात. दिलेल्या सूचनांनुसार संभाषणांच्या प्रतिलेखांवर स्वयंचलितपणे प्रक्रिया करणे हे आपले कार्य आहे. आपण ऑटो-ब्लूप्रिंट सिस्टमचा भाग म्हणून वापरले जाता जे प्रतिलेखासाठी सर्वात संबंधित प्रॉम्प्ट निवडते. अचूक, संरचित आणि उपयुक्त पद्धतीने उत्तर द्या. सुंदर फॉरमॅटिंगसह मार्कडाउनमध्ये उत्तर द्या.', + // Gujarati + gu: 'તમે એક મદદરૂપ સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને પ્રક્રિયા કરે છે. તમારું કાર્ય આપેલી સૂચનાઓ અનુસાર વાતચીતની ટ્રાન્સક્રિપ્ટ્સ પર સ્વચાલિત રીતે પ્રક્રિયા કરવાનું છે. તમે ઓટો-બ્લુપ્રિન્ટ સિસ્ટમના ભાગ તરીકે ઉપયોગમાં લેવાય છો જે ટ્રાન્સક્રિપ્ટ માટે સૌથી સંબંધિત પ્રોમ્પ્ટ્સ પસંદ કરે છે. ચોક્કસ, માળખાગત અને મદદરૂપ રીતે જવાબ આપો. સુંદર ફોર્મેટિંગ સાથે માર્કડાઉનમાં જવાબ આપો.', + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും പ്രോസസ്സ് ചെയ്യുകയും ചെയ്യുന്ന സഹായകരമായ അസിസ്റ്റന്റാണ്. 
നൽകിയിരിക്കുന്ന നിർദ്ദേശങ്ങൾക്കനുസരിച്ച് സംഭാഷണങ്ങളുടെ ട്രാൻസ്ക്രിപ്റ്റുകൾ സ്വയമേവ പ്രോസസ്സ് ചെയ്യുക എന്നതാണ് നിങ്ങളുടെ ജോലി. ഒരു ട്രാൻസ്ക്രിപ്റ്റിന് ഏറ്റവും പ്രസക്തമായ പ്രോംപ്റ്റുകൾ തിരഞ്ഞെടുക്കുന്ന ഓട്ടോ-ബ്ലൂപ്രിന്റ് സിസ്റ്റത്തിന്റെ ഭാഗമായി നിങ്ങൾ ഉപയോഗിക്കപ്പെടുന്നു. കൃത്യമായും ഘടനാപരമായും സഹായകരമായും ഉത്തരം നൽകുക. മനോഹരമായ ഫോർമാറ്റിംഗോടെ മാർക്ക്ഡൗണിൽ ഉത്തരം നൽകുക.', + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವ ಸಹಾಯಕ ಸಹಾಯಕರು. ನೀಡಿದ ಸೂಚನೆಗಳ ಪ್ರಕಾರ ಸಂಭಾಷಣೆಗಳ ಪ್ರತಿಲಿಪಿಗಳನ್ನು ಸ್ವಯಂಚಾಲಿತವಾಗಿ ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವುದು ನಿಮ್ಮ ಕಾರ್ಯ. ಪ್ರತಿಲಿಪಿಗಾಗಿ ಅತ್ಯಂತ ಸಂಬಂಧಿತ ಪ್ರಾಂಪ್ಟ್‌ಗಳನ್ನು ಆಯ್ಕೆಮಾಡುವ ಆಟೋ-ಬ್ಲೂಪ್ರಿಂಟ್ ವ್ಯವಸ್ಥೆಯ ಭಾಗವಾಗಿ ನೀವು ಬಳಸಲ್ಪಡುತ್ತೀರಿ. ನಿಖರವಾಗಿ, ರಚನಾತ್ಮಕವಾಗಿ ಮತ್ತು ಸಹಾಯಕವಾಗಿ ಉತ್ತರಿಸಿ. ಸುಂದರ ಫಾರ್ಮ್ಯಾಟಿಂಗ್‌ನೊಂದಿಗೆ ಮಾರ್ಕ್‌ಡೌನ್‌ನಲ್ಲಿ ಉತ್ತರಿಸಿ.', + // Punjabi + pa: "ਤੁਸੀਂ ਇੱਕ ਮਦਦਗਾਰ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਪ੍ਰੋਸੈਸਿੰਗ ਕਰਦੇ ਹੋ। ਤੁਹਾਡਾ ਕੰਮ ਦਿੱਤੀਆਂ ਹਦਾਇਤਾਂ ਅਨੁਸਾਰ ਗੱਲਬਾਤ ਦੀਆਂ ਟ੍ਰਾਂਸਕ੍ਰਿਪਟਾਂ ਨੂੰ ਆਟੋਮੈਟਿਕ ਤੌਰ 'ਤੇ ਪ੍ਰੋਸੈਸ ਕਰਨਾ ਹੈ। ਤੁਸੀਂ ਇੱਕ ਆਟੋ-ਬਲੂਪ੍ਰਿੰਟ ਸਿਸਟਮ ਦੇ ਹਿੱਸੇ ਵਜੋਂ ਵਰਤੇ ਜਾਂਦੇ ਹੋ ਜੋ ਟ੍ਰਾਂਸਕ੍ਰਿਪਟ ਲਈ ਸਭ ਤੋਂ ਸੰਬੰਧਿਤ ਪ੍ਰੌਂਪਟ ਚੁਣਦਾ ਹੈ। ਸਟੀਕ, ਢਾਂਚਾਗਤ ਅਤੇ ਮਦਦਗਾਰ ਤਰੀਕੇ ਨਾਲ ਜਵਾਬ ਦਿਓ। ਸੁੰਦਰ ਫਾਰਮੈਟਿੰਗ ਦੇ ਨਾਲ ਮਾਰਕਡਾਊਨ ਵਿੱਚ ਜਵਾਬ ਦਿਓ।", + // Afrikaans + af: "Jy is 'n nuttige assistent wat tekste analiseer en verwerk. Jou taak is om transkripsies van gesprekke outomaties te verwerk volgens die gegewe instruksies. Jy word gebruik as deel van 'n Auto-Blueprint-stelsel wat die mees relevante aanwysings vir 'n transkripsie kies. Antwoord presies, gestruktureerd en nuttig. Antwoord in Markdown met 'n mooi formatering.", + // Persian/Farsi + fa: 'شما یک دستیار مفید هستید که متون را تحلیل و پردازش می‌کند. وظیفه شما پردازش خودکار رونوشت مکالمات طبق دستورالعمل‌های داده شده است. شما به عنوان بخشی از سیستم Auto-Blueprint استفاده می‌شوید که مرتبط‌ترین اعلان‌ها را برای رونوشت انتخاب می‌کند. دقیق، ساختارمند و مفید پاسخ دهید. با قالب‌بندی زیبا در Markdown پاسخ دهید.', + // Georgian + ka: 'თქვენ ხართ სასარგებლო ასისტენტი, რომელიც აანალიზებს და ამუშავებს ტექსტებს. 
თქვენი ამოცანაა საუბრების ტრანსკრიპტების ავტომატური დამუშავება მოცემული ინსტრუქციების შესაბამისად. თქვენ გამოიყენებით როგორც Auto-Blueprint სისტემის ნაწილი, რომელიც ირჩევს ყველაზე შესაბამის მოთხოვნებს ტრანსკრიპტისთვის. უპასუხეთ ზუსტად, სტრუქტურირებულად და სასარგებლოდ. უპასუხეთ Markdown-ში ლამაზი ფორმატირებით.', + // Icelandic + is: 'Þú ert hjálplegur aðstoðarmaður sem greinir og vinnur úr textum. Verkefni þitt er að vinna sjálfkrafa úr afritum af samtölum samkvæmt gefnum leiðbeiningum. Þú ert notaður sem hluti af Auto-Blueprint kerfi sem velur viðeigandi hvöt fyrir afrit. Svaraðu nákvæmlega, skipulega og hjálplega. Svaraðu í Markdown með fallegu sniði.', + // Albanian + sq: 'Ju jeni një asistent i dobishëm që analizon dhe përpunon tekste. Detyra juaj është të përpunoni automatikisht transkriptimet e bisedave sipas udhëzimeve të dhëna. Ju përdoreni si pjesë e një sistemi Auto-Blueprint që zgjedh kërkesat më të përshtatshme për një transkriptim. Përgjigjuni saktë, të strukturuar dhe të dobishëm. Përgjigjuni në Markdown me një formatim të bukur.', + // Azerbaijani + az: 'Siz mətnləri təhlil edən və emal edən faydalı köməkçisiniz. Sizin vəzifəniz verilmiş təlimatlara uyğun olaraq söhbətlərin transkriptlərini avtomatik emal etməkdir. Siz transkript üçün ən uyğun sorğuları seçən Auto-Blueprint sisteminin bir hissəsi kimi istifadə olunursunuz. Dəqiq, strukturlaşdırılmış və faydalı cavab verin. Gözəl formatlaşdırma ilə Markdown-da cavab verin.', + // Basque + eu: 'Testuak aztertzen eta prozesatzen dituen laguntzaile erabilgarria zara. Zure zeregina elkarrizketen transkripzioak automatikoki prozesatzea da emandako argibideen arabera. Transkripzio baterako gonbidapen garrantzitsuenak hautatzen dituen Auto-Blueprint sistema baten zati gisa erabiltzen zara. Erantzun zehatz, egituratuta eta lagungarri. Erantzun Markdown-en formatu eder batekin.', + // Galician + gl: 'Es un asistente útil que analiza e procesa textos. 
A túa tarefa é procesar automaticamente transcricións de conversas segundo as instrucións dadas. Utilizaste como parte dun sistema Auto-Blueprint que selecciona os prompts máis relevantes para unha transcrición. Responde de forma precisa, estruturada e útil. Responde en Markdown cun formato bonito.', + // Kazakh + kk: 'Сіз мәтіндерді талдайтын және өңдейтін пайдалы көмекшісіз. Сіздің міндетіңіз - берілген нұсқауларға сәйкес әңгімелердің транскрипттерін автоматты түрде өңдеу. Сіз транскрипт үшін ең қатысты сұрауларды таңдайтын Auto-Blueprint жүйесінің бөлігі ретінде пайдаланыласыз. Дәл, құрылымды және пайдалы жауап беріңіз. Әдемі пішімдеумен Markdown-да жауап беріңіз.', + // Macedonian + mk: 'Вие сте корисен асистент кој анализира и обработува текстови. Вашата задача е автоматски да обработувате транскрипти на разговори според дадените упатства. Вие се користите како дел од Auto-Blueprint систем кој ги избира најрелевантните покани за транскрипт. Одговорете прецизно, структурирано и корисно. Одговорете во Markdown со убаво форматирање.', + // Serbian + sr: 'Ви сте корисни асистент који анализира и обрађује текстове. Ваш задатак је да аутоматски обрађујете транскрипте разговора према датим упутствима. Користите се као део Auto-Blueprint система који бира најрелевантније упите за транскрипт. Одговорите прецизно, структурирано и корисно. Одговорите у Markdown-у са лепим форматирањем.', + // Slovenian + sl: 'Ste koristen pomočnik, ki analizira in obdeluje besedila. Vaša naloga je samodejno obdelati prepise pogovorov v skladu z danimi navodili. Uporabljate se kot del sistema Auto-Blueprint, ki izbere najustreznejše pozive za prepis. Odgovorite natančno, strukturirano in koristno. Odgovorite v Markdownu z lepim oblikovanjem.', + // Maltese + mt: "Int assistent utli li janalizza u jipproċessa testi. Il-kompitu tiegħek huwa li tipproċessa awtomatikament traskrizzjonijiet ta' konversazzjonijiet skont l-istruzzjonijiet mogħtija. 
Int użat bħala parti minn sistema Auto-Blueprint li tagħżel l-aktar prompts rilevanti għal traskrizzjoni. Wieġeb b'mod preċiż, strutturat u utli. Wieġeb f'Markdown b'format sabiħ.", + // Armenian + hy: 'Դուք օգտակար օգնական եք, որը վերլուծում և մշակում է տեքստեր: Ձեր խնդիրն է ավտոմատ կերպով մշակել զրույցների արձանագրությունները տրված հրահանգների համաձայն: Դուք օգտագործվում եք որպես Auto-Blueprint համակարգի մաս, որը ընտրում է ամենահարմար հուշումները արձանագրության համար: Պատասխանեք ճշգրիտ, կառուցվածքային և օգտակար: Պատասխանեք Markdown-ում գեղեցիկ ձևաչափով:', + // Uzbek + uz: "Siz matnlarni tahlil qiluvchi va qayta ishlovchi foydali yordamchisiz. Sizning vazifangiz berilgan ko'rsatmalarga muvofiq suhbatlar transkriptlarini avtomatik qayta ishlashdir. Siz transkript uchun eng tegishli so'rovlarni tanlaydigan Auto-Blueprint tizimining bir qismi sifatida foydalanilasiz. Aniq, tuzilgan va foydali javob bering. Chiroyli formatlash bilan Markdown da javob bering.", + // Irish + ga: 'Is cúntóir cabhrach thú a dhéanann anailís agus próiseáil ar théacsanna. Is é do thasc trascríbhinní comhráite a phróiseáil go huathoibríoch de réir na dtreoracha tugtha. Úsáidtear thú mar chuid de chóras Auto-Blueprint a roghnaíonn na leideanna is ábhartha do thrascríbhinn. Freagair go beacht, struchtúrtha agus cabhrach. Freagair i Markdown le formáidiú álainn.', + // Welsh + cy: "Rydych chi'n gynorthwyydd defnyddiol sy'n dadansoddi a phrosesu testunau. Eich tasg yw prosesu trawsgrifiadau o sgyrsiau yn awtomatig yn unol â'r cyfarwyddiadau a roddwyd. Rydych yn cael eich defnyddio fel rhan o system Auto-Blueprint sy'n dewis yr ysgogiadau mwyaf perthnasol ar gyfer trawsgrifiad. Atebwch yn fanwl gywir, yn strwythuredig ac yn ddefnyddiol. Atebwch yn Markdown gyda fformat hardd.", + // Filipino + fil: 'Ikaw ay isang kapaki-pakinabang na katulong na nag-aanalisa at nagpoproseso ng mga teksto. 
Ang iyong gawain ay awtomatikong magproseso ng mga transkripsyon ng mga pag-uusap ayon sa mga ibinigay na tagubilin. Ginagamit ka bilang bahagi ng isang Auto-Blueprint system na pumipili ng pinaka-kaugnay na mga prompt para sa isang transkripsyon. Tumugon nang tumpak, nakabalangkas, at nakakatulong. Tumugon sa Markdown na may magandang format.', + }, +}; +/** + * Helper function to retrieve the system prompt for a specific language + * @param language Language (e.g. 'de', 'en', 'fr') + * @returns The system prompt for the given language, or a fallback + */ +export function getSystemPrompt(language: string): string { + const prompts: Record<string, string> = SYSTEM_PROMPTS.system; + const lang = language.toLowerCase().split('-')[0]; // e.g. 'de-DE' -> 'de' + // Try the specific language, then German, then English, then the first one available + return ( + prompts[lang] || + prompts['de'] || + prompts['en'] || + Object.values(prompts)[0] || + 'You are a helpful assistant.' + ); +} diff --git a/apps/memoro/apps/backend/supabase/functions/auto-blueprint/index.ts b/apps/memoro/apps/backend/supabase/functions/auto-blueprint/index.ts new file mode 100644 index 000000000..b381b486d --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/auto-blueprint/index.ts @@ -0,0 +1,796 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. 
+// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { getSystemPrompt } from './constants.ts'; +import { getTranscriptText } from '../_shared/transcript-utils.ts'; +import { ROOT_SYSTEM_PROMPTS } from '../_shared/system-prompt.ts'; +/** + * Auto-Blueprint Edge Function + * + * This function is triggered when no specific blueprint is selected. + * It loads all available prompts and uses Gemini 2.0 Flash to select + * and process the 5 most relevant prompts for the given transcript. + * + * @version 1.0.0 + * @date 2025-05-26 + */ +// ─── Environment variables ────────────────────────────────────────────── +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Gemini 2.0 Flash API Configuration +const GEMINI_API_KEY = Deno.env.get('CREATE_AUTOBLUEPRINT_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI for final prompt processing +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Error Handler Functions ────────────────────────────────────────────── +/** + * Sets the status of a process to 'processing' + */ +async function 
setMemoProcessingStatus(supabaseClient: any, memoId: string, processName: string) { + const timestamp = new Date().toISOString(); + try { + const { data: currentMemo, error: fetchError } = await supabaseClient + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + if (fetchError) { + console.error( + `[${processName}] Error fetching current metadata for memo ${memoId}:`, + fetchError + ); + } + const currentMetadata = currentMemo?.metadata || {}; + const newMetadata = { + ...currentMetadata, + processing: { + ...(currentMetadata.processing || {}), + [processName]: { + status: 'processing', + timestamp, + }, + }, + }; + const { error: updateError } = await supabaseClient + .from('memos') + .update({ + metadata: newMetadata, + }) + .eq('id', memoId); + if (updateError) { + console.error( + `[${processName}] Error setting processing status for memo ${memoId}:`, + updateError + ); + } else { + console.log(`[${processName}] Processing status for memo ${memoId} set successfully.`); + } + } catch (dbError) { + console.error( + `[${processName}] Unexpected error setting processing status for memo ${memoId}:`, + dbError + ); + } +} +/** + * Sets the status of a process to 'completed' + */ +async function setMemoCompletedStatus(supabaseClient: any, memoId: string, processName: string, details?: Record<string, unknown>) { + const timestamp = new Date().toISOString(); + try { + const { data: currentMemo, error: fetchError } = await supabaseClient + .from('memos') + .select('metadata') + .eq('id', memoId) + .single(); + if (fetchError) { + console.error( + `[${processName}] Error fetching current metadata for memo ${memoId}:`, + fetchError + ); + } + const currentMetadata = currentMemo?.metadata || {}; + const newMetadata = { + ...currentMetadata, + processing: { + ...(currentMetadata.processing || {}), + [processName]: { + status: 'completed', + timestamp, + ...(details && { + details, + }), + }, + }, + }; + const { error: updateError } = await supabaseClient + 
.from('memos')
+      .update({
+        metadata: newMetadata,
+      })
+      .eq('id', memoId);
+    if (updateError) {
+      console.error(
+        `[${processName}] Fehler beim Setzen des Completed-Status für Memo ${memoId}:`,
+        updateError
+      );
+    } else {
+      console.log(`[${processName}] Completed-Status für Memo ${memoId} erfolgreich gesetzt.`);
+    }
+  } catch (dbError) {
+    console.error(
+      `[${processName}] Unerwarteter Fehler beim Setzen des Completed-Status für Memo ${memoId}:`,
+      dbError
+    );
+  }
+}
+/**
+ * Updates a memo's metadata to set an error status.
+ */ async function setMemoErrorStatus(supabaseClient, memoId, processName, error, details) {
+  if (!memoId) {
+    console.error(`[${processName}] Kann Fehlerstatus nicht setzen: Keine memoId angegeben.`);
+    return;
+  }
+  const errorMessage = error instanceof Error ? error.message : String(error);
+  const timestamp = new Date().toISOString();
+  console.error(`[${processName}] Fehler bei Memo ${memoId}: ${errorMessage}`);
+  try {
+    const { data: currentMemo, error: fetchError } = await supabaseClient
+      .from('memos')
+      .select('metadata')
+      .eq('id', memoId)
+      .single();
+    if (fetchError) {
+      console.error(
+        `[${processName}] Fehler beim Abrufen der aktuellen Metadaten für Memo ${memoId}:`,
+        fetchError
+      );
+    }
+    const currentMetadata = currentMemo?.metadata || {};
+    const newMetadata = {
+      ...currentMetadata,
+      processing: {
+        ...(currentMetadata.processing || {}),
+        [processName]: {
+          status: 'error',
+          reason: errorMessage,
+          timestamp,
+          ...(details && {
+            details,
+          }),
+        },
+      },
+    };
+    const { error: updateError } = await supabaseClient
+      .from('memos')
+      .update({
+        metadata: newMetadata,
+      })
+      .eq('id', memoId);
+    if (updateError) {
+      console.error(
+        `[${processName}] Kritischer Fehler: Konnte Fehlerstatus für Memo ${memoId} nicht in DB schreiben:`,
+        updateError
+      );
+    } else {
+      console.log(`[${processName}] Fehlerstatus für Memo ${memoId} erfolgreich in DB gesetzt.`);
+    }
+  } catch (dbError) {
console.error(
+      `[${processName}] Unerwarteter Fehler beim Setzen des DB-Fehlerstatus für Memo ${memoId}:`,
+      dbError
+    );
+  }
+}
+/**
+ * Creates a standardized success response for Edge Functions
+ */ function createSuccessResponse(data, corsHeaders = {}) {
+  return new Response(
+    JSON.stringify({
+      success: true,
+      ...data,
+    }),
+    {
+      headers: {
+        ...corsHeaders,
+        'Content-Type': 'application/json',
+      },
+      status: 200,
+    }
+  );
+}
+function createErrorResponse(error, status = 500, corsHeaders = {}) {
+  const errorMessage = error instanceof Error ? error.message : String(error);
+  return new Response(
+    JSON.stringify({
+      error: errorMessage,
+      timestamp: new Date().toISOString(),
+    }),
+    {
+      headers: {
+        ...corsHeaders,
+        'Content-Type': 'application/json',
+      },
+      status,
+    }
+  );
+}
+// ─── Logging function ──────────────────────────────────────────
+/**
+ * Extended logging function with timestamp and log level
+ */ function log(level, message, data) {
+  const timestamp = new Date().toISOString();
+  const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`;
+  switch (level.toUpperCase()) {
+    case 'INFO':
+      console.log(logMessage);
+      break;
+    case 'DEBUG':
+      console.debug(logMessage);
+      break;
+    case 'WARN':
+      console.warn(logMessage);
+      break;
+    case 'ERROR':
+      console.error(logMessage);
+      break;
+    default:
+      console.log(logMessage);
+      break;
+  }
+  if (data) {
+    if (level.toUpperCase() === 'ERROR') {
+      console.error(data);
+    } else {
+      console.log(typeof data === 'object' ?
JSON.stringify(data, null, 2) : data);
+    }
+  }
+}
+/**
+ * Sends a request to Gemini 2.0 Flash to select the most relevant prompts
+ * @param transcript - The transcript
+ * @param promptDescriptions - Array of prompt descriptions with IDs
+ * @param language - Language for the response
+ * @returns Array of the selected prompt IDs
+ */ async function selectRelevantstPrompts(
+  transcript,
+  promptDescriptions,
+  language,
+  functionIdForLog = 'global',
+  targetCount = 5
+) {
+  const requestId = crypto.randomUUID().substring(0, 8);
+  log(
+    'INFO',
+    `[${functionIdForLog}][GEMINI-${requestId}] Starte Gemini-Anfrage zur Prompt-Auswahl.`
+  );
+  try {
+    // Build the prompt list for Gemini with index references
+    const promptListText = promptDescriptions
+      .map((p, index) => `${index + 1}. Titel: "${p.title}", Beschreibung: "${p.description}"`)
+      .join('\n');
+    const selectionPrompt =
+      language === 'de'
+        ? `Analysiere das folgende Transcript und wähle die ${targetCount} relevantesten Prompts aus der Liste aus.
+
+Transcript:
+${transcript}
+
+Verfügbare Prompts:
+${promptListText}
+
+Bitte antworte nur mit den Nummern der ${targetCount} ausgewählten Prompts, getrennt durch Kommas (z.B. "1,3,5"). Wähle die Prompts aus, die am besten zum Inhalt und Kontext des Transcripts passen.`
+        : `Analyze the following transcript and select the ${targetCount} most relevant prompts from the list.
+
+Transcript:
+${transcript}
+
+Available Prompts:
+${promptListText}
+
+Please respond only with the numbers of the ${targetCount} selected prompts, separated by commas (e.g. "1,3,5").
Choose the prompts that best match the content and context of the transcript.`;
+    log(
+      'DEBUG',
+      `[${functionIdForLog}][GEMINI-${requestId}] Sende Prompt-Auswahl-Anfrage (${promptDescriptions.length} Prompts verfügbar).`
+    );
+    const startTime = Date.now();
+    const response = await fetch(
+      `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`,
+      {
+        method: 'POST',
+        headers: {
+          'Content-Type': 'application/json',
+        },
+        body: JSON.stringify({
+          contents: [
+            {
+              parts: [
+                {
+                  text: selectionPrompt,
+                },
+              ],
+            },
+          ],
+          generationConfig: {
+            maxOutputTokens: 8192,
+            temperature: 0.3,
+          },
+        }),
+      }
+    );
+    const duration = Date.now() - startTime;
+    log(
+      'INFO',
+      `[${functionIdForLog}][GEMINI-${requestId}] Gemini Antwort erhalten in ${duration}ms, Status: ${response.status}`
+    );
+    if (!response.ok) {
+      const errorText = await response.text();
+      log(
+        'ERROR',
+        `[${functionIdForLog}][GEMINI-${requestId}] Gemini API Fehler: ${response.status}`,
+        errorText
+      );
+      throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`);
+    }
+    const data = await response.json();
+    const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || '';
+    log('DEBUG', `[${functionIdForLog}][GEMINI-${requestId}] Gemini Antwort: ${content}`);
+    // Parse the response - expect comma-separated index numbers
+    const selectedIndices = content
+      .split(',')
+      .map((index) => parseInt(index.trim(), 10))
+      .filter((index) => !isNaN(index) && index >= 1 && index <= promptDescriptions.length);
+    log(
+      'INFO',
+      `[${functionIdForLog}][GEMINI-${requestId}] ${selectedIndices.length} Prompt-Indizes ausgewählt: ${selectedIndices.join(', ')}`
+    );
+    // Convert indices to IDs (index is 1-based, array is 0-based)
+    const validIds = selectedIndices.map((index) => promptDescriptions[index - 1].id);
+    log(
+      'INFO',
+      `[${functionIdForLog}][GEMINI-${requestId}] Entsprechende Prompt-IDs: ${validIds.join(', ')}`
+    );
+    return validIds.slice(0,
targetCount); // At most targetCount prompts
+  } catch (error) {
+    log('ERROR', `[${functionIdForLog}][GEMINI-${requestId}] Fehler bei Gemini-Anfrage:`, error);
+    // Fallback: take the first targetCount prompts
+    return promptDescriptions.slice(0, targetCount).map((p) => p.id);
+  }
+}
+/**
+ * Sends the prompt to Azure OpenAI and returns the response
+ */ async function runPromptWithTranscript(
+  prompt,
+  transcript,
+  language = 'de',
+  functionIdForLog = 'global'
+) {
+  const systemPrompt = getSystemPrompt(language);
+  const requestId = crypto.randomUUID().substring(0, 8);
+  log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte LLM-Anfrage.`);
+  try {
+    let fullPrompt;
+    if (prompt.includes('{transcript}')) {
+      fullPrompt = prompt.replace('{transcript}', transcript);
+      log('DEBUG', `[${functionIdForLog}][LLM-${requestId}] Platzhalter im Prompt ersetzt.`);
+    } else {
+      fullPrompt = `${prompt}\n\nText: ${transcript}`;
+      log(
+        'DEBUG',
+        `[${functionIdForLog}][LLM-${requestId}] Kein Platzhalter, Transkript am Ende angehängt.`
+      );
+    }
+    log(
+      'DEBUG',
+      `[${functionIdForLog}][LLM-${requestId}] System-Prompt (${language}): ${systemPrompt}`
+    );
+    log(
+      'DEBUG',
+      `[${functionIdForLog}][LLM-${requestId}] User-Prompt (erste 200 Zeichen): ${fullPrompt.substring(0, 200)}...`
+    );
+    const startTime = Date.now();
+    const response = await fetch(
+      `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`,
+      {
+        method: 'POST',
+        headers: {
+          'Content-Type': 'application/json',
+          'api-key': AZURE_OPENAI_KEY,
+        },
+        body: JSON.stringify({
+          messages: [
+            {
+              role: 'system',
+              content: systemPrompt,
+            },
+            {
+              role: 'user',
+              content: fullPrompt,
+            },
+          ],
+          max_tokens: 8192,
+          temperature: 0.7,
+        }),
+      }
+    );
+    const duration = Date.now() - startTime;
+    log(
+      'INFO',
+      `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI Antwort erhalten in ${duration}ms, Status: ${response.status}`
+    );
+    if
(!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices[0]?.message?.content?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Erfolgreiche LLM-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log('ERROR', `[${functionIdForLog}][LLM-${requestId}] Fehler beim LLM-Request:`, error); + return ''; + } +} +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Auto-Blueprint-Funktion gestartet`); + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] CORS Preflight-Anfrage bearbeitet`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + let memo_id_to_update = null; + try { + const requestData = await req.json(); + const { memo_id, primary_language } = requestData; + memo_id_to_update = memo_id; + log( + 'INFO', + `[${functionId}] Anfrage erhalten für memo_id: ${memo_id}, primäre Sprache: ${primary_language || 'nicht angegeben'}` + ); + if (!memo_id) { + log('ERROR', `[${functionId}] Keine memo_id in der Anfrage gefunden`); + return createErrorResponse('memo_id ist erforderlich', 400, corsHeaders); + } + // Kurz warten um Race Condition mit blueprint_id Setzung zu vermeiden + log( + 'INFO', + `[${functionId}] Warte 2 Sekunden um potentielle blueprint_id Setzung abzuwarten...` + ); + await new Promise((resolve) => setTimeout(resolve, 2000)); + log('INFO', `[${functionId}] Rufe Memo mit ID ${memo_id} aus der Datenbank ab`); + const { data: memo, error: memoError } 
= await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + // Prüfe nochmal ob inzwischen blueprint_id gesetzt wurde + if (memo?.metadata?.blueprint_id) { + log( + 'INFO', + `[${functionId}] Blueprint ID ${memo.metadata.blueprint_id} wurde inzwischen gesetzt, überspringe Auto-Blueprint` + ); + return createSuccessResponse( + { + message: 'Blueprint ID wurde gesetzt, Auto-Blueprint übersprungen', + blueprintId: memo.metadata.blueprint_id, + }, + corsHeaders + ); + } + // Set processing status erst nach blueprint_id Check + await setMemoProcessingStatus(memoro_sb, memo_id, 'auto_blueprint'); + if (memoError || !memo) { + log('ERROR', `[${functionId}] Memo ${memo_id} nicht gefunden:`, memoError); + await setMemoErrorStatus(memoro_sb, memo_id, 'auto_blueprint', 'Memo nicht gefunden'); + return createErrorResponse('Memo nicht gefunden', 404, corsHeaders); + } + // Transcript extrahieren (from utterances or legacy fields) + const transcript = getTranscriptText(memo); + log( + 'INFO', + `[${functionId}] Extrahiertes Transkript (Länge: ${transcript.length}, erste 100 Zeichen): ${transcript.substring(0, 100)}...` + ); + if (!transcript) { + log('ERROR', `[${functionId}] Kein Transkript im Memo ${memo_id} gefunden`); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'auto_blueprint', + 'Kein Transkript im Memo gefunden' + ); + return createErrorResponse('Kein Transkript im Memo gefunden', 400, corsHeaders); + } + // Alle verfügbaren Prompts laden + log('INFO', `[${functionId}] Lade alle verfügbaren Prompts aus der Datenbank`); + const { data: allPrompts, error: promptsError } = await memoro_sb + .from('prompts') + .select('*') + .eq('is_public', true); + if (promptsError || !Array.isArray(allPrompts) || allPrompts.length === 0) { + log('ERROR', `[${functionId}] Keine öffentlichen Prompts gefunden:`, promptsError); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'auto_blueprint', + 'Keine verfügbaren Prompts gefunden' + ); + return 
createErrorResponse('Keine verfügbaren Prompts gefunden', 404, corsHeaders); + } + log('INFO', `[${functionId}] ${allPrompts.length} öffentliche Prompts gefunden`); + // Basis-Sprache aus primary_language ermitteln + let baseMemoLang = 'de'; // Standard: Deutsch + if (primary_language && typeof primary_language === 'string') { + baseMemoLang = primary_language.split('-')[0].toLowerCase(); + log( + 'DEBUG', + `[${functionId}] Ermittelte Basis-Sprache: ${baseMemoLang} (aus ${primary_language})` + ); + } else { + log( + 'DEBUG', + `[${functionId}] Keine primäre Sprache übergeben. Nutze Standard: ${baseMemoLang}` + ); + } + const defaultPreferredLang = 'de'; + const defaultFallbackLang = 'en'; + // Prompt-Beschreibungen für Gemini-Auswahl zusammenstellen + const promptDescriptions = allPrompts.map((prompt) => { + let description = ''; + if (prompt.description && typeof prompt.description === 'object') { + description = + (baseMemoLang && prompt.description[baseMemoLang]) || + prompt.description[defaultPreferredLang] || + prompt.description[defaultFallbackLang] || + Object.values(prompt.description)[0] || + 'Keine Beschreibung verfügbar'; + } else { + description = 'Keine Beschreibung verfügbar'; + } + let title = ''; + if (prompt.memory_title && typeof prompt.memory_title === 'object') { + title = + (baseMemoLang && prompt.memory_title[baseMemoLang]) || + prompt.memory_title[defaultPreferredLang] || + prompt.memory_title[defaultFallbackLang] || + Object.values(prompt.memory_title)[0] || + 'Ohne Titel'; + } else { + title = 'Ohne Titel'; + } + return { + id: prompt.id, + description: description, + title: title, + }; + }); + // Bestimme die optimale Anzahl Prompts basierend auf Transkript-Länge + const wordCount = transcript.split(/\s+/).filter((word) => word.length > 0).length; + let targetPromptCount; + if (wordCount <= 100) { + targetPromptCount = Math.floor(Math.random() * 2) + 1; // 1-2 Prompts + } else if (wordCount <= 300) { + targetPromptCount = 
Math.floor(Math.random() * 2) + 2; // 2-3 Prompts + } else if (wordCount <= 500) { + targetPromptCount = Math.floor(Math.random() * 2) + 3; // 3-4 Prompts + } else if (wordCount <= 1000) { + targetPromptCount = Math.floor(Math.random() * 2) + 4; // 4-5 Prompts + } else { + targetPromptCount = Math.floor(Math.random() * 2) + 5; // 5-6 Prompts + } + log( + 'INFO', + `[${functionId}] Transkript hat ${wordCount} Wörter → ${targetPromptCount} Prompts werden ausgewählt` + ); + log( + 'INFO', + `[${functionId}] Verwende Gemini zur Auswahl der ${targetPromptCount} relevantesten Prompts` + ); + const selectedPromptIds = await selectRelevantstPrompts( + transcript, + promptDescriptions, + baseMemoLang, + functionId, + targetPromptCount + ); + if (selectedPromptIds.length === 0) { + log('ERROR', `[${functionId}] Keine Prompts von Gemini ausgewählt`); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'auto_blueprint', + 'Keine relevanten Prompts gefunden' + ); + return createErrorResponse('Keine relevanten Prompts gefunden', 400, corsHeaders); + } + // Ausgewählte Prompts laden + const selectedPrompts = allPrompts.filter((p) => selectedPromptIds.includes(p.id)); + log( + 'INFO', + `[${functionId}] ${selectedPrompts.length} Prompts ausgewählt, beginne mit der Verarbeitung` + ); + const results = []; + for (const prompt of selectedPrompts) { + const promptId = prompt.id; + log('INFO', `[${functionId}] Verarbeite Prompt mit ID ${promptId}`); + let promptText = ''; + if (prompt.prompt_text && typeof prompt.prompt_text === 'object') { + promptText = + (baseMemoLang && prompt.prompt_text[baseMemoLang]) || + prompt.prompt_text[defaultPreferredLang] || + prompt.prompt_text[defaultFallbackLang] || + Object.values(prompt.prompt_text)[0] || + ''; + } + // Prepend system prompt if available for the language + const systemPrePrompt = + ROOT_SYSTEM_PROMPTS.PRE_PROMPT[baseMemoLang] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de']; + if (systemPrePrompt && promptText) { + promptText = 
systemPrePrompt + '\n\n' + promptText; + } + let memoryTitle = ''; + if (prompt.memory_title && typeof prompt.memory_title === 'object') { + memoryTitle = + (baseMemoLang && prompt.memory_title[baseMemoLang]) || + prompt.memory_title[defaultPreferredLang] || + prompt.memory_title[defaultFallbackLang] || + Object.values(prompt.memory_title)[0] || + ''; + } + if (!promptText) { + log( + 'WARN', + `[${functionId}] Kein Prompt-Text für Prompt ${promptId} nach Sprachauswahl. Überspringe.` + ); + results.push({ + prompt_id: promptId, + error: 'Kein Prompt-Text verfügbar in passender Sprache', + }); + continue; + } + log( + 'INFO', + `[${functionId}] Sende Prompt "${memoryTitle || 'Ohne Titel'}" (ID: ${promptId}) an LLM mit Sprache: ${baseMemoLang}` + ); + log( + 'DEBUG', + `[${functionId}] Prompt-Text (erste 150 Zeichen): ${promptText.substring(0, 150)}...` + ); + const answer = await runPromptWithTranscript( + promptText, + transcript, + baseMemoLang, + functionId + ); + if (!answer) { + log('WARN', `[${functionId}] Keine Antwort vom LLM für Prompt ${promptId} erhalten`); + results.push({ + prompt_id: promptId, + error: 'Keine Antwort vom LLM erhalten', + }); + continue; + } + // Get the highest sort_order for this memo + const { data: maxSortData, error: maxSortError } = await memoro_sb + .from('memories') + .select('sort_order') + .eq('memo_id', memo_id) + .order('sort_order', { + ascending: false, + }) + .limit(1) + .single(); + // If error or no data, use random number above 5000, otherwise increment + const nextSortOrder = + maxSortError || !maxSortData?.sort_order + ? 
Math.floor(Math.random() * 5000) + 5000 // Random between 5000-9999 + : maxSortData.sort_order + 1; + log( + 'INFO', + `[${functionId}] Erstelle neues Memory für Memo ${memo_id} mit Titel "${memoryTitle || 'Auto-Blueprint-Antwort'}" mit sort_order ${nextSortOrder}` + ); + const { data: newMemory, error: newMemoryError } = await memoro_sb + .from('memories') + .insert({ + memo_id: memo_id, + title: memoryTitle || 'Auto-Blueprint-Antwort', + content: answer, + media: null, + sort_order: nextSortOrder, + metadata: { + type: 'auto_blueprint', + prompt_id: promptId, + created_by: 'auto_blueprint_function', + selection_method: 'gemini_ai', + }, + }) + .select() + .single(); + if (newMemoryError) { + log( + 'ERROR', + `[${functionId}] Fehler beim Erstellen des Memories für Prompt ${promptId}:`, + newMemoryError + ); + results.push({ + prompt_id: promptId, + error: newMemoryError.message, + }); + } else { + log( + 'INFO', + `[${functionId}] Memory erfolgreich erstellt mit ID ${newMemory.id} für Prompt ${promptId}` + ); + results.push({ + prompt_id: promptId, + memory_id: newMemory.id, + }); + } + } + // Set completed status + await setMemoCompletedStatus(memoro_sb, memo_id, 'auto_blueprint', { + results_count: results.length, + selected_prompts_count: selectedPrompts.length, + total_prompts_available: allPrompts.length, + selection_method: 'gemini_ai', + }); + log( + 'INFO', + `[${functionId}] Auto-Blueprint-Verarbeitung erfolgreich abgeschlossen mit ${results.length} Ergebnissen für Memo ${memo_id}` + ); + return new Response( + JSON.stringify({ + success: true, + results, + meta: { + selected_prompts: selectedPromptIds, + total_available: allPrompts.length, + }, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unerwarteter Fehler bei der Auto-Blueprint-Verarbeitung:`, error); + // Set error status in database + const errorToLog = error instanceof Error ? 
error : new Error(String(error));
+    await setMemoErrorStatus(memoro_sb, memo_id_to_update, 'auto_blueprint', errorToLog);
+    // Return error response
+    return createErrorResponse(`Unerwarteter Fehler: ${errorToLog.message}`, 500, corsHeaders);
+  }
+});
diff --git a/apps/memoro/apps/backend/supabase/functions/batch-transcribe-callback/index.ts b/apps/memoro/apps/backend/supabase/functions/batch-transcribe-callback/index.ts
new file mode 100644
index 000000000..04e44f268
--- /dev/null
+++ b/apps/memoro/apps/backend/supabase/functions/batch-transcribe-callback/index.ts
@@ -0,0 +1,520 @@
+// At the top with your imports:
+import 'jsr:@supabase/functions-js/edge-runtime.d.ts';
+import { serve } from 'https://deno.land/std@0.215.0/http/server.ts';
+import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';
+import {
+  StorageSharedKeyCredential,
+  generateBlobSASQueryParameters,
+  BlobSASPermissions,
+  SASProtocol,
+} from 'npm:@azure/storage-blob@12'; // Check for the current stable version, e.g. @12.17.0
+// --- IMPORTANT: Use the service role key! ---
+const SUPABASE_URL = Deno.env.get('SUPABASE_URL');
+const C_SUPABASE_SECRET_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); // CORRECTED!
+if (!SUPABASE_URL || !C_SUPABASE_SECRET_KEY) {
+  console.error('Supabase URL or Service Role Key not set in environment variables!');
+  // Exit early on error, or handle it differently
+  Deno.exit(1); // Or send a response with an error status
+}
+const supabase = createClient(SUPABASE_URL, C_SUPABASE_SECRET_KEY);
+const AZURE_STORAGE_ACCOUNT_NAME = Deno.env.get('BATCH_API_AZURE_STORAGE_ACCOUNT_NAME');
+const AZURE_STORAGE_ACCOUNT_KEY = Deno.env.get('BATCH_API_AZURE_STORAGE_ACCOUNT_KEY');
+if (!AZURE_STORAGE_ACCOUNT_NAME || !AZURE_STORAGE_ACCOUNT_KEY) {
+  console.error('Azure Storage Account Name or Key not set in environment variables!');
+  // Exit early on error, or handle it differently
+}
+// Global instance, since the keys don't change
+const sharedKeyCredential =
+  AZURE_STORAGE_ACCOUNT_NAME && AZURE_STORAGE_ACCOUNT_KEY
+    ? new StorageSharedKeyCredential(AZURE_STORAGE_ACCOUNT_NAME, AZURE_STORAGE_ACCOUNT_KEY)
+    : null;
+// Helper function to ensure we're working with objects
+function ensureObject(value) {
+  if (!value || typeof value !== 'object' || Array.isArray(value)) {
+    return {};
+  }
+  return value;
+}
+serve(async (req) => {
+  const rawBody = await req.text(); // Read the body once
+  console.log('--- Incoming Request ---');
+  console.log('Headers:', JSON.stringify(Object.fromEntries(req.headers), null, 2));
+  console.log('Raw Body:', rawBody.substring(0, 500) + (rawBody.length > 500 ? '...'
: '')); // Nur Anfang loggen + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, aeg-event-type', + }; + if (req.method === 'OPTIONS') { + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + if (req.method !== 'POST') { + return new Response('Nur POST erlaubt', { + headers: corsHeaders, + status: 405, + }); + } + let events; + try { + events = JSON.parse(rawBody); + } catch (e) { + console.error('Fehler beim Parsen des JSON-Bodys:', e, rawBody); + return new Response('Ungültiges JSON', { + headers: corsHeaders, + status: 400, + }); + } + if (!Array.isArray(events)) { + console.error('Body ist kein Array:', events); + return new Response('Erwarte Event-Array', { + headers: corsHeaders, + status: 400, + }); + } + const subValidation = events.find( + (e) => e.eventType === 'Microsoft.EventGrid.SubscriptionValidationEvent' + ); + if (subValidation) { + const validationCode = subValidation.data.validationCode; + console.log(`Event Grid Handshake - Code: ${validationCode}`); + return new Response( + JSON.stringify({ + validationResponse: validationCode, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } + for (const ev of events) { + if (ev.eventType !== 'Microsoft.Storage.BlobCreated') { + console.log(`Überspringe Event-Typ: ${ev.eventType}`); + continue; + } + const blobUrlFromEvent = ev.data?.url; // Dies ist die URL OHNE SAS + if (!blobUrlFromEvent) { + console.error('Event ohne data.url:', ev); + continue; + } + try { + if (!sharedKeyCredential) { + throw new Error('Azure Storage Credentials nicht initialisiert in der Funktion.'); + } + const urlObject = new URL(blobUrlFromEvent); + const pathParts = urlObject.pathname.split('/'); // z.B. 
['', 'results', 'jobIdFolder', 'blobName.json'] + if (pathParts.length < 4) { + console.log('Pfad zu kurz, überspringe:', blobUrlFromEvent); + continue; + } + const containerName = decodeURIComponent(pathParts[1]); // z.B. 'results' + const jobIdAsFolderName = decodeURIComponent(pathParts[2]); // z.B. 'd43e7090-0871...' + const blobNameFromFile = decodeURIComponent(pathParts[3]); // z.B. 'contenturl_0.json' + console.log( + `Verarbeite Blob: ${blobNameFromFile} in Container ${containerName} für Job: ${jobIdAsFolderName}` + ); + if (blobNameFromFile.endsWith('_report.json') || !blobNameFromFile.endsWith('.json')) { + console.log(`Überspringe Datei (Report oder nicht-JSON): ${blobNameFromFile}`); + continue; + } + // --- SAS-TOKEN-GENERIERUNG START --- + const blobSas = generateBlobSASQueryParameters( + { + containerName: containerName, + blobName: `${jobIdAsFolderName}/${blobNameFromFile}`, + permissions: BlobSASPermissions.parse('r'), + startsOn: new Date(new Date().valueOf() - 5 * 60 * 1000), + expiresOn: new Date(new Date().valueOf() + 10 * 60 * 1000), + protocol: SASProtocol.Https, + }, + sharedKeyCredential + ).toString(); + const urlWithSas = `${blobUrlFromEvent}?${blobSas}`; + console.log(`Lade JSON mit SAS: ${blobUrlFromEvent}?sv=... (SAS-Token gekürzt)`); + // --- SAS-TOKEN-GENERIERUNG ENDE --- + const response = await fetch(urlWithSas); + if (!response.ok) { + const errorText = await response.text(); // Fehlertext von Azure Storage lesen + throw new Error( + `Fetch fehlgeschlagen ${response.status} (${response.statusText}) für ${blobUrlFromEvent}. 
Azure-Fehler: ${errorText}` + ); + } + const json = await response.json(); + console.log(`JSON für ${jobIdAsFolderName} erfolgreich geladen.`); + // Try different query approaches to find the memo with this jobId + let memo = null; + let fetchErr = null; + // Use proper JSONB operators to find the memo with this jobId + const { data, error } = await supabase + .from('memos') + .select('id, source, metadata, title, style') + .eq('metadata->processing->transcription->>jobId', jobIdAsFolderName) + .limit(1) + .maybeSingle(); + if (!error && data) { + memo = data; + } else { + fetchErr = error; + } + if (fetchErr || !memo) { + console.error( + `Memo nicht gefunden für Job: ${jobIdAsFolderName}`, + fetchErr || 'Kein Memo zurückgegeben' + ); + continue; + } + console.log(`Memo ${memo.id} für Job ${jobIdAsFolderName} gefunden.`); + // Safely handle existing data that might be null or undefined + const existingSource = ensureObject(memo.source); + const existingMeta = ensureObject(memo.metadata); + const existingProc = ensureObject(existingMeta.processing); + const existingTranscription = ensureObject(existingProc.transcription); + const existingStyle = ensureObject(memo.style); + const baseInfo = { + ...existingTranscription, + jobId: jobIdAsFolderName, + batchTranscription: true, + }; + let updateData = {}; + const isResultJson = json.recognizedPhrases || json.combinedRecognizedPhrases; + const hasErrorInJson = json.status === 'Failed' || json.error; // Azure-Fehlerstatus im JSON + if (hasErrorInJson || !isResultJson) { + const errorDetail = + json.error ?? + json.statusMessage ?? 
+ 'Kein Transkript in JSON gefunden oder expliziter Fehlerstatus'; + // For error case, update fields (metadata will be set via RPC after update) + updateData = { + // Set title if not already set + title: memo.title || 'Transkription fehlgeschlagen', + // Ensure updated_at is set + updated_at: new Date().toISOString(), + // Store error details to use after update + _errorDetails: { + transcription: { + ...baseInfo, + status: 'error', + error: errorDetail, + retryable: true, + }, + headline: { + headline: 'Transkription fehlgeschlagen', + intro: `Die Transkription konnte nicht verarbeitet werden: ${errorDetail}`, + language: 'de-DE', + }, + }, + }; + console.warn( + 'Batch-Transkription FEHLER (JSON-Inhalt) für Job', + jobIdAsFolderName, + errorDetail + ); + } else { + // Extract text from nBest array + const text = + json.combinedRecognizedPhrases + ?.map((p) => p.nBest?.[0]?.display || p.display || p.text || '') + .join(' ') || + json.recognizedPhrases + ?.map((p) => p.nBest?.[0]?.display || p.display || p.text || '') + .join(' ') || + ''; + if (!text) { + console.warn( + `Kein Text extrahiert aus JSON für Job ${jobIdAsFolderName}. 
JSON-Struktur:`, + JSON.stringify(json, null, 2).substring(0, 500) + ); + // Handle empty transcript case (metadata will be set via RPC after update) + updateData = { + title: memo.title || 'Aufnahme ohne Sprache', + transcript: '', + style: { + intro: 'Diese Aufnahme enthält keinen erkennbaren gesprochenen Text.', + }, + updated_at: new Date().toISOString(), + // Store details to use after update + _emptyTranscriptDetails: { + transcription: { + ...baseInfo, + status: 'completed_no_transcript', + error: null, + textLength: 0, + }, + headline: { + headline: 'Aufnahme ohne Sprache', + intro: 'Diese Aufnahme enthält keinen erkennbaren gesprochenen Text.', + language: 'de-DE', + triggered_by: 'empty_transcript_handler', + }, + }, + }; + } else { + console.log(`Extrahierter Text für ${jobIdAsFolderName}: ${text.substring(0, 100)}...`); + // Enhanced speaker processing (following transcribe function pattern) + let enhancedSourceData; + try { + // Extract language information + let primaryAudioLanguage = null; + let allDetectedPhraseLanguages = ['de-DE']; // fallback + const languageProcessingLog = []; + // Check if user selected languages are available in metadata + const userSelectedLanguages = + existingMeta?.processing?.transcription?.userSelectedLanguages; + const hasUserSelectedLanguages = + userSelectedLanguages && + Array.isArray(userSelectedLanguages) && + userSelectedLanguages.length > 0; + if (hasUserSelectedLanguages) { + // Use user-selected languages if available + primaryAudioLanguage = userSelectedLanguages[0]; + allDetectedPhraseLanguages = userSelectedLanguages; + languageProcessingLog.push( + `Verwende vom Benutzer ausgewählte Sprachen: ${userSelectedLanguages.join(', ')}` + ); + } else if (json.locale && typeof json.locale === 'string') { + primaryAudioLanguage = json.locale; + allDetectedPhraseLanguages = [json.locale]; + languageProcessingLog.push( + `Sprache aus dem Top-Level 'locale'-Feld extrahiert: ${primaryAudioLanguage} (Azure erkannt)` + ); + 
} else if ( + json.recognizedPhrases && + Array.isArray(json.recognizedPhrases) && + json.recognizedPhrases.length > 0 + ) { + const languageCounts = {}; + for (const phrase of json.recognizedPhrases) { + if (phrase.locale && typeof phrase.locale === 'string') { + languageCounts[phrase.locale] = (languageCounts[phrase.locale] || 0) + 1; + } + } + const uniqueLanguagesFromPhrases = Object.keys(languageCounts); + if (uniqueLanguagesFromPhrases.length > 0) { + let mostFrequent = uniqueLanguagesFromPhrases[0] || 'de-DE'; + let maxCount = languageCounts[mostFrequent] || 0; + for (const locale of uniqueLanguagesFromPhrases) { + if (languageCounts[locale] > maxCount) { + mostFrequent = locale; + maxCount = languageCounts[locale]; + } + } + primaryAudioLanguage = mostFrequent; + allDetectedPhraseLanguages = uniqueLanguagesFromPhrases; + languageProcessingLog.push( + `Häufigste Sprache (primär) aus Phrase-Segmenten ermittelt: ${primaryAudioLanguage} (Anzahl: ${maxCount} von ${json.recognizedPhrases.length} Phrasen)` + ); + languageProcessingLog.push( + `Alle in Phrasen erkannten Sprachen: ${allDetectedPhraseLanguages.join(', ')}` + ); + } + } + if (primaryAudioLanguage === null) { + primaryAudioLanguage = 'de-DE'; + languageProcessingLog.push( + `Keine Sprache erkannt. Verwende Fallback-Sprache: ${primaryAudioLanguage}` + ); + } + languageProcessingLog.forEach((msg) => + console[msg.startsWith('WARN:') ? 'warn' : 'log'](msg) + ); + // Process speaker data + const utterances = []; + const speakers = {}; + const segments = json.recognizedPhrases || []; + console.log(`Processing ${segments.length} segments for speaker data`); + segments.forEach((segment) => { + // Check if speaker field exists (including speaker 0) and get display text from nBest + const displayText = segment.nBest?.[0]?.display; + if ('speaker' in segment && displayText) { + const speakerId = `speaker${segment.speaker}`; + utterances.push({ + speakerId, + text: displayText, + offset: segment.offsetInTicks + ? 
Math.round(segment.offsetInTicks / 10000) + : undefined, + duration: segment.durationInTicks + ? Math.round(segment.durationInTicks / 10000) + : undefined, + }); + // Add speaker to speakers object immediately + if (!speakers[speakerId]) { + speakers[speakerId] = `Speaker ${segment.speaker}`; + } + } + }); + // Sort utterances by time + utterances.sort((a, b) => (a.offset || 0) - (b.offset || 0)); + const speakerCount = Object.keys(speakers).length; + console.log( + `Enhanced batch transcription completed. Text: ${text.length} chars, Language: ${primaryAudioLanguage}, Speakers: ${speakerCount}` + ); + console.log(`Found ${utterances.length} utterances from ${speakerCount} speakers`); + // Build enhanced source data without transcript (moved to separate column) + enhancedSourceData = { + primary_language: primaryAudioLanguage, + languages: allDetectedPhraseLanguages, + utterances: utterances.length > 0 ? utterances : null, + speakers: Object.keys(speakers).length > 0 ? speakers : null, + }; + } catch (speakerError) { + console.warn('Speaker data extraction failed, saving text only:', speakerError); + // Fallback to just language data + enhancedSourceData = { + primary_language: 'de-DE', + languages: ['de-DE'], + }; + } + // Build the complete updated source object safely + const updatedSource = { + ...existingSource, + ...enhancedSourceData, // Add the transcription-specific fields + }; + updateData = { + source: updatedSource, + transcript: text, + updated_at: new Date().toISOString(), + // Store details to use after update + _successDetails: { + transcription: { + ...baseInfo, + status: 'completed', + error: null, + textLength: text.length, + speakerCount: Object.keys(updatedSource.speakers || {}).length, + }, + }, + }; + } + } + // Extract details for RPC calls before removing them from updateData + const errorDetails = updateData._errorDetails; + const emptyTranscriptDetails = updateData._emptyTranscriptDetails; + const successDetails = 
updateData._successDetails; + + // Remove temporary fields from updateData before actual update + delete updateData._errorDetails; + delete updateData._emptyTranscriptDetails; + delete updateData._successDetails; + + const { error: updErr } = await supabase.from('memos').update(updateData).eq('id', memo.id); + if (updErr) { + console.error(`Error updating memo ${memo.id}:`, updErr); + console.error('Update data was:', JSON.stringify(updateData, null, 2)); + } else { + console.log(`Memo ${memo.id} successfully updated for job ${jobIdAsFolderName}.`); + console.log('Update included fields:', Object.keys(updateData)); + + // Now update processing statuses atomically via RPC + const timestamp = new Date().toISOString(); + + if (errorDetails) { + // Error case: Update transcription and headline_and_intro + await supabase.rpc('set_memo_process_error', { + p_memo_id: memo.id, + p_process_name: 'transcription', + p_timestamp: timestamp, + p_reason: errorDetails.transcription.error, + p_details: { + jobId: errorDetails.transcription.jobId, + batchTranscription: errorDetails.transcription.batchTranscription, + retryable: errorDetails.transcription.retryable, + }, + }); + + await supabase.rpc('set_memo_process_error', { + p_memo_id: memo.id, + p_process_name: 'headline_and_intro', + p_timestamp: timestamp, + p_reason: 'Transcription failed', + p_details: errorDetails.headline, + }); + } else if (emptyTranscriptDetails) { + // Empty transcript case: Update transcription and headline_and_intro + await supabase.rpc('set_memo_process_status_with_details', { + p_memo_id: memo.id, + p_process_name: 'transcription', + p_status: 'completed_no_transcript', + p_timestamp: timestamp, + p_details: { + jobId: emptyTranscriptDetails.transcription.jobId, + batchTranscription: emptyTranscriptDetails.transcription.batchTranscription, + error: null, + textLength: 0, + }, + }); + + await supabase.rpc('set_memo_process_status_with_details', { + p_memo_id: memo.id, + p_process_name:
'headline_and_intro', + p_status: 'completed_no_transcript', + p_timestamp: timestamp, + p_details: emptyTranscriptDetails.headline, + }); + } else if (successDetails) { + // Success case: Update only transcription + await supabase.rpc('set_memo_process_status_with_details', { + p_memo_id: memo.id, + p_process_name: 'transcription', + p_status: 'completed', + p_timestamp: timestamp, + p_details: { + jobId: successDetails.transcription.jobId, + batchTranscription: successDetails.transcription.batchTranscription, + error: null, + textLength: successDetails.transcription.textLength, + speakerCount: successDetails.transcription.speakerCount, + }, + }); + } + // Send broadcast update to notify clients about the transcription update + try { + const channel = supabase.channel(`memo-updates-${memo.id}`); + // Subscribe first to ensure the channel is ready + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + type: 'memo-updated', + memoId: memo.id, + changes: { + source: updateData.source, + transcript: updateData.transcript, + title: updateData.title, + style: updateData.style, + updated_at: updateData.updated_at, + }, + source: 'batch-transcribe-callback', + }, + }); + console.log(`Broadcast sent for memo ${memo.id} transcription update`); + // Clean up the channel after sending + supabase.removeChannel(channel); + } + }); + } catch (broadcastError) { + console.warn('Failed to send broadcast update:', broadcastError); + // Don't fail the function if broadcast fails + } + } + } catch (err) { + console.error( + `General error during event processing for URL ${blobUrlFromEvent}:`, + err.message, + err.stack + ); + } + } + return new Response('Events processed', { + headers: corsHeaders, + status: 200, + }); +}); diff --git a/apps/memoro/apps/backend/supabase/functions/batch-transcription-recovery/index.ts
b/apps/memoro/apps/backend/supabase/functions/batch-transcription-recovery/index.ts new file mode 100644 index 000000000..131650e2c --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/batch-transcription-recovery/index.ts @@ -0,0 +1,248 @@ +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +const C_SUPABASE_SECRET_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +const AZURE_SPEECH_KEY = Deno.env.get('AZURE_SPEECH_KEY'); +const AZURE_SPEECH_REGION = Deno.env.get('AZURE_SPEECH_REGION'); +const supabase = createClient(SUPABASE_URL, C_SUPABASE_SECRET_KEY); +serve(async (req) => { + if (req.method !== 'POST') { + return new Response('Method not allowed', { + status: 405, + }); + } + try { + const { memoId, jobId } = await req.json(); + console.log(`Checking recovery for memo ${memoId}, job ${jobId}`); + // 1. Check Azure job status + const azureStatus = await checkAzureJobStatus(jobId); + // 2. 
Handle different scenarios + if (azureStatus.status === 'Succeeded') { + // Job completed but webhook never fired - process results + await processCompletedJob(memoId, jobId, azureStatus); + return new Response( + JSON.stringify({ + status: 'recovered', + action: 'processed_results', + }) + ); + } else if (azureStatus.status === 'Failed') { + // Job failed - mark memo as failed + await markMemoAsFailed(memoId, azureStatus.error); + return new Response( + JSON.stringify({ + status: 'recovered', + action: 'marked_failed', + }) + ); + } else if (azureStatus.status === 'Running') { + // Still running - update timeout if needed + await updateTimeout(memoId); + return new Response( + JSON.stringify({ + status: 'still_running', + action: 'timeout_extended', + }) + ); + } else { + // Unknown status - log for investigation + console.warn(`Unknown Azure status for job ${jobId}:`, azureStatus); + return new Response( + JSON.stringify({ + status: 'unknown', + azureStatus, + }) + ); + } + } catch (error) { + console.error('Recovery function error:', error); + return new Response( + JSON.stringify({ + error: error.message, + }), + { + status: 500, + } + ); + } +}); +async function checkAzureJobStatus(jobId) { + const response = await fetch( + `https://${AZURE_SPEECH_REGION}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/${jobId}`, + { + headers: { + 'Ocp-Apim-Subscription-Key': AZURE_SPEECH_KEY, + }, + } + ); + if (!response.ok) { + throw new Error(`Azure API error: ${response.status}`); + } + return await response.json(); +} +async function processCompletedJob(memoId, jobId, azureStatus) { + // Get old memo state for broadcast + const { data: oldMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + // Get transcription files + const filesResponse = await fetch(azureStatus.links.files, { + headers: { + 'Ocp-Apim-Subscription-Key': AZURE_SPEECH_KEY, + }, + }); + const filesData = await filesResponse.json(); + // Find transcription result 
file + const transcriptionFile = filesData.values.find((file) => file.kind === 'Transcription'); + if (transcriptionFile) { + // Download and process the result + const resultResponse = await fetch(transcriptionFile.links.contentUrl); + const transcriptionResult = await resultResponse.json(); + // Process using same logic as batch-transcribe-callback + // (Extract text, speakers, languages, etc.) + // Update memo with results - use atomic RPC to preserve other processing statuses + const timestamp = new Date().toISOString(); + await supabase.rpc('set_memo_process_status_with_details', { + p_memo_id: memoId, + p_process_name: 'transcription', + p_status: 'completed', + p_timestamp: timestamp, + p_details: { + recoveredAt: timestamp, + recoveryReason: 'webhook_failure', + }, + }); + + // Preserve the existing source metadata; merging the parsed + // transcriptionResult into source is still TODO here + await supabase + .from('memos') + .update({ + source: { ...(oldMemo?.source ?? {}) }, + }) + .eq('id', memoId); + + // Get updated memo for broadcast + const { data: newMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + // Send broadcast update to notify clients about the recovery + if (oldMemo && newMemo) { + try { + const channel = supabase.channel(`memo-updates-${memoId}`); + + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + id: memoId, + old: oldMemo, + new: newMemo, + user_id: newMemo.user_id, + }, + }); + console.log(`Broadcast sent for memo ${memoId} transcription recovery`); + // Clean up the channel after sending + supabase.removeChannel(channel); + } + }); + } catch (broadcastError) { + console.warn('Failed to send broadcast update:', broadcastError); + // Don't fail the function if broadcast fails + } + } + } +} +async function markMemoAsFailed(memoId, error) { + // Get old memo state for broadcast + const { data: oldMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + const
timestamp = new Date().toISOString(); + await supabase.rpc('set_memo_process_error', { + p_memo_id: memoId, + p_process_name: 'transcription', + p_timestamp: timestamp, + p_reason: error || 'Azure job failed', + p_details: { + recoveredAt: timestamp, + recoveryReason: 'azure_job_failed', + }, + }); + + // Get updated memo for broadcast + const { data: newMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + // Send broadcast update to notify clients about the failure + if (oldMemo && newMemo) { + try { + const channel = supabase.channel(`memo-updates-${memoId}`); + + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + id: memoId, + old: oldMemo, + new: newMemo, + user_id: newMemo.user_id, + }, + }); + console.log(`Broadcast sent for memo ${memoId} transcription failure`); + // Clean up the channel after sending + supabase.removeChannel(channel); + } + }); + } catch (broadcastError) { + console.warn('Failed to send broadcast update:', broadcastError); + // Don't fail the function if broadcast fails + } + } +} +async function updateTimeout(memoId) { + // Get old memo state for broadcast + const { data: oldMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + // Extend timeout for long-running jobs - merge fields without changing status + const timestamp = new Date().toISOString(); + await supabase.rpc('merge_memo_process_fields', { + p_memo_id: memoId, + p_process_name: 'transcription', + p_fields: { + timeoutExtended: timestamp, + lastChecked: timestamp, + }, + }); + + // Get updated memo for broadcast + const { data: newMemo } = await supabase.from('memos').select('*').eq('id', memoId).single(); + + // Send broadcast update to notify clients about the timeout extension + if (oldMemo && newMemo) { + try { + const channel = supabase.channel(`memo-updates-${memoId}`); + + channel.subscribe(async (status) => { + if 
(status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + id: memoId, + old: oldMemo, + new: newMemo, + user_id: newMemo.user_id, + }, + }); + console.log(`Broadcast sent for memo ${memoId} timeout extension`); + // Clean up the channel after sending + supabase.removeChannel(channel); + } + }); + } catch (broadcastError) { + console.warn('Failed to send broadcast update:', broadcastError); + // Don't fail the function if broadcast fails + } + } +} diff --git a/apps/memoro/apps/backend/supabase/functions/blueprint/constants.ts b/apps/memoro/apps/backend/supabase/functions/blueprint/constants.ts new file mode 100644 index 000000000..7ced5f94b --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/blueprint/constants.ts @@ -0,0 +1,219 @@ +/** + * System prompts for the blueprint function in different languages + * + * The prompts are used as the system prompt for the AI messages + * to generate consistent and helpful responses during blueprint processing.
+ */ /** + * Interface for the prompt configuration + */ /** + * System prompts for blueprint processing + * + * Supported languages (62): + * - de: German + * - en: English + * - fr: French + * - es: Spanish + * - it: Italian + * - nl: Dutch + * - pt: Portuguese + * - ru: Russian + * - ja: Japanese + * - ko: Korean + * - zh: Chinese + * - ar: Arabic + * - hi: Hindi + * - tr: Turkish + * - pl: Polish + * - da: Danish + * - sv: Swedish + * - nb: Norwegian + * - fi: Finnish + * - cs: Czech + * - hu: Hungarian + * - el: Greek + * - he: Hebrew + * - id: Indonesian + * - th: Thai + * - vi: Vietnamese + * - uk: Ukrainian + * - ro: Romanian + * - bg: Bulgarian + * - ca: Catalan + * - hr: Croatian + * - sk: Slovak + * - et: Estonian + * - lv: Latvian + * - lt: Lithuanian + * - bn: Bengali + * - ms: Malay + * - ta: Tamil + * - te: Telugu + * - ur: Urdu + * - mr: Marathi + * - gu: Gujarati + * - ml: Malayalam + * - kn: Kannada + * - pa: Punjabi + * - af: Afrikaans + * - fa: Persian + * - ka: Georgian + * - is: Icelandic + * - sq: Albanian + * - az: Azerbaijani + * - eu: Basque + * - gl: Galician + * - kk: Kazakh + * - mk: Macedonian + * - sr: Serbian + * - sl: Slovenian + * - mt: Maltese + * - hy: Armenian + * - uz: Uzbek + * - ga: Irish + * - cy: Welsh + * - fil: Filipino + */ export const SYSTEM_PROMPTS = { + system: { + // German + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Du wirst als Teil eines Blueprint-Systems verwendet, das spezifische Prompt-Sammlungen für strukturierte Analysen anwendet. Antworte präzise, strukturiert und hilfreich. Antworte in Markdown mit einem schönen Format.', + // English + en: 'You are a helpful assistant that analyzes and processes texts.
Your task is to process transcripts of conversations according to the given instructions. You are used as part of a Blueprint system that applies specific prompt collections for structured analyses. Respond precisely, structured, and helpfully. Respond in Markdown with a nice format.', + // Französisch + fr: "Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Vous êtes utilisé dans le cadre d'un système Blueprint qui applique des collections de prompts spécifiques pour des analyses structurées. Répondez de manière précise, structurée et utile. Répondez en Markdown avec un beau format.", + // Spanisch + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Eres utilizado como parte de un sistema Blueprint que aplica colecciones específicas de prompts para análisis estructurados. Responde de forma precisa, estructurada y útil. Responde en Markdown con un formato bonito.', + // Italienisch + it: 'Sei un assistente utile che analizza e elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni date. Sei utilizzato come parte di un sistema Blueprint che applica collezioni specifiche di prompt per analisi strutturate. Rispondi in modo preciso, strutturato e utile. Rispondi in Markdown con un bel formato.', + // Niederländisch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Je wordt gebruikt als onderdeel van een Blueprint-systeem dat specifieke prompt-collecties toepast voor gestructureerde analyses. Antwoord precies, gestructureerd en behulpzaam. Antwoord in Markdown met een mooi formaat.', + // Portugiesisch + pt: 'Você é um assistente útil que analisa e processa textos. 
Sua tarefa é processar transcrições de conversas de acordo com as instruções dadas. Você é usado como parte de um sistema Blueprint que aplica coleções específicas de prompts para análises estruturadas. Responda de forma precisa, estruturada e útil. Responda em Markdown com um belo formato.', + // Russisch + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров согласно данным инструкциям. Вы используетесь как часть системы Blueprint, которая применяет специфические коллекции промптов для структурированного анализа. Отвечайте точно, структурированно и полезно. Отвечайте в Markdown с красивым форматированием.', + // Japanisch + ja: 'あなたはテキストを分析・処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の転写を処理することです。あなたは構造化された分析のために特定のプロンプト・コレクションを適用するBlueprintシステムの一部として使用されます。正確で構造化された有用な回答をしてください。Markdownで美しいフォーマットで回答してください。', + // Koreanisch + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화의 전사본을 처리하는 것입니다. 당신은 구조화된 분석을 위해 특정 프롬프트 컬렉션을 적용하는 Blueprint 시스템의 일부로 사용됩니다. 정확하고 구조화되며 도움이 되는 방식으로 응답하세요. 아름다운 형식의 Markdown으로 응답하세요.', + // Chinesisch (vereinfacht) + zh: '你是一个有用的助手,负责分析和处理文本。你的任务是根据给定的指令处理对话的转录。你被用作Blueprint系统的一部分,该系统应用特定的提示集合进行结构化分析。请准确、结构化、有帮助地回答。请用美观格式的Markdown回答。', + // Arabisch + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نسخ المحادثات وفقاً للتعليمات المقدمة. يتم استخدامك كجزء من نظام Blueprint الذي يطبق مجموعات محددة من المطالبات للتحليلات المنظمة. أجب بدقة وبطريقة منظمة ومفيدة. أجب بتنسيق Markdown بشكل جميل.', + // Hindi + hi: 'आप एक उपयोगी सहायक हैं जो पाठों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार बातचीत के प्रतिलेख को संसाधित करना है। आप एक Blueprint सिस्टम के हिस्से के रूप में उपयोग किए जाते हैं जो संरचित विश्लेषण के लिए विशिष्ट प्रॉम्प्ट संग्रह लागू करता है। सटीक, संरचित और सहायक तरीके से उत्तर दें। सुंदर फॉर्मेट के साथ Markdown में उत्तर दें।', + // Türkisch + tr: 'Metinleri analiz eden ve işleyen yararlı bir asistansınız. 
Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Yapılandırılmış analizler için belirli komut istemi koleksiyonları uygulayan bir Blueprint sisteminin parçası olarak kullanılırsınız. Kesin, yapılandırılmış ve yararlı şekilde yanıt verin. Güzel bir formatta Markdown ile yanıt verin.', + // Polnisch + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Jesteś używany jako część systemu Blueprint, który stosuje specyficzne kolekcje promptów do ustrukturyzowanych analiz. Odpowiadaj precyzyjnie, uporządkowanie i pomocnie. Odpowiadaj w Markdown z ładnym formatowaniem.', + // Dänisch + da: 'Du er en hjælpsom assistent, der analyserer og behandler tekster. Din opgave er at behandle transskriptioner af samtaler i henhold til de givne instruktioner. Du bruges som en del af et Blueprint-system, der anvender specifikke prompt-samlinger til strukturerede analyser. Svar præcist, struktureret og hjælpsomt. Svar i Markdown med et pænt format.', + // Schwedisch + sv: 'Du är en hjälpsam assistent som analyserar och bearbetar texter. Din uppgift är att bearbeta transkriptioner av samtal enligt givna instruktioner. Du används som en del av ett Blueprint-system som tillämpar specifika prompt-samlingar för strukturerade analyser. Svara exakt, strukturerat och hjälpsamt. Svara i Markdown med ett snyggt format.', + // Norwegisch + nb: 'Du er en hjelpsom assistent som analyserer og behandler tekster. Din oppgave er å behandle transkripsjoner av samtaler i henhold til gitte instruksjoner. Du brukes som en del av et Blueprint-system som anvender spesifikke prompt-samlinger for strukturerte analyser. Svar presist, strukturert og hjelpsomt. Svar i Markdown med et pent format.', + // Finnisch + fi: 'Olet avulias avustaja, joka analysoi ja käsittelee tekstejä. Tehtäväsi on käsitellä keskustelujen transkriptioita annettujen ohjeiden mukaisesti. 
Sinua käytetään osana Blueprint-järjestelmää, joka soveltaa tiettyjä kehotuskokoelmia rakenteellisiin analyyseihin. Vastaa tarkasti, jäsennellysti ja avuliaasti. Vastaa Markdownilla kauniilla muotoilulla.', + // Tschechisch + cs: 'Jste užitečný asistent, který analyzuje a zpracovává texty. Vaším úkolem je zpracovávat přepisy konverzací podle daných pokynů. Jste používán jako součást systému Blueprint, který aplikuje specifické kolekce výzev pro strukturované analýzy. Odpovídejte přesně, strukturovaně a užitečně. Odpovídejte v Markdownu s pěkným formátováním.', + // Ungarisch + hu: 'Ön egy hasznos asszisztens, aki szövegeket elemez és dolgoz fel. Az Ön feladata a beszélgetések átiratainak feldolgozása a megadott utasítások szerint. Önt egy Blueprint rendszer részeként használják, amely specifikus prompt gyűjteményeket alkalmaz strukturált elemzésekhez. Válaszoljon pontosan, strukturáltan és hasznosam. Válaszoljon Markdown formátumban szép formázással.', + // Griechisch + el: 'Είστε ένας χρήσιμος βοηθός που αναλύει και επεξεργάζεται κείμενα. Το καθήκον σας είναι να επεξεργάζεστε μεταγραφές συνομιλιών σύμφωνα με τις δοθείσες οδηγίες. Χρησιμοποιείστε ως μέρος ενός συστήματος Blueprint που εφαρμόζει συγκεκριμένες συλλογές προτροπών για δομημένες αναλύσεις. Απαντήστε με ακρίβεια, δομημένα και χρήσιμα. Απαντήστε σε Markdown με όμορφη μορφοποίηση.', + // Hebräisch + he: 'אתה עוזר מועיל שמנתח ומעבד טקסטים. המשימה שלך היא לעבד תמלילים של שיחות בהתאם להוראות הנתונות. אתה משמש כחלק ממערכת Blueprint שמיישמת אוספי הנחיות ספציפיים לניתוחים מובנים. השב בצורה מדויקת, מובנית ומועילה. השב ב-Markdown עם עיצוב יפה.', + // Indonesisch + id: 'Anda adalah asisten yang membantu yang menganalisis dan memproses teks. Tugas Anda adalah memproses transkrip percakapan sesuai dengan instruksi yang diberikan. Anda digunakan sebagai bagian dari sistem Blueprint yang menerapkan koleksi prompt spesifik untuk analisis terstruktur. Jawab dengan tepat, terstruktur, dan membantu. 
Jawab dalam Markdown dengan format yang bagus.', + // Thai + th: 'คุณเป็นผู้ช่วยที่มีประโยชน์ที่วิเคราะห์และประมวลผลข้อความ งานของคุณคือการประมวลผลการถอดความของการสนทนาตามคำแนะนำที่กำหนด คุณถูกใช้เป็นส่วนหนึ่งของระบบ Blueprint ที่ใช้คอลเลกชันพรอมต์เฉพาะสำหรับการวิเคราะห์ที่มีโครงสร้าง ตอบอย่างแม่นยำ มีโครงสร้าง และเป็นประโยชน์ ตอบใน Markdown ด้วยรูปแบบที่สวยงาม', + // Vietnamesisch + vi: 'Bạn là một trợ lý hữu ích phân tích và xử lý văn bản. Nhiệm vụ của bạn là xử lý bản ghi các cuộc hội thoại theo hướng dẫn đã cho. Bạn được sử dụng như một phần của hệ thống Blueprint áp dụng các bộ sưu tập lời nhắc cụ thể cho các phân tích có cấu trúc. Trả lời chính xác, có cấu trúc và hữu ích. Trả lời bằng Markdown với định dạng đẹp.', + // Ukrainisch + uk: 'Ви корисний помічник, який аналізує та обробляє тексти. Ваше завдання - обробляти транскрипції розмов відповідно до наданих інструкцій. Ви використовуєтесь як частина системи Blueprint, яка застосовує специфічні колекції підказок для структурованого аналізу. Відповідайте точно, структуровано та корисно. Відповідайте в Markdown з гарним форматуванням.', + // Rumänisch + ro: 'Sunteți un asistent util care analizează și procesează texte. Sarcina dvs. este să procesați transcrieri ale conversațiilor conform instrucțiunilor date. Sunteți utilizat ca parte a unui sistem Blueprint care aplică colecții specifice de solicitări pentru analize structurate. Răspundeți precis, structurat și util. Răspundeți în Markdown cu o formatare frumoasă.', + // Bulgarisch + bg: 'Вие сте полезен асистент, който анализира и обработва текстове. Вашата задача е да обработвате транскрипции на разговори според дадените инструкции. Вие се използвате като част от Blueprint система, която прилага специфични колекции от подкани за структурирани анализи. Отговаряйте точно, структурирано и полезно. Отговаряйте в Markdown с красиво форматиране.', + // Katalanisch + ca: "Ets un assistent útil que analitza i processa textos. 
La teva tasca és processar transcripcions de converses segons les instruccions donades. Ets utilitzat com a part d'un sistema Blueprint que aplica col·leccions específiques de prompts per a anàlisis estructurades. Respon de forma precisa, estructurada i útil. Respon en Markdown amb un format bonic.", + // Kroatisch + hr: 'Vi ste korisni asistent koji analizira i obrađuje tekstove. Vaš zadatak je obraditi transkripcije razgovora prema danim uputama. Koristite se kao dio Blueprint sustava koji primjenjuje specifične kolekcije upita za strukturirane analize. Odgovorite precizno, strukturirano i korisno. Odgovorite u Markdownu s lijepim formatiranjem.', + // Slowakisch + sk: 'Ste užitočný asistent, ktorý analyzuje a spracováva texty. Vašou úlohou je spracovávať prepisy konverzácií podľa daných pokynov. Používate sa ako súčasť systému Blueprint, ktorý aplikuje špecifické kolekcie výziev pre štruktúrované analýzy. Odpovedajte presne, štruktúrovane a užitočne. Odpovedajte v Markdowne s pekným formátovaním.', + // Estnisch + et: 'Olete kasulik assistent, kes analüüsib ja töötleb tekste. Teie ülesanne on töödelda vestluste transkriptsioone vastavalt antud juhistele. Teid kasutatakse Blueprint-süsteemi osana, mis rakendab struktureeritud analüüside jaoks spetsiifilisi viipade kogumeid. Vastake täpselt, struktureeritult ja kasulikult. Vastake Markdownis ilusa vormindusega.', + // Lettisch + lv: 'Jūs esat noderīgs asistents, kas analizē un apstrādā tekstus. Jūsu uzdevums ir apstrādāt sarunu transkripcijas saskaņā ar dotajiem norādījumiem. Jūs tiekat izmantots kā daļa no Blueprint sistēmas, kas pielieto specifiskas uzvedņu kolekcijas strukturētām analīzēm. Atbildiet precīzi, strukturēti un noderīgi. Atbildiet Markdown formātā ar skaistu formatējumu.', + // Litauisch + lt: 'Jūs esate naudingas asistentas, kuris analizuoja ir apdoroja tekstus. Jūsų užduotis yra apdoroti pokalbių transkriptus pagal pateiktas instrukcijas. 
Jūs naudojatės kaip Blueprint sistemos dalis, kuri taiko specifinius raginimų rinkinius struktūrizuotoms analizėms. Atsakykite tiksliai, struktūrizuotai ir naudingai. Atsakykite Markdown formatu su gražiu formatavimu.', + // Bengalisch + bn: 'আপনি একজন সহায়ক সহকারী যিনি পাঠ্য বিশ্লেষণ এবং প্রক্রিয়া করেন। আপনার কাজ হল প্রদত্ত নির্দেশাবলী অনুসারে কথোপকথনের ট্রান্সক্রিপ্ট প্রক্রিয়া করা। আপনি একটি ব্লুপ্রিন্ট সিস্টেমের অংশ হিসাবে ব্যবহৃত হন যা কাঠামোগত বিশ্লেষণের জন্য নির্দিষ্ট প্রম্পট সংগ্রহ প্রয়োগ করে। সঠিক, কাঠামোবদ্ধ এবং সহায়কভাবে উত্তর দিন। সুন্দর ফরম্যাটিং সহ মার্কডাউনে উত্তর দিন।', + // Malaiisch + ms: 'Anda adalah pembantu berguna yang menganalisis dan memproses teks. Tugas anda adalah untuk memproses transkrip perbualan mengikut arahan yang diberikan. Anda digunakan sebagai sebahagian daripada sistem Blueprint yang menggunakan koleksi prompt khusus untuk analisis berstruktur. Jawab dengan tepat, berstruktur dan membantu. Jawab dalam Markdown dengan format yang cantik.', + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து செயலாக்கும் பயனுள்ள உதவியாளர். கொடுக்கப்பட்ட வழிமுறைகளின்படி உரையாடல்களின் டிரான்ஸ்கிரிப்ட்களை செயலாக்குவது உங்கள் பணி. கட்டமைக்கப்பட்ட பகுப்பாய்வுகளுக்கு குறிப்பிட்ட உத்வேக சேகரிப்புகளைப் பயன்படுத்தும் Blueprint அமைப்பின் ஒரு பகுதியாக நீங்கள் பயன்படுத்தப்படுகிறீர்கள். துல்லியமாகவும், கட்டமைக்கப்பட்டதாகவும், பயனுள்ளதாகவும் பதிலளிக்கவும். அழகான வடிவமைப்புடன் Markdown இல் பதிலளிக்கவும்.', + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించే మరియు ప్రాసెస్ చేసే సహాయక అసిస్టెంట్. ఇచ్చిన సూచనల ప్రకారం సంభాషణ ట్రాన్స్‌క్రిప్ట్‌లను ప్రాసెస్ చేయడం మీ పని. నిర్మాణాత్మక విశ్లేషణల కోసం నిర్దిష్ట ప్రాంప్ట్ సేకరణలను వర్తింపజేసే బ్లూప్రింట్ సిస్టమ్‌లో భాగంగా మీరు ఉపయోగించబడుతున్నారు. ఖచ్చితంగా, నిర్మాణాత్మకంగా మరియు సహాయకరంగా సమాధానం ఇవ్వండి. 
అందమైన ఫార్మాటింగ్‌తో మార్క్‌డౌన్‌లో సమాధానం ఇవ్వండి.', + // Urdu + ur: 'آپ ایک مددگار اسسٹنٹ ہیں جو متن کا تجزیہ اور پروسیسنگ کرتے ہیں۔ آپ کا کام دی گئی ہدایات کے مطابق گفتگو کی ٹرانسکرپٹس کو پروسیس کرنا ہے۔ آپ بلیو پرنٹ سسٹم کے حصے کے طور پر استعمال ہوتے ہیں جو ساختی تجزیات کے لیے مخصوص پرامپٹ کلیکشنز کا اطلاق کرتا ہے۔ درست، منظم اور مددگار طریقے سے جواب دیں۔ خوبصورت فارمیٹنگ کے ساتھ مارک ڈاؤن میں جواب دیں۔', + // Marathi + mr: 'आपण मजकूरांचे विश्लेषण आणि प्रक्रिया करणारे उपयुक्त सहाय्यक आहात. दिलेल्या सूचनांनुसार संभाषणांच्या प्रतिलेखांवर प्रक्रिया करणे हे आपले कार्य आहे. आपण ब्लूप्रिंट सिस्टमचा भाग म्हणून वापरले जाता जे संरचित विश्लेषणांसाठी विशिष्ट प्रॉम्प्ट संग्रह लागू करते. अचूक, संरचित आणि उपयुक्त पद्धतीने उत्तर द्या. सुंदर फॉरमॅटिंगसह मार्कडाउनमध्ये उत्तर द्या.', + // Gujarati + gu: 'તમે એક મદદરૂપ સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને પ્રક્રિયા કરે છે. તમારું કાર્ય આપેલી સૂચનાઓ અનુસાર વાતચીતની ટ્રાન્સક્રિપ્ટ્સ પર પ્રક્રિયા કરવાનું છે. તમે બ્લુપ્રિન્ટ સિસ્ટમના ભાગ તરીકે ઉપયોગમાં લેવાય છો જે માળખાગત વિશ્લેષણ માટે વિશિષ્ટ પ્રોમ્પ્ટ સંગ્રહો લાગુ કરે છે. ચોક્કસ, માળખાગત અને મદદરૂપ રીતે જવાબ આપો. સુંદર ફોર્મેટિંગ સાથે માર્કડાઉનમાં જવાબ આપો.', + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും പ്രോസസ്സ് ചെയ്യുകയും ചെയ്യുന്ന സഹായകരമായ അസിസ്റ്റന്റാണ്. നൽകിയിരിക്കുന്ന നിർദ്ദേശങ്ങൾക്കനുസരിച്ച് സംഭാഷണങ്ങളുടെ ട്രാൻസ്ക്രിപ്റ്റുകൾ പ്രോസസ്സ് ചെയ്യുക എന്നതാണ് നിങ്ങളുടെ ജോലി. ഘടനാപരമായ വിശകലനങ്ങൾക്കായി നിർദ്ദിഷ്ട പ്രോംപ്റ്റ് ശേഖരങ്ങൾ പ്രയോഗിക്കുന്ന ബ്ലൂപ്രിന്റ് സിസ്റ്റത്തിന്റെ ഭാഗമായി നിങ്ങൾ ഉപയോഗിക്കപ്പെടുന്നു. കൃത്യമായും ഘടനാപരമായും സഹായകരമായും ഉത്തരം നൽകുക. മനോഹരമായ ഫോർമാറ്റിംഗോടെ മാർക്ക്ഡൗണിൽ ഉത്തരം നൽകുക.', + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವ ಸಹಾಯಕ ಸಹಾಯಕರು. ನೀಡಿದ ಸೂಚನೆಗಳ ಪ್ರಕಾರ ಸಂಭಾಷಣೆಗಳ ಪ್ರತಿಲಿಪಿಗಳನ್ನು ಪ್ರಕ್ರಿಯೆಗೊಳಿಸುವುದು ನಿಮ್ಮ ಕಾರ್ಯ. ರಚನಾತ್ಮಕ ವಿಶ್ಲೇಷಣೆಗಳಿಗಾಗಿ ನಿರ್ದಿಷ್ಟ ಪ್ರಾಂಪ್ಟ್ ಸಂಗ್ರಹಗಳನ್ನು ಅನ್ವಯಿಸುವ ಬ್ಲೂಪ್ರಿಂಟ್ ವ್ಯವಸ್ಥೆಯ ಭಾಗವಾಗಿ ನೀವು ಬಳಸಲ್ಪಡುತ್ತೀರಿ. ನಿಖರವಾಗಿ, ರಚನಾತ್ಮಕವಾಗಿ ಮತ್ತು ಸಹಾಯಕವಾಗಿ ಉತ್ತರಿಸಿ. 
ಸುಂದರ ಫಾರ್ಮ್ಯಾಟಿಂಗ್‌ನೊಂದಿಗೆ ಮಾರ್ಕ್‌ಡೌನ್‌ನಲ್ಲಿ ಉತ್ತರಿಸಿ.', + // Punjabi + pa: 'ਤੁਸੀਂ ਇੱਕ ਮਦਦਗਾਰ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਪ੍ਰੋਸੈਸਿੰਗ ਕਰਦੇ ਹੋ। ਤੁਹਾਡਾ ਕੰਮ ਦਿੱਤੀਆਂ ਹਦਾਇਤਾਂ ਅਨੁਸਾਰ ਗੱਲਬਾਤ ਦੀਆਂ ਟ੍ਰਾਂਸਕ੍ਰਿਪਟਾਂ ਨੂੰ ਪ੍ਰੋਸੈਸ ਕਰਨਾ ਹੈ। ਤੁਸੀਂ ਇੱਕ ਬਲੂਪ੍ਰਿੰਟ ਸਿਸਟਮ ਦੇ ਹਿੱਸੇ ਵਜੋਂ ਵਰਤੇ ਜਾਂਦੇ ਹੋ ਜੋ ਢਾਂਚਾਗਤ ਵਿਸ਼ਲੇਸ਼ਣਾਂ ਲਈ ਵਿਸ਼ੇਸ਼ ਪ੍ਰੌਂਪਟ ਸੰਗ੍ਰਹਿ ਲਾਗੂ ਕਰਦਾ ਹੈ। ਸਟੀਕ, ਢਾਂਚਾਗਤ ਅਤੇ ਮਦਦਗਾਰ ਤਰੀਕੇ ਨਾਲ ਜਵਾਬ ਦਿਓ। ਸੁੰਦਰ ਫਾਰਮੈਟਿੰਗ ਦੇ ਨਾਲ ਮਾਰਕਡਾਊਨ ਵਿੱਚ ਜਵਾਬ ਦਿਓ।', + // Afrikaans + af: "Jy is 'n nuttige assistent wat tekste analiseer en verwerk. Jou taak is om transkripsies van gesprekke volgens die gegewe instruksies te verwerk. Jy word gebruik as deel van 'n Blueprint-stelsel wat spesifieke prompt-versamelings vir gestruktureerde ontledings toepas. Antwoord presies, gestruktureerd en nuttig. Antwoord in Markdown met 'n mooi formatering.", + // Persisch/Farsi + fa: 'شما یک دستیار مفید هستید که متون را تحلیل و پردازش می‌کند. وظیفه شما پردازش رونوشت مکالمات طبق دستورالعمل‌های داده شده است. شما به عنوان بخشی از سیستم Blueprint استفاده می‌شوید که مجموعه‌های اعلان خاصی را برای تحلیل‌های ساختاریافته اعمال می‌کند. دقیق، ساختارمند و مفید پاسخ دهید. با قالب‌بندی زیبا در Markdown پاسخ دهید.', + // Georgisch + ka: 'თქვენ ხართ სასარგებლო ასისტენტი, რომელიც აანალიზებს და ამუშავებს ტექსტებს. თქვენი ამოცანაა საუბრების ტრანსკრიპტების დამუშავება მოცემული ინსტრუქციების შესაბამისად. თქვენ გამოიყენებით როგორც Blueprint სისტემის ნაწილი, რომელიც იყენებს სპეციფიკურ მოთხოვნების კოლექციებს სტრუქტურირებული ანალიზებისთვის. უპასუხეთ ზუსტად, სტრუქტურირებულად და სასარგებლოდ. უპასუხეთ Markdown-ში ლამაზი ფორმატირებით.', + // Isländisch + is: 'Þú ert hjálplegur aðstoðarmaður sem greinir og vinnur úr textum. Verkefni þitt er að vinna úr afritum af samtölum samkvæmt gefnum leiðbeiningum. Þú ert notaður sem hluti af Blueprint kerfi sem beitir sérstökum hvatasöfnum fyrir skipulagðar greiningar. Svaraðu nákvæmlega, skipulega og hjálplega. 
Svaraðu í Markdown með fallegu sniði.', + // Albanian + sq: 'Ju jeni një asistent i dobishëm që analizon dhe përpunon tekste. Detyra juaj është të përpunoni transkriptimet e bisedave sipas udhëzimeve të dhëna. Ju përdoreni si pjesë e një sistemi Blueprint që aplikon koleksione specifike të kërkesave për analiza të strukturuara. Përgjigjuni saktë, të strukturuar dhe të dobishëm. Përgjigjuni në Markdown me një formatim të bukur.', + // Azerbaijani + az: 'Siz mətnləri təhlil edən və emal edən faydalı köməkçisiniz. Sizin vəzifəniz verilmiş təlimatlara uyğun olaraq söhbətlərin transkriptlərini emal etməkdir. Siz strukturlaşdırılmış təhlillər üçün xüsusi sorğu kolleksiyalarını tətbiq edən Blueprint sisteminin bir hissəsi kimi istifadə olunursunuz. Dəqiq, strukturlaşdırılmış və faydalı cavab verin. Gözəl formatlaşdırma ilə Markdown-da cavab verin.', + // Basque + eu: 'Testuak aztertzen eta prozesatzen dituen laguntzaile erabilgarria zara. Zure zeregina elkarrizketen transkripzioak emandako argibideen arabera prozesatzea da. Blueprint sistema baten zati gisa erabiltzen zara, analisi egituratuetarako gonbidapen bilduma espezifikoak aplikatzen dituena. Erantzun zehatz, egituratuta eta lagungarri. Erantzun Markdown-en formatu eder batekin.', + // Galician + gl: 'Es un asistente útil que analiza e procesa textos. A túa tarefa é procesar transcricións de conversas segundo as instrucións dadas. Utilizaste como parte dun sistema Blueprint que aplica coleccións específicas de prompts para análises estruturadas. Responde de forma precisa, estruturada e útil. Responde en Markdown cun formato bonito.', + // Kazakh + kk: 'Сіз мәтіндерді талдайтын және өңдейтін пайдалы көмекшісіз. Сіздің міндетіңіз - берілген нұсқауларға сәйкес әңгімелердің транскрипттерін өңдеу. Сіз құрылымдық талдаулар үшін арнайы сұрау жинақтарын қолданатын Blueprint жүйесінің бөлігі ретінде пайдаланыласыз. Дәл, құрылымды және пайдалы жауап беріңіз. 
Әдемі пішімдеумен Markdown-да жауап беріңіз.', + // Macedonian + mk: 'Вие сте корисен асистент кој анализира и обработува текстови. Вашата задача е да обработувате транскрипти на разговори според дадените упатства. Вие се користите како дел од Blueprint систем кој применува специфични колекции на покани за структурирани анализи. Одговорете прецизно, структурирано и корисно. Одговорете во Markdown со убаво форматирање.', + // Serbian + sr: 'Ви сте корисни асистент који анализира и обрађује текстове. Ваш задатак је да обрађујете транскрипте разговора према датим упутствима. Користите се као део Blueprint система који примењује специфичне колекције упита за структуриране анализе. Одговорите прецизно, структурирано и корисно. Одговорите у Markdown-у са лепим форматирањем.', + // Slovenian + sl: 'Ste koristen pomočnik, ki analizira in obdeluje besedila. Vaša naloga je obdelava prepisov pogovorov v skladu z danimi navodili. Uporabljate se kot del sistema Blueprint, ki uporablja specifične zbirke pozivov za strukturirane analize. Odgovorite natančno, strukturirano in koristno. Odgovorite v Markdownu z lepim oblikovanjem.', + // Maltese + mt: "Int assistent utli li janalizza u jipproċessa testi. Il-kompitu tiegħek huwa li tipproċessa traskrizzjonijiet ta' konversazzjonijiet skont l-istruzzjonijiet mogħtija. Int użat bħala parti minn sistema Blueprint li tapplika kollezzjonijiet speċifiċi ta' prompt għal analiżi strutturati. Wieġeb b'mod preċiż, strutturat u utli. 
Wieġeb f'Markdown b'format sabiħ.", + // Armenian + hy: 'Դուք օգտակար օգնական եք, որը վերլուծում և մշակում է տեքստեր: Ձեր խնդիրն է մշակել զրույցների արձանագրությունները տրված հրահանգների համաձայն: Դուք օգտագործվում եք որպես Blueprint համակարգի մաս, որը կիրառում է հատուկ հուշումների հավաքածուներ կառուցվածքային վերլուծությունների համար: Պատասխանեք ճշգրիտ, կառուցվածքային և օգտակար: Պատասխանեք Markdown-ում գեղեցիկ ձևաչափով:', + // Uzbek + uz: "Siz matnlarni tahlil qiluvchi va qayta ishlovchi foydali yordamchisiz. Sizning vazifangiz berilgan ko'rsatmalarga muvofiq suhbatlar transkriptlarini qayta ishlashdir. Siz tuzilgan tahlillar uchun maxsus so'rovlar to'plamlarini qo'llaydigan Blueprint tizimining bir qismi sifatida foydalanilasiz. Aniq, tuzilgan va foydali javob bering. Chiroyli formatlash bilan Markdown da javob bering.", + // Irish + ga: 'Is cúntóir cabhrach thú a dhéanann anailís agus próiseáil ar théacsanna. Is é do thasc trascríbhinní comhráite a phróiseáil de réir na dtreoracha tugtha. Úsáidtear thú mar chuid de chóras Blueprint a chuireann bailiúcháin leid shonracha i bhfeidhm le haghaidh anailísí struchtúrtha. Freagair go beacht, struchtúrtha agus cabhrach. Freagair i Markdown le formáidiú álainn.', + // Welsh + cy: "Rydych chi'n gynorthwyydd defnyddiol sy'n dadansoddi a phrosesu testunau. Eich tasg yw prosesu trawsgrifiadau o sgyrsiau yn unol â'r cyfarwyddiadau a roddwyd. Rydych yn cael eich defnyddio fel rhan o system Blueprint sy'n defnyddio casgliadau ysgogiad penodol ar gyfer dadansoddiadau strwythuredig. Atebwch yn fanwl gywir, yn strwythuredig ac yn ddefnyddiol. Atebwch yn Markdown gyda fformat hardd.", + // Filipino + fil: 'Ikaw ay isang kapaki-pakinabang na katulong na nag-aanalisa at nagpoproseso ng mga teksto. Ang iyong gawain ay magproseso ng mga transkripsyon ng mga pag-uusap ayon sa mga ibinigay na tagubilin. 
Ginagamit ka bilang bahagi ng isang Blueprint system na naglalapat ng mga partikular na koleksyon ng prompt para sa mga nakabalangkas na pagsusuri. Tumugon nang tumpak, nakabalangkas, at nakakatulong. Tumugon sa Markdown na may magandang format.', + }, +}; +/** + * Helper function for retrieving the system prompt for a given language + * @param language Language (e.g. 'de', 'en', 'fr') + * @returns System prompt for the given language, or a fallback + */ export function getSystemPrompt(language) { + const lang = language.toLowerCase().split('-')[0]; // e.g. 'de-DE' -> 'de' + // Try the specific language, then German, then English, then the first one available + return ( + SYSTEM_PROMPTS.system[lang] || + SYSTEM_PROMPTS.system['de'] || + SYSTEM_PROMPTS.system['en'] || + Object.values(SYSTEM_PROMPTS.system)[0] || + 'You are a helpful assistant.' + ); +} diff --git a/apps/memoro/apps/backend/supabase/functions/blueprint/index.ts b/apps/memoro/apps/backend/supabase/functions/blueprint/index.ts new file mode 100644 index 000000000..dc23f63ab --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/blueprint/index.ts @@ -0,0 +1,679 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. 
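The resolution order of `getSystemPrompt` can be sketched in isolation. The two-entry `prompts` table below is a stand-in for the full `SYSTEM_PROMPTS.system` map (and `resolveSystemPrompt` is a hypothetical name for this sketch, not an export of `constants.ts`):

```typescript
// Stand-in prompts table; the real SYSTEM_PROMPTS.system map holds the
// full localized strings defined in constants.ts.
const prompts: Record<string, string> = {
  de: 'You answer in German.',
  en: 'You answer in English.',
};

// Mirrors getSystemPrompt's fallback chain: base language from the locale
// tag, then 'de', then 'en', then the first available entry, then a default.
function resolveSystemPrompt(language: string): string {
  const lang = language.toLowerCase().split('-')[0]; // 'de-DE' -> 'de'
  return (
    prompts[lang] ||
    prompts['de'] ||
    prompts['en'] ||
    Object.values(prompts)[0] ||
    'You are a helpful assistant.'
  );
}

console.log(resolveSystemPrompt('en-GB')); // base-language match -> 'en' entry
console.log(resolveSystemPrompt('fr-FR')); // no 'fr' entry -> falls back to 'de'
```

Note that German is deliberately tried before English, so a caller with an unsupported locale receives the German prompt whenever the map contains one.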
+// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { getSystemPrompt } from './constants.ts'; +import { getTranscriptText } from '../_shared/transcript-utils.ts'; +import { ROOT_SYSTEM_PROMPTS } from '../_shared/system-prompt.ts'; +// Atomic status update utilities using RPC to prevent race conditions +async function setMemoErrorStatus(supabaseClient, memoId, processName, error) { + if (!memoId) return; + const errorMessage = error instanceof Error ? error.message : String(error); + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_error', { + p_memo_id: memoId, + p_process_name: processName, + p_timestamp: timestamp, + p_reason: errorMessage, + p_details: null, + }); + } catch (dbError) { + console.error(`Error setting error status for memo ${memoId}:`, dbError); + } +} +async function setMemoProcessingStatus(supabaseClient, memoId, processName) { + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_status', { + p_memo_id: memoId, + p_process_name: processName, + p_status: 'processing', + p_timestamp: timestamp, + }); + } catch (dbError) { + console.error(`Error setting processing status for memo ${memoId}:`, dbError); + } +} +async function setMemoCompletedStatus(supabaseClient, memoId, processName, details) { + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_status_with_details', { + p_memo_id: memoId, + p_process_name: processName, + p_status: 'completed', + p_timestamp: timestamp, + p_details: details, + }); + } catch (dbError) { + console.error(`Error setting completed status for memo ${memoId}:`, dbError); + } +} +function createErrorResponse(error, status = 500, corsHeaders = {}) { + const errorMessage = 
error instanceof Error ? error.message : String(error); + return new Response( + JSON.stringify({ + error: errorMessage, + timestamp: new Date().toISOString(), + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status, + } + ); +} +/** + * Blueprint Edge Function + * + * This function is triggered like the headline function and processes a given blueprint, + * whose prompts are sent to the LLM together with the transcript. The responses are stored + * as new memory entries that reference the original memo. + * + * @version 1.2.0 + * @date 2025-05-19 + */ // ─── Environment variables ────────────────────────────────────────── +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini configuration +const GEMINI_API_KEY = Deno.env.get('CREATE_BLUEPRINT_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI configuration (backup) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Logging function ────────────────────────────────────────────── +/** + * Extended logging function with timestamp and log level + */ function log(level, message, data) { + const timestamp = new Date().toISOString(); + const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`; + switch (level.toUpperCase()) { + case 'INFO': + 
console.log(logMessage); + break; + case 'DEBUG': + console.debug(logMessage); + break; + case 'WARN': + console.warn(logMessage); + break; + case 'ERROR': + console.error(logMessage); + break; + default: + console.log(logMessage); + break; + } + if (data) { + if (level.toUpperCase() === 'ERROR') { + console.error(data); + } else { + // For DEBUG, log data more verbosely if needed, otherwise keep it simple + console.log(typeof data === 'object' ? JSON.stringify(data, null, 2) : data); + } + } +} +/** + * Sendet Prompt an Gemini Flash und gibt die Antwort zurück + */ async function runPromptWithGemini( + prompt, + transcript, + language = 'de', + functionIdForLog = 'global' +) { + // Always use the default system prompt from ROOT_SYSTEM_PROMPTS + const systemPrompt = getSystemPrompt(language); + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][GEMINI-${requestId}] Starte Gemini-Anfrage.`); + try { + let fullPrompt; + if (prompt.includes('{transcript}')) { + fullPrompt = prompt.replace('{transcript}', transcript); + log('DEBUG', `[${functionIdForLog}][GEMINI-${requestId}] Platzhalter im Prompt ersetzt.`); + } else { + fullPrompt = `${prompt}\n\nText: ${transcript}`; + log( + 'DEBUG', + `[${functionIdForLog}][GEMINI-${requestId}] Kein Platzhalter, Transkript am Ende angehängt.` + ); + } + log( + 'DEBUG', + `[${functionIdForLog}][GEMINI-${requestId}] System-Prompt (${language}): ${systemPrompt}` + ); + log( + 'DEBUG', + `[${functionIdForLog}][GEMINI-${requestId}] User-Prompt (erste 200 Zeichen): ${fullPrompt.substring(0, 200)}...` + ); + const startTime = Date.now(); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + role: 'user', + parts: [ + { + text: fullPrompt, + }, + ], + }, + ], + systemInstruction: { + parts: [ + { + text: systemPrompt, + }, 
+ ], + }, + generationConfig: { + temperature: 0.7, + maxOutputTokens: 8192, + }, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][GEMINI-${requestId}] Gemini Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][GEMINI-${requestId}] Gemini API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][GEMINI-${requestId}] Erfolgreiche Gemini-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log('ERROR', `[${functionIdForLog}][GEMINI-${requestId}] Fehler beim Gemini-Request:`, error); + throw error; + } +} +/** + * Sendet Prompt an Azure OpenAI und gibt die Antwort zurück (Fallback) + */ async function runPromptWithAzure( + prompt, + transcript, + language = 'de', + functionIdForLog = 'global' +) { + // Always use the default system prompt from ROOT_SYSTEM_PROMPTS + const systemPrompt = getSystemPrompt(language); + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][AZURE-${requestId}] Starte Azure OpenAI-Anfrage.`); + try { + let fullPrompt; + if (prompt.includes('{transcript}')) { + fullPrompt = prompt.replace('{transcript}', transcript); + log('DEBUG', `[${functionIdForLog}][AZURE-${requestId}] Platzhalter im Prompt ersetzt.`); + } else { + fullPrompt = `${prompt}\n\nText: ${transcript}`; + log( + 'DEBUG', + `[${functionIdForLog}][AZURE-${requestId}] Kein Platzhalter, Transkript am Ende angehängt.` + ); + } + log( + 'DEBUG', + `[${functionIdForLog}][AZURE-${requestId}] System-Prompt (${language}): ${systemPrompt}` + ); + log( + 'DEBUG', + `[${functionIdForLog}][AZURE-${requestId}] User-Prompt (erste 
200 Zeichen): ${fullPrompt.substring(0, 200)}...` + ); + const startTime = Date.now(); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'system', + content: systemPrompt, + }, + { + role: 'user', + content: fullPrompt, + }, + ], + max_tokens: 8192, + temperature: 0.7, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices[0]?.message?.content?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][AZURE-${requestId}] Erfolgreiche Azure OpenAI-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log( + 'ERROR', + `[${functionIdForLog}][AZURE-${requestId}] Fehler beim Azure OpenAI-Request:`, + error + ); + throw error; + } +} +/** + * Hauptfunktion zur Prompt-Verarbeitung mit Fallback-Logik + */ async function runPromptWithTranscript( + prompt, + transcript, + language = 'de', + functionIdForLog = 'global' +) { + try { + // Zuerst mit Gemini versuchen + return await runPromptWithGemini(prompt, transcript, language, functionIdForLog); + } catch (error) { + log('WARN', `[${functionIdForLog}] Gemini fehlgeschlagen, fallback auf Azure OpenAI`, error); + try { + // Fallback auf Azure OpenAI + return await runPromptWithAzure(prompt, transcript, language, functionIdForLog); + } catch 
(azureError) { + log('ERROR', `[${functionIdForLog}] Beide LLM-Services fehlgeschlagen`, azureError); + return ''; // Return empty string to maintain compatibility + } + } +} +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Blueprint-Funktion gestartet`); + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] CORS Preflight-Anfrage bearbeitet`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + let memo_id_to_update = null; + try { + const requestData = await req.json(); + const { memo_id, primary_language } = requestData; // Erhalte primary_language + memo_id_to_update = memo_id; + log( + 'INFO', + `[${functionId}] Anfrage erhalten für memo_id: ${memo_id}, primäre Sprache: ${primary_language || 'nicht angegeben'}` + ); + if (!memo_id) { + log('ERROR', `[${functionId}] Keine memo_id in der Anfrage gefunden`); + return createErrorResponse('memo_id ist erforderlich', 400, corsHeaders); + } + // Set processing status + await setMemoProcessingStatus(memoro_sb, memo_id, 'blueprint'); + log('INFO', `[${functionId}] Rufe Memo mit ID ${memo_id} aus der Datenbank ab`); + const { data: memo, error: memoError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (memoError || !memo) { + log('ERROR', `[${functionId}] Memo ${memo_id} nicht gefunden:`, memoError); + await setMemoErrorStatus(memoro_sb, memo_id, 'blueprint', 'Memo nicht gefunden'); + return createErrorResponse('Memo nicht gefunden', 404, corsHeaders); + } + const blueprintId = memo.metadata?.blueprint_id || memo.metadata?.blueprintId; + log('INFO', `[${functionId}] Blueprint-ID aus Memo-Metadaten: ${blueprintId}`); + if (!blueprintId) { + log('ERROR', `[${functionId}] Keine Blueprint-ID in 
Memo-Metadaten ${memo_id} gefunden`); + log( + 'DEBUG', + `[${functionId}] Verfügbare Metadaten-Schlüssel: ${Object.keys(memo.metadata || {}).join(', ')}` + ); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'blueprint', + 'Kein Blueprint im Memo-Metadaten gefunden' + ); + return createErrorResponse('Kein Blueprint im Memo-Metadaten gefunden', 400, corsHeaders); + } + log('INFO', `[${functionId}] Lade Blueprint mit ID ${blueprintId}`); + const { data: blueprint, error: blueprintError } = await memoro_sb + .from('blueprints') + .select('*') + .eq('id', blueprintId) + .single(); + if (blueprintError || !blueprint) { + log('ERROR', `[${functionId}] Blueprint ${blueprintId} nicht gefunden:`, blueprintError); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'blueprint', + `Blueprint ${blueprintId} nicht gefunden` + ); + return createErrorResponse('Blueprint nicht gefunden', 404, corsHeaders); + } + log('INFO', `[${functionId}] Lade Prompt-Links für Blueprint ${blueprintId}`); + const { data: promptLinks, error: promptLinksError } = await memoro_sb + .from('prompt_blueprints') + .select('prompt_id, sort_order') + .eq('blueprint_id', blueprintId); + if (promptLinksError || !Array.isArray(promptLinks) || promptLinks.length === 0) { + log( + 'ERROR', + `[${functionId}] Keine Prompt-Links für Blueprint ${blueprintId} gefunden:`, + promptLinksError + ); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'blueprint', + 'Keine Prompts für diesen Blueprint gefunden' + ); + return createErrorResponse('Keine Prompts für diesen Blueprint gefunden', 404, corsHeaders); + } + const promptIds = promptLinks.map((l) => l.prompt_id); + // Create a map of prompt_id to sort_order for later use + const promptSortOrderMap = new Map(promptLinks.map((l) => [l.prompt_id, l.sort_order])); + log('INFO', `[${functionId}] Gefundene Prompt-IDs: ${promptIds.join(', ')}`); + // Transcript extrahieren (from utterances or legacy fields) + const transcript = getTranscriptText(memo); + log( + 
'INFO', + `[${functionId}] Extrahiertes Transkript (Länge: ${transcript.length}, erste 100 Zeichen): ${transcript.substring(0, 100)}...` + ); + if (!transcript) { + log('ERROR', `[${functionId}] Kein Transkript im Memo ${memo_id} gefunden`); + await setMemoErrorStatus(memoro_sb, memo_id, 'blueprint', 'Kein Transkript im Memo gefunden'); + return createErrorResponse('Kein Transkript im Memo gefunden', 400, corsHeaders); + } + log('INFO', `[${functionId}] Lade Prompts (${promptIds.length}) aus der Datenbank`); + const { data: prompts, error: promptsError } = await memoro_sb + .from('prompts') + .select('*') + .in('id', promptIds); + if (promptsError || !Array.isArray(prompts) || prompts.length === 0) { + log( + 'ERROR', + `[${functionId}] Prompts (${promptIds.join(', ')}) nicht gefunden:`, + promptsError + ); + await setMemoErrorStatus(memoro_sb, memo_id, 'blueprint', 'Prompts nicht gefunden'); + return createErrorResponse('Prompts nicht gefunden', 404, corsHeaders); + } + log('INFO', `[${functionId}] ${prompts.length} Prompts gefunden, beginne mit der Verarbeitung`); + const results = []; + // Basis-Sprache aus primary_language (z.B. "en-GB" -> "en") ermitteln + let baseMemoLang = null; + if (primary_language && typeof primary_language === 'string') { + baseMemoLang = primary_language.split('-')[0].toLowerCase(); // Sicherstellen, dass es Kleinbuchstaben sind + log( + 'DEBUG', + `[${functionId}] Ermittelte Basis-Sprache: ${baseMemoLang} (aus ${primary_language})` + ); + } else { + baseMemoLang = 'en'; + log( + 'WARN', + `[${functionId}] Keine primäre Sprache vom Trigger übergeben oder ungültig. 
Nutze Standard-Fallbacks.` + ); + } + const defaultPreferredLang = 'de'; // Standard bevorzugte Sprache + const defaultFallbackLang = 'en'; // Standard Ausweichsprache + for (const prompt of prompts) { + const promptId = prompt.id; + log('INFO', `[${functionId}] Verarbeite Prompt mit ID ${promptId}`); + let promptText = ''; + if (prompt.prompt_text && typeof prompt.prompt_text === 'object') { + promptText = + (baseMemoLang && prompt.prompt_text[baseMemoLang]) || // 1. Memo-Primärsprache (Basis) + prompt.prompt_text[defaultPreferredLang] || // 2. Standard 'de' + prompt.prompt_text[defaultFallbackLang] || // 3. Standard 'en' + Object.values(prompt.prompt_text)[0] || // 4. Erster verfügbarer Wert + ''; // 5. Leerstring + log( + 'DEBUG', + `[${functionId}] Gewählter Prompt-Text für Prompt ${promptId} (Sprache: ${baseMemoLang || defaultPreferredLang}): ${promptText.substring(0, 50)}...` + ); + } else { + log( + 'WARN', + `[${functionId}] Kein gültiges prompt_text Objekt für Prompt ${promptId} gefunden.` + ); + } + // Blueprint's system_prompt should be used as a pre-prompt (prepended to promptText) + let blueprintPrePrompt = ''; + if (blueprint.system_prompt && typeof blueprint.system_prompt === 'object') { + // Try to get system_prompt for the current language + blueprintPrePrompt = + (baseMemoLang && blueprint.system_prompt[baseMemoLang]) || + blueprint.system_prompt[defaultPreferredLang] || + blueprint.system_prompt[defaultFallbackLang] || + Object.values(blueprint.system_prompt)[0] || + ''; + if (blueprintPrePrompt) { + log( + 'DEBUG', + `[${functionId}] Verwende Blueprint-spezifischen Pre-Prompt für Sprache ${baseMemoLang}` + ); + } + } + // If no blueprint pre-prompt, use ROOT_SYSTEM_PROMPTS.PRE_PROMPT as fallback + if (!blueprintPrePrompt) { + blueprintPrePrompt = + ROOT_SYSTEM_PROMPTS.PRE_PROMPT[baseMemoLang] || + ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de'] || + ''; + if (blueprintPrePrompt) { + log('DEBUG', `[${functionId}] Verwende Standard Pre-Prompt für Sprache 
${baseMemoLang}`); + } + } + // Prepend the pre-prompt to the promptText if available + if (blueprintPrePrompt && promptText) { + promptText = blueprintPrePrompt + '\n\n' + promptText; + } + let memoryTitle = ''; + if (prompt.memory_title && typeof prompt.memory_title === 'object') { + memoryTitle = + (baseMemoLang && prompt.memory_title[baseMemoLang]) || // 1. Memo-Primärsprache (Basis) + prompt.memory_title[defaultPreferredLang] || // 2. Standard 'de' + prompt.memory_title[defaultFallbackLang] || // 3. Standard 'en' + Object.values(prompt.memory_title)[0] || // 4. Erster verfügbarer Wert + ''; // 5. Leerstring + log( + 'DEBUG', + `[${functionId}] Gewählter Memory-Titel für Prompt ${promptId} (Sprache: ${baseMemoLang || defaultPreferredLang}): ${memoryTitle}` + ); + } else { + log( + 'WARN', + `[${functionId}] Kein gültiges memory_title Objekt für Prompt ${promptId} gefunden.` + ); + } + if (!promptText) { + log( + 'WARN', + `[${functionId}] Kein Prompt-Text für Prompt ${promptId} nach Sprachauswahl. 
Überspringe.` + ); + results.push({ + prompt_id: promptId, + error: 'Kein Prompt-Text verfügbar in passender Sprache', + }); + continue; + } + log( + 'INFO', + `[${functionId}] Sende Prompt "${memoryTitle || 'Ohne Titel'}" (ID: ${promptId}) an LLM mit Sprache: ${baseMemoLang}` + ); + log( + 'DEBUG', + `[${functionId}] Prompt-Text (erste 150 Zeichen): ${promptText.substring(0, 150)}...` + ); + // Use default system prompt (from ROOT_SYSTEM_PROMPTS via getSystemPrompt) + const answer = await runPromptWithTranscript( + promptText, + transcript, + baseMemoLang, + functionId + ); + if (!answer) { + log('WARN', `[${functionId}] Keine Antwort vom LLM für Prompt ${promptId} erhalten`); + results.push({ + prompt_id: promptId, + error: 'Keine Antwort vom LLM erhalten', + }); + continue; + } + log( + 'INFO', + `[${functionId}] Erstelle neues Memory für Memo ${memo_id} mit Titel "${memoryTitle || 'Blueprint-Antwort'}"` + ); + const { data: newMemory, error: newMemoryError } = await memoro_sb + .from('memories') + .insert({ + memo_id: memo_id, + title: memoryTitle || 'Blueprint-Antwort', + content: answer, + media: null, + sort_order: promptSortOrderMap.get(promptId) || null, + metadata: { + type: 'blueprint', + blueprint_id: blueprintId, + prompt_id: promptId, + created_by: 'blueprint_function', + }, + }) + .select() + .single(); + if (newMemoryError) { + log( + 'ERROR', + `[${functionId}] Fehler beim Erstellen des Memories für Prompt ${promptId}:`, + newMemoryError + ); + results.push({ + prompt_id: promptId, + error: newMemoryError.message, + }); + } else { + log( + 'INFO', + `[${functionId}] Memory erfolgreich erstellt mit ID ${newMemory.id} für Prompt ${promptId}` + ); + results.push({ + prompt_id: promptId, + memory_id: newMemory.id, + }); + } + } + // Set completed status + await setMemoCompletedStatus(memoro_sb, memo_id, 'blueprint', { + results_count: results.length, + prompt_count: prompts.length, + blueprint_id: blueprintId, + }); + // Send broadcast update to notify 
clients about the blueprint completion + try { + const channel = memoro_sb.channel(`memo-updates-${memo_id}`); + // Subscribe first to ensure the channel is ready + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + type: 'memo-updated', + memoId: memo_id, + changes: { + metadata: memo.metadata, + updated_at: new Date().toISOString(), + }, + source: 'blueprint-edge-function', + }, + }); + log('INFO', `[${functionId}] Broadcast sent for memo ${memo_id} blueprint completion`); + // Clean up the channel after sending + memoro_sb.removeChannel(channel); + } + }); + } catch (broadcastError) { + log('WARN', `[${functionId}] Failed to send broadcast update:`, broadcastError); + // Don't fail the function if broadcast fails + } + log( + 'INFO', + `[${functionId}] Blueprint-Verarbeitung erfolgreich abgeschlossen mit ${results.length} Ergebnissen für Memo ${memo_id}` + ); + return new Response( + JSON.stringify({ + success: true, + results, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unerwarteter Fehler bei der Blueprint-Verarbeitung:`, error); + // Set error status in database + const errorToLog = error instanceof Error ? 
error : new Error(String(error)); + await setMemoErrorStatus(memoro_sb, memo_id_to_update, 'blueprint', errorToLog); + // Return error response + return createErrorResponse(`Unerwarteter Fehler: ${errorToLog.message}`, 500, corsHeaders); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/combine-memos/index.ts b/apps/memoro/apps/backend/supabase/functions/combine-memos/index.ts new file mode 100644 index 000000000..09d9efdae --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/combine-memos/index.ts @@ -0,0 +1,715 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. +// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +/** + * Combine Memos Edge Function + * + * This function combines multiple memos into a new memo and processes + * the combined transcript with a selected blueprint. + * + * Workflow: + * 1. Receives an array of memo IDs, a blueprint ID, and an optional custom prompt + * 2. Loads all specified memos from the database + * 3. Combines the transcripts into a single text + * 4. Creates a new memo with the combined content + * 5. Processes the combined memo with the given blueprint via Gemini Flash 2.5 + * 6. Creates new memory entries with the blueprint results + * + * @version 1.0.0 + * @date 2025-05-23 + */ // ─── Environment variables ────────────────────────────────────────── +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini configuration (Flash 2.5) +const GEMINI_API_KEY = Deno.env.get('COMBINE_MEMOS_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI configuration (backup) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Helper functions ────────────────────────────────────────────── +/** + * Extracts transcript data from the various memo formats + * Supports both the old (plain text) and the new (utterances) format + */ function extractTranscriptFromMemo(memo) { + const result = { + text: '', + utterances: undefined, + speakers: undefined, + speakerMap: undefined, + }; + // Check source and metadata + const source = memo.source || {}; + const metadata = memo.metadata || {}; + // 1. Check for utterances (new format with speaker diarization) + const utterances = source.utterances || metadata.utterances; + if (utterances && Array.isArray(utterances) && utterances.length > 0) { + // Convert utterances to text with speaker information + result.utterances = utterances; + result.text = utterances + .map((u) => { + const speaker = u.speakerId ? `[${u.speakerId}] ` : ''; + return `${speaker}${u.text}`; + }) + .join('\n'); + // Carry over the speakers mapping if present + if (source.speakers) { + result.speakers = source.speakers; + } + return result; + } + // 2. Check for speakerMap (alternative format) + if (source.speakerMap && Object.keys(source.speakerMap).length > 0) { + result.speakerMap = source.speakerMap; + // Convert speakerMap to chronological text + const allUtterances = []; + Object.entries(source.speakerMap).forEach(([speakerId, utterances]) => { + if (Array.isArray(utterances)) { + utterances.forEach((u) => { + allUtterances.push({ + ...u, + speakerId: speakerId, + }); + }); + } + }); + // Sort by offset if present + allUtterances.sort((a, b) => (a.offset || 0) - (b.offset || 0)); + result.utterances = allUtterances; + result.text = allUtterances.map((u) => `[${u.speakerId}] ${u.text}`).join('\n'); + return result; + } + // 3. Check for the old format (plain text) + const simpleTranscript = + source.transcript || source.transcription || source.text || metadata.transcript || ''; + if (simpleTranscript) { + result.text = simpleTranscript; + return result; + } + // 4. Fallback: transcription_parts (older array format) + if (source.transcription_parts && Array.isArray(source.transcription_parts)) { + result.text = source.transcription_parts + .map((part) => part.text || part.transcript || '') + .filter(Boolean) + .join(' '); + return result; + } + // 5. Special handling for combined memos with additional_recordings + if (source.additional_recordings && Array.isArray(source.additional_recordings)) { + const allUtterances = []; + const allSpeakers = {}; + const texts = []; + source.additional_recordings.forEach((recording) => { + // Collect utterances from the recordings + if (recording.utterances && Array.isArray(recording.utterances)) { + allUtterances.push(...recording.utterances); + } + // Collect speaker mappings + if (recording.speakers) { + Object.assign(allSpeakers, recording.speakers); + } + // Also collect plain text as a fallback + if (recording.transcript) { + texts.push(recording.transcript); + } + }); + // Prefer structured utterances if we have them + if (allUtterances.length > 0) { + // Sort utterances chronologically + allUtterances.sort((a, b) => (a.offset || 0) - (b.offset || 0)); + result.utterances = allUtterances; + result.speakers = allSpeakers; + result.text = allUtterances + .map((u) => { + const speaker = u.speakerId ? `[${u.speakerId}] ` : ''; + return `${speaker}${u.text}`; + }) + .join('\n'); + return result; + } else { + // Fall back to plain text + result.text = texts.join('\n\n'); + } + } + return result; +} +/** + * Formats a transcript for LLM processing with improved speaker rendering + */ function formatTranscriptForLLM(title, date, extractedTranscript, speakers) { + const header = `=== ${title} (${date}) ===\n`; + // If no speaker information is present, return plain text + if (!extractedTranscript.utterances || extractedTranscript.utterances.length === 0) { + return header + extractedTranscript.text + '\n\n'; + } + // Format with speaker information and timestamps + let formattedText = header; + extractedTranscript.utterances.forEach((utterance) => { + const timestamp = utterance.offset ? 
` [${formatTimestamp(utterance.offset)}]` : ''; + const speakerId = utterance.speakerId || 'Unknown'; + // Verwende Speaker-Label falls vorhanden, sonst formatiere Speaker-ID + let speakerLabel = speakerId; + if (speakers && speakers[speakerId]) { + speakerLabel = speakers[speakerId]; + } else if (speakerId.startsWith('speaker')) { + // Konvertiere "speaker1" zu "Sprecher 1" + const speakerNum = speakerId.replace('speaker', ''); + speakerLabel = `Sprecher ${speakerNum}`; + } + formattedText += `${speakerLabel}${timestamp}: ${utterance.text}\n`; + }); + return formattedText + '\n'; +} +/** + * Formatiert Millisekunden in lesbares Zeitformat (MM:SS) + */ function formatTimestamp(ms) { + const totalSeconds = Math.floor(ms / 1000); + const minutes = Math.floor(totalSeconds / 60); + const seconds = totalSeconds % 60; + return `${minutes.toString().padStart(2, '0')}:${seconds.toString().padStart(2, '0')}`; +} +// ─── Logging-Funktion ────────────────────────────────────────────── +/** + * Erweiterte Logging-Funktion mit Zeitstempel und Log-Level + */ function log(level, message, data) { + const timestamp = new Date().toISOString(); + const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`; + switch (level.toUpperCase()) { + case 'INFO': + console.log(logMessage); + break; + case 'DEBUG': + console.debug(logMessage); + break; + case 'WARN': + console.warn(logMessage); + break; + case 'ERROR': + console.error(logMessage); + break; + default: + console.log(logMessage); + break; + } + if (data) { + if (level.toUpperCase() === 'ERROR') { + console.error(data); + } else { + console.log(typeof data === 'object' ? 
JSON.stringify(data, null, 2) : data); + } + } +} +/** + * Dekodiert ein JWT-Token und extrahiert die Payload + */ function decodeJWT(token) { + try { + const parts = token.split('.'); + if (parts.length !== 3) { + console.error('Invalid JWT: Incorrect number of parts'); + return null; + } + const payload = parts[1]; + const padded = payload.padEnd(payload.length + ((4 - (payload.length % 4)) % 4), '='); + const decoded = atob(padded); + const parsed = JSON.parse(decoded); + return parsed; + } catch (error) { + console.error('Fehler beim Dekodieren des JWT:', error); + return null; + } +} +/** + * Verarbeitet Blueprint-Prompts mit Gemini 2.0 Flash + */ async function processWithGemini(transcript, prompt, functionIdForLog = 'combine') { + const requestId = crypto.randomUUID().substring(0, 8); + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Starte Gemini-Anfrage für Blueprint-Verarbeitung.` + ); + try { + const fullPrompt = `${prompt} + +Inhalt: ${transcript} + +Bearbeite den obigen Inhalt entsprechend der Anweisung und antworte strukturiert und präzise.`; + log( + 'DEBUG', + `[${functionIdForLog}][LLM-${requestId}] Vollständiger Prompt (Länge: ${fullPrompt.length})` + ); + const startTime = Date.now(); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + text: fullPrompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.7, + maxOutputTokens: 8192, + }, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Gemini-Anfrage abgeschlossen in ${duration}ms` + ); + if (!response.ok) { + throw new Error(`Gemini API error: ${response.status} ${response.statusText}`); + } + const data = await response.json(); + if (!data.candidates || !data.candidates[0] || !data.candidates[0].content) { + throw new
Error('Unerwartete Gemini API Response-Struktur'); + } + const result = data.candidates[0].content.parts[0].text; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Gemini-Antwort erhalten (Länge: ${result.length})` + ); + return result; + } catch (error) { + log('ERROR', `[${functionIdForLog}][LLM-${requestId}] Gemini-Fehler:`, error); + throw error; + } +} +/** + * Fallback zu Azure OpenAI + */ async function processWithAzure(transcript, prompt, functionIdForLog = 'combine') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte Azure OpenAI-Anfrage als Fallback.`); + try { + const fullPrompt = `${prompt} + +Inhalt: ${transcript} + +Bearbeite den obigen Inhalt entsprechend der Anweisung und antworte strukturiert und präzise.`; + const startTime = Date.now(); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'user', + content: fullPrompt, + }, + ], + max_tokens: 2048, + temperature: 0.7, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI-Anfrage abgeschlossen in ${duration}ms` + ); + if (!response.ok) { + throw new Error(`Azure OpenAI API error: ${response.status} ${response.statusText}`); + } + const data = await response.json(); + const result = data.choices[0].message.content; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI-Antwort erhalten (Länge: ${result.length})` + ); + return result; + } catch (error) { + log('ERROR', `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI-Fehler:`, error); + throw error; + } +} +// ─── Hauptfunktion ────────────────────────────────────────────── +serve(async (req) => { + const 
functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Combine Memos Function gestartet`); + try { + // CORS Headers + if (req.method === 'OPTIONS') { + return new Response('ok', { + headers: { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST', + 'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type', + }, + }); + } + if (req.method !== 'POST') { + return new Response( + JSON.stringify({ + error: 'Method not allowed', + }), + { + status: 405, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + // Request Body parsen + const body = await req.json(); + const { memo_ids, blueprint_id, custom_prompt } = body; + log('INFO', `[${functionId}] Request empfangen:`, { + memo_ids_count: memo_ids?.length, + blueprint_id, + has_custom_prompt: !!custom_prompt, + }); + // Validierung + if (!memo_ids || !Array.isArray(memo_ids) || memo_ids.length === 0) { + throw new Error('memo_ids ist erforderlich und muss ein nicht-leeres Array sein'); + } + if (!blueprint_id) { + throw new Error('blueprint_id ist erforderlich'); + } + // Extract user_id from JWT token + const authHeader = req.headers.get('authorization'); + if (!authHeader || !authHeader.startsWith('Bearer ')) { + throw new Error('Authorization header fehlt oder ist ungültig'); + } + const token = authHeader.substring(7); + const decoded = decodeJWT(token); + if (!decoded || !decoded.sub) { + throw new Error('JWT-Token ungültig oder user_id fehlt'); + } + const user_id = decoded.sub; + log('INFO', `[${functionId}] User authentifiziert: ${user_id}`); + // Memos aus der Datenbank laden (mit vollständigen Daten) + log('INFO', `[${functionId}] Lade ${memo_ids.length} Memos aus der Datenbank...`); + const { data: memos, error: memosError } = await memoro_sb + .from('memos') + .select('id, title, source, metadata, created_at') + .in('id', memo_ids) + .eq('user_id', user_id); + if (memosError) { + throw new Error(`Fehler beim Laden 
der Memos: ${memosError.message}`); + } + if (!memos || memos.length === 0) { + throw new Error('Keine Memos gefunden oder keine Berechtigung'); + } + log('INFO', `[${functionId}] ${memos.length} Memos erfolgreich geladen`); + // Spezielle Blueprint-IDs behandeln + let blueprint; + let prompts = []; + if (blueprint_id === 'transcript_only') { + log('INFO', `[${functionId}] Verwende speziellen Blueprint: Nur Transkripte kombinieren`); + blueprint = { + id: 'transcript_only', + name: { + de: 'Transkripte kombinieren', + en: 'Combine Transcripts', + }, + description: { + de: 'Kombiniert nur die Transkripte ohne AI-Verarbeitung', + en: 'Combines only transcripts without AI processing', + }, + }; + prompts = []; // Keine Prompts für reine Transkript-Kombination + } else { + // Blueprint aus der Datenbank laden + log('INFO', `[${functionId}] Lade Blueprint ${blueprint_id}...`); + const { data: blueprintData, error: blueprintError } = await memoro_sb + .from('blueprints') + .select('id, name, description') + .eq('id', blueprint_id) + .eq('is_public', true) + .single(); + if (blueprintError) { + throw new Error(`Fehler beim Laden des Blueprints: ${blueprintError.message}`); + } + if (!blueprintData) { + throw new Error('Blueprint nicht gefunden oder nicht öffentlich'); + } + blueprint = blueprintData; + log( + 'INFO', + `[${functionId}] Blueprint erfolgreich geladen: ${blueprint.name?.de || blueprint.name?.en || 'Unnamed'}` + ); + // Blueprint-Prompts laden + log('INFO', `[${functionId}] Lade Prompts für Blueprint...`); + const { data: promptLinks, error: promptLinksError } = await memoro_sb + .from('prompt_blueprints') + .select('prompt_id') + .eq('blueprint_id', blueprint_id); + if (promptLinksError) { + throw new Error(`Fehler beim Laden der Prompt-Links: ${promptLinksError.message}`); + } + if (!promptLinks || promptLinks.length === 0) { + throw new Error('Keine Prompts für diesen Blueprint gefunden'); + } + const promptIds = promptLinks.map((link) => link.prompt_id); 
+ const { data: promptsData, error: promptsError } = await memoro_sb + .from('prompts') + .select('id, memory_title, prompt_text') + .in('id', promptIds); + if (promptsError) { + throw new Error(`Fehler beim Laden der Prompts: ${promptsError.message}`); + } + prompts = promptsData || []; + } + log('INFO', `[${functionId}] ${prompts?.length || 0} Prompts für Blueprint geladen`); + // Transkripte strukturiert kombinieren und für LLM aufbereiten + const additionalRecordings = []; + let combinedTranscriptForLLM = ''; + let combinedTranscriptForDisplay = ''; + for (let index = 0; index < memos.length; index++) { + const memo = memos[index]; + const title = memo.title || `Memo ${index + 1}`; + const date = new Date(memo.created_at).toLocaleDateString('de-DE'); + // Verwende die neue Extraktions-Funktion für alle Formate + const extractedTranscript = extractTranscriptFromMemo(memo); + // Bestimme den korrekten Audio-Pfad und Type + const audioPath = memo.source?.audio_path; + const hasAudioFile = audioPath && !audioPath.startsWith('combined-memo-'); + // Erstelle additional_recording Eintrag für separates Transkript mit Audio + additionalRecordings.push({ + audio_path: audioPath || `combined-memo-${memo.id}`, + type: hasAudioFile ? 
'audio' : 'combined_memo', + timestamp: memo.created_at, + status: 'completed', + transcript: extractedTranscript.text, + // Bewahre alle strukturierten Daten für separate Anzeige + utterances: extractedTranscript.utterances, + speakers: extractedTranscript.speakers, + speakerMap: extractedTranscript.speakerMap, + languages: memo.source?.languages || ['de-DE'], + primary_language: memo.source?.primary_language || 'de-DE', + // Audio-spezifische Daten + duration: memo.source?.duration || memo.source?.duration_seconds, + // Erweiterte Metadaten für bessere Anzeige + memo_metadata: { + original_memo_id: memo.id, + original_title: title, + original_created_at: memo.created_at, + original_source: memo.source, + combine_index: index, + // Zusätzliche Anzeige-Informationen + display_title: title, + display_date: date, + has_audio: hasAudioFile, + }, + }); + // Text für LLM-Verarbeitung mit Speaker-Context aufbereiten + // Versuche Speaker-Labels aus Metadata zu holen + const speakerLabels = memo.metadata?.speakerLabels || extractedTranscript.speakers; + combinedTranscriptForLLM += formatTranscriptForLLM( + title, + date, + extractedTranscript, + speakerLabels + ); + // Einfacheres Format für die Anzeige (ohne Header) + if (index > 0) { + combinedTranscriptForDisplay += '\n\n'; + } + combinedTranscriptForDisplay += extractedTranscript.text; + } + log( + 'INFO', + `[${functionId}] ${additionalRecordings.length} Additional-Recordings erstellt und ${combinedTranscriptForLLM.length} Zeichen für LLM aufbereitet` + ); + // Neues kombiniertes Memo erstellen (nur als Container für separate Transkripte) + const blueprintName = blueprint.name?.de || blueprint.name?.en || 'Unnamed Blueprint'; + const combinedMemoTitle = `Combined: ${memos.length} memos (${blueprintName})`; + // Erstelle beschreibenden Intro-Text + const originalTitles = memos.map((memo) => memo.title || 'Untitled').join(', '); + const introText = `Dieses Memo kombiniert ${memos.length} ursprüngliche Memos: 
${originalTitles}. Jedes Memo wird als separates Transkript unten angezeigt.`; + const newMemoData = { + user_id: user_id, + title: combinedMemoTitle, + intro: introText, + source: { + type: 'combined', + // KEIN kombiniertes Haupttranskript mehr - nur Container-Metadaten + additional_recordings: additionalRecordings, + languages: ['de-DE'], + primary_language: 'de-DE', + // Kombinierungs-spezifische Metadaten + combine_metadata: { + blueprint_id: blueprint_id, + blueprint_name: blueprintName, + custom_prompt: custom_prompt || null, + combined_at: new Date().toISOString(), + combined_memo_count: memos.length, + original_memo_ids: memo_ids, + original_titles: originalTitles, + }, + }, + created_at: new Date().toISOString(), + updated_at: new Date().toISOString(), + }; + log('INFO', `[${functionId}] Erstelle neues kombiniertes Memo...`); + const { data: newMemo, error: createMemoError } = await memoro_sb + .from('memos') + .insert(newMemoData) + .select() + .single(); + if (createMemoError) { + throw new Error(`Fehler beim Erstellen des neuen Memos: ${createMemoError.message}`); + } + log('INFO', `[${functionId}] Neues Memo erstellt: ${newMemo.id}`); + // Blueprint-Prompts verarbeiten (außer bei transcript_only) + const currentLanguage = 'de'; // Standard Deutsch, könnte später dynamisch sein + let processedCount = 0; + if (blueprint_id === 'transcript_only') { + log('INFO', `[${functionId}] Überspringe AI-Verarbeitung für transcript_only Blueprint`); + } else { + for (const prompt of prompts || []) { + try { + const promptTitle = + prompt.memory_title?.[currentLanguage] || prompt.memory_title?.en || 'Untitled'; + const promptText = prompt.prompt_text?.[currentLanguage] || prompt.prompt_text?.en || ''; + if (!promptText) { + log('WARN', `[${functionId}] Prompt ${prompt.id} hat keinen Text, überspringe`); + continue; + } + // Custom Prompt anhängen, falls vorhanden + const finalPrompt = custom_prompt + ? 
`${promptText}\n\nZusätzliche Anweisung: ${custom_prompt}` + : promptText; + log('INFO', `[${functionId}] Verarbeite Prompt: ${promptTitle}`); + // Mit Gemini verarbeiten, Fallback zu Azure + let aiResponse; + try { + aiResponse = await processWithGemini(combinedTranscriptForLLM, finalPrompt, functionId); + } catch (geminiError) { + log( + 'WARN', + `[${functionId}] Gemini fehlgeschlagen, verwende Azure Fallback:`, + geminiError + ); + aiResponse = await processWithAzure(combinedTranscriptForLLM, finalPrompt, functionId); + } + // Get the highest sort_order for this memo + const { data: maxSortData, error: maxSortError } = await memoro_sb + .from('memories') + .select('sort_order') + .eq('memo_id', newMemo.id) + .order('sort_order', { + ascending: false, + }) + .limit(1) + .single(); + // If error or no data, use random number above 5000, otherwise increment + const nextSortOrder = + maxSortError || !maxSortData?.sort_order + ? Math.floor(Math.random() * 5000) + 5000 // Random between 5000-9999 + : maxSortData.sort_order + 1; + // Memory-Eintrag erstellen + const memoryData = { + memo_id: newMemo.id, + title: promptTitle, + content: aiResponse, + sort_order: nextSortOrder, + created_at: new Date().toISOString(), + updated_at: new Date().toISOString(), + }; + const { error: memoryError } = await memoro_sb.from('memories').insert(memoryData); + if (memoryError) { + log( + 'ERROR', + `[${functionId}] Fehler beim Erstellen der Memory für Prompt ${prompt.id}:`, + memoryError + ); + } else { + processedCount++; + log('INFO', `[${functionId}] Memory erstellt für Prompt: ${promptTitle}`); + } + } catch (promptError) { + log('ERROR', `[${functionId}] Fehler bei Prompt ${prompt.id}:`, promptError); + } + } + } + log( + 'INFO', + `[${functionId}] Blueprint-Verarbeitung abgeschlossen. 
${processedCount}/${prompts?.length || 0} Prompts erfolgreich verarbeitet` + ); + // Headline und Intro für das kombinierte Memo generieren (auch bei transcript_only) + log('INFO', `[${functionId}] Starte Headline-Generierung für kombiniertes Memo...`); + try { + const headlineResponse = await fetch(`${SUPABASE_URL}/functions/v1/headline`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${SERVICE_KEY}`, + }, + body: JSON.stringify({ + memo_id: newMemo.id, + }), + }); + if (headlineResponse.ok) { + const headlineData = await headlineResponse.json(); + log('INFO', `[${functionId}] Headline erfolgreich generiert: ${headlineData.headline}`); + } else { + const errorText = await headlineResponse.text(); + log('WARN', `[${functionId}] Headline-Generierung fehlgeschlagen:`, errorText); + } + } catch (headlineError) { + log('WARN', `[${functionId}] Fehler bei Headline-Generierung:`, headlineError); + } + // Erfolgreiche Antwort + return new Response( + JSON.stringify({ + success: true, + memo_id: newMemo.id, + combined_memos_count: memos.length, + processed_prompts_count: processedCount, + total_prompts_count: prompts?.length || 0, + }), + { + status: 200, + headers: { + 'Content-Type': 'application/json', + 'Access-Control-Allow-Origin': '*', + }, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Fehler in Combine Memos Function:`, error); + return new Response( + JSON.stringify({ + error: error.message || 'Ein unbekannter Fehler ist aufgetreten', + function_id: functionId, + }), + { + status: 500, + headers: { + 'Content-Type': 'application/json', + 'Access-Control-Allow-Origin': '*', + }, + } + ); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/create-memory/constants.ts b/apps/memoro/apps/backend/supabase/functions/create-memory/constants.ts new file mode 100644 index 000000000..097016282 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/create-memory/constants.ts @@ -0,0 +1,75 
@@ +/** + * System-Prompts für die Memory-Erstellung in verschiedenen Sprachen + * + * Die Prompts werden als System-Prompt für die AI-Nachrichten verwendet, + * um konsistente und hilfreiche Antworten zu generieren. + */ /** + * Interface für die Prompt-Konfiguration + */ /** + * System-Prompts für die Memory-Erstellung + * + * Unterstützte Sprachen: + * - de: Deutsch + * - en: Englisch + * - fr: Französisch + * - es: Spanisch + * - it: Italienisch + * - nl: Niederländisch + * - pt: Portugiesisch + * - ru: Russisch + * - ja: Japanisch + * - ko: Koreanisch + * - zh: Chinesisch + * - ar: Arabisch + * - hi: Hindi + * - tr: Türkisch + * - pl: Polnisch + */ export const SYSTEM_PROMPTS = { + system: { + // Deutsch + de: 'Du bist ein hilfreicher Assistent, der Texte analysiert und verarbeitet. Deine Aufgabe ist es, Transkripte von Gesprächen gemäß den gegebenen Anweisungen zu bearbeiten. Antworte präzise, strukturiert und hilfreich. Antworte in plain text.', + // Englisch + en: 'You are a helpful assistant that analyzes and processes texts. Your task is to process transcripts of conversations according to the given instructions. Respond precisely, structured, and helpfully. Respond in plain text.', + // Französisch + fr: 'Vous êtes un assistant utile qui analyse et traite les textes. Votre tâche est de traiter les transcriptions de conversations selon les instructions données. Répondez de manière précise, structurée et utile. Répondez en texte brut.', + // Spanisch + es: 'Eres un asistente útil que analiza y procesa textos. Tu tarea es procesar transcripciones de conversaciones según las instrucciones dadas. Responde de forma precisa, estructurada y útil. Responde en texto plano.', + // Italienisch + it: 'Sei un assistente utile che analizza e elabora testi. Il tuo compito è elaborare trascrizioni di conversazioni secondo le istruzioni date. Rispondi in modo preciso, strutturato e utile.
Rispondi in testo semplice.', + // Niederländisch + nl: 'Je bent een behulpzame assistent die teksten analyseert en verwerkt. Je taak is om transcripties van gesprekken te verwerken volgens de gegeven instructies. Antwoord precies, gestructureerd en behulpzaam. Antwoord in platte tekst.', + // Portugiesisch + pt: 'Você é um assistente útil que analisa e processa textos. Sua tarefa é processar transcrições de conversas de acordo com as instruções dadas. Responda de forma precisa, estruturada e útil. Responda em texto simples.', + // Russisch + ru: 'Вы полезный помощник, который анализирует и обрабатывает тексты. Ваша задача - обрабатывать расшифровки разговоров согласно данным инструкциям. Отвечайте точно, структурированно и полезно. Отвечайте простым текстом.', + // Japanisch + ja: 'あなたはテキストを分析・処理する有用なアシスタントです。あなたの仕事は、与えられた指示に従って会話の転写を処理することです。正確で構造化された有用な回答をしてください。プレーンテキストで回答してください。', + // Koreanisch + ko: '당신은 텍스트를 분석하고 처리하는 유용한 어시스턴트입니다. 당신의 임무는 주어진 지시에 따라 대화의 전사본을 처리하는 것입니다. 정확하고 구조화되며 도움이 되는 방식으로 응답하세요. 일반 텍스트로 응답하세요.', + // Chinesisch (vereinfacht) + zh: '你是一个有用的助手,负责分析和处理文本。你的任务是根据给定的指令处理对话的转录。请准确、结构化、有帮助地回答。请用纯文本回答。', + // Arabisch + ar: 'أنت مساعد مفيد يحلل ويعالج النصوص. مهمتك هي معالجة نسخ المحادثات وفقاً للتعليمات المقدمة. أجب بدقة وبطريقة منظمة ومفيدة. أجب بنص عادي.', + // Hindi + hi: 'आप एक उपयोगी सहायक हैं जो पाठों का विश्लेषण और प्रसंस्करण करते हैं। आपका कार्य दिए गए निर्देशों के अनुसार बातचीत के प्रतिलेख को संसाधित करना है। सटीक, संरचित और सहायक तरीके से उत्तर दें। सादे पाठ में उत्तर दें।', + // Türkisch + tr: 'Metinleri analiz eden ve işleyen yararlı bir asistansınız. Göreviniz, verilen talimatlara göre konuşma transkriptlerini işlemektir. Kesin, yapılandırılmış ve yararlı şekilde yanıt verin. Düz metin olarak yanıt verin.', + // Polnisch + pl: 'Jesteś pomocnym asystentem, który analizuje i przetwarza teksty. Twoim zadaniem jest przetwarzanie transkrypcji rozmów zgodnie z podanymi instrukcjami. Odpowiadaj precyzyjnie, uporządkowanie i pomocnie. 
Odpowiadaj zwykłym tekstem.', + }, +}; +/** + * Hilfsfunktion zum Abrufen des System-Prompts für eine bestimmte Sprache + * @param language Sprache (z.B. 'de', 'en', 'fr') + * @returns System-Prompt für die angegebene Sprache oder Fallback + */ export function getSystemPrompt(language) { + const lang = language.toLowerCase().split('-')[0]; // z.B. 'de-DE' -> 'de' + // Versuche spezifische Sprache, dann Deutsch, dann Englisch, dann erste verfügbare + return ( + SYSTEM_PROMPTS.system[lang] || + SYSTEM_PROMPTS.system['de'] || + SYSTEM_PROMPTS.system['en'] || + Object.values(SYSTEM_PROMPTS.system)[0] || + 'You are a helpful AI assistant.' + ); +} diff --git a/apps/memoro/apps/backend/supabase/functions/create-memory/index.ts b/apps/memoro/apps/backend/supabase/functions/create-memory/index.ts new file mode 100644 index 000000000..68487a214 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/create-memory/index.ts @@ -0,0 +1,445 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. +// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { getSystemPrompt } from './constants.ts'; +import { getTranscriptText } from '../_shared/transcript-utils.ts'; +import { ROOT_SYSTEM_PROMPTS } from '../_shared/system-prompt.ts'; +/** + * Create Memory Edge Function + * + * Diese Funktion erstellt eine neue Memory für ein Memo mit einem spezifischen Prompt. + * Sie lädt das Memo und den Prompt und verwendet Gemini mit Azure-OpenAI-Fallback für die Verarbeitung.
+ * + * @version 1.0.0 + * @date 2025-05-27 + */ // ─── Umgebungsvariablen ────────────────────────────────────────────── +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini Konfiguration +const GEMINI_API_KEY = Deno.env.get('CREATE_MEMORY_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI Konfiguration (Backup) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Error Handler Functions ────────────────────────────────────────────── +/** + * Erstellt eine standardisierte Fehlerantwort für Edge Functions + */ function createErrorResponse(error, status = 500, corsHeaders = {}) { + const errorMessage = error instanceof Error ? 
error.message : String(error); + return new Response( + JSON.stringify({ + error: errorMessage, + timestamp: new Date().toISOString(), + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status, + } + ); +} +// ─── Logging-Funktion ────────────────────────────────────────────── +/** + * Erweiterte Logging-Funktion mit Zeitstempel und Log-Level + */ function log(level, message, data) { + const timestamp = new Date().toISOString(); + const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`; + switch (level.toUpperCase()) { + case 'INFO': + console.log(logMessage); + break; + case 'DEBUG': + console.debug(logMessage); + break; + case 'WARN': + console.warn(logMessage); + break; + case 'ERROR': + console.error(logMessage); + break; + default: + console.log(logMessage); + break; + } + if (data) { + if (level.toUpperCase() === 'ERROR') { + console.error(data); + } else { + console.log(typeof data === 'object' ? JSON.stringify(data, null, 2) : data); + } + } +} +/** + * Sendet Prompt an Gemini Flash und gibt die Antwort zurück + */ async function runPromptWithGemini(prompt, transcript, functionIdForLog = 'global') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][GEMINI-${requestId}] Starte Gemini-Anfrage.`); + try { + let fullPrompt; + if (prompt.includes('{transcript}')) { + fullPrompt = prompt.replace('{transcript}', transcript); + log('DEBUG', `[${functionIdForLog}][GEMINI-${requestId}] Platzhalter im Prompt ersetzt.`); + } else { + fullPrompt = `${prompt}\n\nText: ${transcript}`; + log( + 'DEBUG', + `[${functionIdForLog}][GEMINI-${requestId}] Kein Platzhalter, Transkript am Ende angehängt.` + ); + } + const startTime = Date.now(); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + 
text: fullPrompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.7, + maxOutputTokens: 8192, + }, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][GEMINI-${requestId}] Gemini Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][GEMINI-${requestId}] Gemini API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][GEMINI-${requestId}] Erfolgreiche Gemini-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log('ERROR', `[${functionIdForLog}][GEMINI-${requestId}] Fehler beim Gemini-Request:`, error); + throw error; + } +} +/** + * Sendet Prompt an Azure OpenAI und gibt die Antwort zurück (Fallback) + */ async function runPromptWithAzure( + prompt, + transcript, + language = 'de', + functionIdForLog = 'global' +) { + const systemPrompt = getSystemPrompt(language); + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][AZURE-${requestId}] Starte Azure OpenAI-Anfrage.`); + try { + let fullPrompt; + if (prompt.includes('{transcript}')) { + fullPrompt = prompt.replace('{transcript}', transcript); + log('DEBUG', `[${functionIdForLog}][AZURE-${requestId}] Platzhalter im Prompt ersetzt.`); + } else { + fullPrompt = `${prompt}\n\nText: ${transcript}`; + log( + 'DEBUG', + `[${functionIdForLog}][AZURE-${requestId}] Kein Platzhalter, Transkript am Ende angehängt.` + ); + } + const startTime = Date.now(); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 
'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'system', + content: systemPrompt, + }, + { + role: 'user', + content: fullPrompt, + }, + ], + max_tokens: 8192, + temperature: 0.7, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI response received in ${duration}ms, status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI API error: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API error: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices?.[0]?.message?.content?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI request succeeded (response length: ${content.length}).` + ); + return content; + } catch (error) { + log( + 'ERROR', + `[${functionIdForLog}][AZURE-${requestId}] Azure OpenAI request failed:`, + error + ); + throw error; + } +} +/** + * Main prompt-processing function with fallback logic + */ async function runPromptWithTranscript( + prompt, + transcript, + language = 'de', + functionIdForLog = 'global' +) { + try { + // Try Gemini first + return await runPromptWithGemini(prompt, transcript, functionIdForLog); + } catch (error) { + log('WARN', `[${functionIdForLog}] Gemini failed, falling back to Azure OpenAI`, error); + try { + // Fall back to Azure OpenAI + return await runPromptWithAzure(prompt, transcript, language, functionIdForLog); + } catch (azureError) { + log('ERROR', `[${functionIdForLog}] Both LLM services failed`, azureError); + throw new Error('Both LLM services are unavailable'); + } + } +} +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] 
Create-memory function started`); + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] Handled CORS preflight request`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + try { + const requestData = await req.json(); + const { memo_id, prompt_id } = requestData; + log( + 'INFO', + `[${functionId}] Request received for memo_id: ${memo_id}, prompt_id: ${prompt_id}` + ); + if (!memo_id || !prompt_id) { + log( + 'ERROR', + `[${functionId}] Missing parameters: memo_id=${memo_id}, prompt_id=${prompt_id}` + ); + return createErrorResponse('memo_id and prompt_id are required', 400, corsHeaders); + } + // Load the memo + log('INFO', `[${functionId}] Fetching memo ${memo_id} from the database`); + const { data: memo, error: memoError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (memoError || !memo) { + log('ERROR', `[${functionId}] Memo ${memo_id} not found:`, memoError); + return createErrorResponse('Memo not found', 404, corsHeaders); + } + // Load the prompt + log('INFO', `[${functionId}] Fetching prompt ${prompt_id} from the database`); + const { data: prompt, error: promptError } = await memoro_sb + .from('prompts') + .select('*') + .eq('id', prompt_id) + .single(); + if (promptError || !prompt) { + log('ERROR', `[${functionId}] Prompt ${prompt_id} not found:`, promptError); + return createErrorResponse('Prompt not found', 404, corsHeaders); + } + // Extract the transcript (from utterances or legacy fields) + const transcript = getTranscriptText(memo); + log('INFO', `[${functionId}] Extracted transcript (length: ${transcript.length})`); + if (!transcript) { + log('ERROR', `[${functionId}] No transcript found in memo ${memo_id}`); + return 
createErrorResponse('No transcript found in memo', 400, corsHeaders); + } + // Determine the memo's language + let baseMemoLang = 'de'; // Default: German + const primaryLanguage = memo.source?.primary_language || memo.source?.languages?.[0]; + if (primaryLanguage && typeof primaryLanguage === 'string') { + baseMemoLang = primaryLanguage.split('-')[0].toLowerCase(); + log( + 'DEBUG', + `[${functionId}] Detected base language: ${baseMemoLang} (from ${primaryLanguage})` + ); + } else { + log( + 'DEBUG', + `[${functionId}] No primary language found. Using default: ${baseMemoLang}` + ); + } + const defaultPreferredLang = 'de'; + const defaultFallbackLang = 'en'; + // Extract prompt text and memory title + let promptText = ''; + if (prompt.prompt_text && typeof prompt.prompt_text === 'object') { + promptText = + (baseMemoLang && prompt.prompt_text[baseMemoLang]) || + prompt.prompt_text[defaultPreferredLang] || + prompt.prompt_text[defaultFallbackLang] || + Object.values(prompt.prompt_text)[0] || + ''; + } + // Prepend system prompt if available for the language + const systemPrePrompt = + ROOT_SYSTEM_PROMPTS.PRE_PROMPT[baseMemoLang] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de']; + if (systemPrePrompt && promptText) { + promptText = systemPrePrompt + '\n\n' + promptText; + } + let memoryTitle = ''; + if (prompt.memory_title && typeof prompt.memory_title === 'object') { + memoryTitle = + (baseMemoLang && prompt.memory_title[baseMemoLang]) || + prompt.memory_title[defaultPreferredLang] || + prompt.memory_title[defaultFallbackLang] || + Object.values(prompt.memory_title)[0] || + ''; + } + if (!promptText) { + log('ERROR', `[${functionId}] No prompt text found for prompt ${prompt_id}`); + return createErrorResponse('No prompt text available', 400, corsHeaders); + } + log( + 'INFO', + `[${functionId}] Sending prompt "${memoryTitle || 'Untitled'}" (ID: ${prompt_id}) to LLM with language: ${baseMemoLang}` + ); + const answer = await 
runPromptWithTranscript(promptText, transcript, baseMemoLang, functionId); + if (!answer) { + log('ERROR', `[${functionId}] No answer received from LLM for prompt ${prompt_id}`); + return createErrorResponse('No answer received from LLM', 500, corsHeaders); + } + // Get the highest sort_order for this memo + log('INFO', `[${functionId}] Determining highest sort_order for memo ${memo_id}`); + const { data: maxSortData, error: maxSortError } = await memoro_sb + .from('memories') + .select('sort_order') + .eq('memo_id', memo_id) + .order('sort_order', { + ascending: false, + }) + .limit(1) + .single(); + // If error or no data, use random number above 5000, otherwise increment + const nextSortOrder = + maxSortError || !maxSortData?.sort_order + ? Math.floor(Math.random() * 5000) + 5000 // Random between 5000-9999 + : maxSortData.sort_order + 1; + log('INFO', `[${functionId}] Next sort_order: ${nextSortOrder}`); + log( + 'INFO', + `[${functionId}] Creating new memory for memo ${memo_id} with title "${memoryTitle || 'Memory'}"` + ); + const { data: newMemory, error: newMemoryError } = await memoro_sb + .from('memories') + .insert({ + memo_id: memo_id, + title: memoryTitle || 'Memory', + content: answer, + media: null, + sort_order: nextSortOrder, + metadata: { + type: 'manual_prompt', + prompt_id: prompt_id, + created_by: 'create_memory_function', + }, + }) + .select() + .single(); + if (newMemoryError) { + log( + 'ERROR', + `[${functionId}] Failed to create memory for prompt ${prompt_id}:`, + newMemoryError + ); + return createErrorResponse( + `Failed to create memory: ${newMemoryError.message}`, + 500, + corsHeaders + ); + } + log( + 'INFO', + `[${functionId}] Memory successfully created with ID ${newMemory.id} for prompt ${prompt_id}` + ); + return new Response( + JSON.stringify({ + success: true, + memory_id: newMemory.id, + title: memoryTitle, + content: answer, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', 
+ }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unexpected error during memory creation:`, error); + const errorToLog = error instanceof Error ? error : new Error(String(error)); + return createErrorResponse(`Unexpected error: ${errorToLog.message}`, 500, corsHeaders); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/headline/constants.ts b/apps/memoro/apps/backend/supabase/functions/headline/constants.ts new file mode 100644 index 000000000..1061c351c --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/headline/constants.ts @@ -0,0 +1,219 @@ +/** + * System prompts for headline generation in different languages + * + * The prompts are used to generate headlines and intros for memos. + * Each language has its own prompt containing the language-specific requirements and formatting. + */ /** + * System prompts for headline generation + * + * Supported languages (62): + * - de: German + * - en: English + * - fr: French + * - es: Spanish + * - it: Italian + * - nl: Dutch + * - pt: Portuguese + * - ru: Russian + * - ja: Japanese + * - ko: Korean + * - zh: Chinese + * - ar: Arabic + * - hi: Hindi + * - tr: Turkish + * - pl: Polish + * - da: Danish + * - sv: Swedish + * - nb: Norwegian + * - fi: Finnish + * - cs: Czech + * - hu: Hungarian + * - el: Greek + * - he: Hebrew + * - id: Indonesian + * - th: Thai + * - vi: Vietnamese + * - uk: Ukrainian + * - ro: Romanian + * - bg: Bulgarian + * - ca: Catalan + * - hr: Croatian + * - sk: Slovak + * - et: Estonian + * - lv: Latvian + * - lt: Lithuanian + * - bn: Bengali + * - ms: Malay + * - ta: Tamil + * - te: Telugu + * - ur: Urdu + * - mr: Marathi + * - gu: Gujarati + * - ml: Malayalam + * - kn: Kannada + * - pa: Punjabi + * - af: Afrikaans + * - fa: Persian + * - ka: Georgian + 
* - is: Icelandic + * - sq: Albanian + * - az: Azerbaijani + * - eu: Basque + * - gl: Galician + * - kk: Kazakh + * - mk: Macedonian + * - sr: Serbian + * - sl: Slovenian + * - mt: Maltese + * - hy: Armenian + * - uz: Uzbek + * - ga: Irish + * - cy: Welsh + * - fil: Filipino + */ export const SYSTEM_PROMPTS = { + headline: { + // Deutsch + de: 'Du bist ein Assistent, der Texte analysiert und zusammenfasst. Deine Aufgabe ist es, für den folgenden Text zwei Dinge zu erstellen:\n1. Eine kurze, prägnante Headline (maximal 8 Wörter)\n2. Ein kurzes Intro, das den Inhalt des Textes in 2-3 Sätzen zusammenfasst und neugierig macht\n\nFormatiere deine Antwort genau so:\nHEADLINE: [Deine Headline hier]\nINTRO: [Dein Intro hier]', + // Englisch + en: 'You are an assistant that analyzes and summarizes texts. Your task is to create two things for the following text:\n1. A short, concise headline (maximum 8 words)\n2. A brief intro that summarizes the content of the text in 2-3 sentences and makes the reader curious\n\nFormat your answer exactly like this:\nHEADLINE: [Your headline here]\nINTRO: [Your intro here]', + // Französisch + fr: 'Vous êtes un assistant qui analyse et résume des textes. Votre tâche est de créer deux choses pour le texte suivant :\n1. Un titre court et concis (maximum 8 mots)\n2. Une brève introduction qui résume le contenu du texte en 2-3 phrases et éveille la curiosité du lecteur\n\nFormatez votre réponse exactement comme ceci :\nHEADLINE: [Votre titre ici]\nINTRO: [Votre introduction ici]', + // Spanisch + es: 'Eres un asistente que analiza y resume textos. Tu tarea es crear dos cosas para el siguiente texto:\n1. Un título breve y conciso (máximo 8 palabras)\n2. 
Una breve introducción que resuma el contenido del texto en 2-3 frases y despierte la curiosidad del lector\n\nFormatea tu respuesta exactamente así:\nHEADLINE: [Tu título aquí]\nINTRO: [Tu introducción aquí]', + // Italienisch + it: 'Sei un assistente che analizza e riassume testi. Il tuo compito è creare due cose per il seguente testo:\n1. Un titolo breve e conciso (massimo 8 parole)\n2. Una breve introduzione che riassume il contenuto del testo in 2-3 frasi e suscita la curiosità del lettore\n\nFormatta la tua risposta esattamente così:\nHEADLINE: [Il tuo titolo qui]\nINTRO: [La tua introduzione qui]', + // Niederländisch + nl: 'Je bent een assistent die teksten analyseert en samenvat. Je taak is om twee dingen te maken voor de volgende tekst:\n1. Een korte, bondige kop (maximaal 8 woorden)\n2. Een korte intro die de inhoud van de tekst in 2-3 zinnen samenvat en de lezer nieuwsgierig maakt\n\nFormatteer je antwoord precies zo:\nHEADLINE: [Jouw kop hier]\nINTRO: [Jouw intro hier]', + // Portugiesisch + pt: 'Você é um assistente que analisa e resume textos. Sua tarefa é criar duas coisas para o seguinte texto:\n1. Uma manchete breve e concisa (máximo 8 palavras)\n2. Uma breve introdução que resume o conteúdo do texto em 2-3 frases e desperta a curiosidade do leitor\n\nFormate sua resposta exatamente assim:\nHEADLINE: [Sua manchete aqui]\nINTRO: [Sua introdução aqui]', + // Russisch + ru: 'Вы помощник, который анализирует и резюмирует тексты. Ваша задача - создать две вещи для следующего текста:\n1. Короткий, лаконичный заголовок (максимум 8 слов)\n2. Краткое введение, которое резюмирует содержание текста в 2-3 предложениях и вызывает любопытство у читателя\n\nФорматируйте ваш ответ точно так:\nHEADLINE: [Ваш заголовок здесь]\nINTRO: [Ваше введение здесь]', + // Japanisch + ja: 'あなたはテキストを分析し要約するアシスタントです。次のテキストに対して2つのことを作成するのがあなたの仕事です:\n1. 短く簡潔な見出し(最大8語)\n2. 
テキストの内容を2-3文で要約し、読者の興味を引く短い導入文\n\n次のように正確にフォーマットしてください:\nHEADLINE: [ここにあなたの見出し]\nINTRO: [ここにあなたの導入文]', + // Koreanisch + ko: '당신은 텍스트를 분석하고 요약하는 어시스턴트입니다. 다음 텍스트에 대해 두 가지를 만드는 것이 당신의 임무입니다:\n1. 짧고 간결한 헤드라인 (최대 8단어)\n2. 텍스트의 내용을 2-3문장으로 요약하고 독자의 호기심을 자극하는 짧은 소개\n\n다음과 같이 정확히 형식을 맞춰주세요:\nHEADLINE: [여기에 당신의 헤드라인]\nINTRO: [여기에 당신의 소개]', + // Chinesisch (vereinfacht) + zh: '你是一个分析和总结文本的助手。你的任务是为以下文本创建两样东西:\n1. 一个简短、简洁的标题(最多8个词)\n2. 一个简短的介绍,用2-3句话总结文本内容并激发读者的好奇心\n\n请严格按照以下格式回答:\nHEADLINE: [你的标题]\nINTRO: [你的介绍]', + // Arabisch + ar: 'أنت مساعد يحلل ويلخص النصوص. مهمتك هي إنشاء شيئين للنص التالي:\n1. عنوان قصير ومقتضب (8 كلمات كحد أقصى)\n2. مقدمة مختصرة تلخص محتوى النص في 2-3 جمل وتثير فضول القارئ\n\nقم بتنسيق إجابتك بالضبط هكذا:\nHEADLINE: [عنوانك هنا]\nINTRO: [مقدمتك هنا]', + // Hindi + hi: 'आप एक सहायक हैं जो ग्रंथों का विश्लेषण और सारांश करते हैं। निम्नलिखित पाठ के लिए दो चीजें बनाना आपका कार्य है:\n1. एक संक्षिप्त, सटीक शीर्षक (अधिकतम 8 शब्द)\n2. एक संक्षिप्त परिचय जो पाठ की सामग्री को 2-3 वाक्यों में सारांशित करता है और पाठक में जिज्ञासा जगाता है\n\nअपना उत्तर बिल्कुल इस तरह से प्रारूपित करें:\nHEADLINE: [यहाँ आपका शीर्षक]\nINTRO: [यहाँ आपका परिचय]', + // Türkisch + tr: 'Metinleri analiz eden ve özetleyen bir asistansınız. Aşağıdaki metin için iki şey oluşturmak sizin göreviniz:\n1. Kısa, özlü bir başlık (maksimum 8 kelime)\n2. Metnin içeriğini 2-3 cümlede özetleyen ve okuyucuyu meraklandıran kısa bir giriş\n\nCevabınızı tam olarak şu şekilde biçimlendirin:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]', + // Polnisch + pl: 'Jesteś asystentem, który analizuje i streszcza teksty. Twoim zadaniem jest stworzenie dwóch rzeczy dla następującego tekstu:\n1. Krótki, zwięzły nagłówek (maksymalnie 8 słów)\n2. 
Krótkie wprowadzenie, które streszcza treść tekstu w 2-3 zdaniach i wzbudza ciekawość czytelnika\n\nSformatuj swoją odpowiedź dokładnie tak:\nHEADLINE: [Twój nagłówek tutaj]\nINTRO: [Twoje wprowadzenie tutaj]', + // Dänisch + da: 'Du er en assistent, der analyserer og sammenfatter tekster. Din opgave er at skabe to ting for følgende tekst:\n1. En kort, præcis overskrift (maksimalt 8 ord)\n2. En kort intro, der sammenfatter tekstens indhold i 2-3 sætninger og gør læseren nysgerrig\n\nFormatter dit svar præcis sådan:\nHEADLINE: [Din overskrift her]\nINTRO: [Dit intro her]', + // Schwedisch + sv: 'Du är en assistent som analyserar och sammanfattar texter. Din uppgift är att skapa två saker för följande text:\n1. En kort, koncis rubrik (maximalt 8 ord)\n2. En kort intro som sammanfattar textens innehåll i 2-3 meningar och gör läsaren nyfiken\n\nFormatera ditt svar exakt så här:\nHEADLINE: [Din rubrik här]\nINTRO: [Ditt intro här]', + // Norwegisch + nb: 'Du er en assistent som analyserer og oppsummerer tekster. Oppgaven din er å lage to ting for følgende tekst:\n1. En kort, presis overskrift (maksimalt 8 ord)\n2. En kort intro som oppsummerer tekstens innhold i 2-3 setninger og gjør leseren nysgjerrig\n\nFormater svaret ditt nøyaktig slik:\nHEADLINE: [Din overskrift her]\nINTRO: [Ditt intro her]', + // Finnisch + fi: 'Olet avustaja, joka analysoi ja tiivistää tekstejä. Tehtäväsi on luoda kaksi asiaa seuraavalle tekstille:\n1. Lyhyt, ytimekäs otsikko (enintään 8 sanaa)\n2. Lyhyt johdanto, joka tiivistää tekstin sisällön 2-3 lauseessa ja herättää lukijan uteliaisuuden\n\nMuotoile vastauksesi täsmälleen näin:\nHEADLINE: [Otsikkosi tähän]\nINTRO: [Johdantosi tähän]', + // Tschechisch + cs: 'Jste asistent, který analyzuje a shrnuje texty. Vaším úkolem je vytvořit dvě věci pro následující text:\n1. Krátký, stručný nadpis (maximálně 8 slov)\n2. 
Krátký úvod, který shrne obsah textu ve 2-3 větách a vzbudí zvědavost čtenáře\n\nNaformátujte svou odpověď přesně takto:\nHEADLINE: [Váš nadpis zde]\nINTRO: [Váš úvod zde]', + // Ungarisch + hu: 'Ön egy asszisztens, aki szövegeket elemez és összefoglal. Az Ön feladata, hogy két dolgot hozzon létre a következő szöveghez:\n1. Egy rövid, tömör címsor (maximum 8 szó)\n2. Egy rövid bevezető, amely 2-3 mondatban összefoglalja a szöveg tartalmát és felkelti az olvasó kíváncsiságát\n\nFormázza válaszát pontosan így:\nHEADLINE: [Az Ön címsora itt]\nINTRO: [Az Ön bevezetője itt]', + // Griechisch + el: 'Είστε ένας βοηθός που αναλύει και συνοψίζει κείμενα. Το καθήκον σας είναι να δημιουργήσετε δύο πράγματα για το ακόλουθο κείμενο:\n1. Έναν σύντομο, περιεκτικό τίτλο (μέγιστο 8 λέξεις)\n2. Μια σύντομη εισαγωγή που συνοψίζει το περιεχόμενο του κειμένου σε 2-3 προτάσεις και προκαλεί την περιέργεια του αναγνώστη\n\nΜορφοποιήστε την απάντησή σας ακριβώς έτσι:\nHEADLINE: [Ο τίτλος σας εδώ]\nINTRO: [Η εισαγωγή σας εδώ]', + // Hebräisch + he: 'אתה עוזר שמנתח ומסכם טקסטים. המשימה שלך היא ליצור שני דברים לטקסט הבא:\n1. כותרת קצרה ותמציתית (מקסימום 8 מילים)\n2. הקדמה קצרה שמסכמת את תוכן הטקסט ב-2-3 משפטים ומעוררת סקרנות אצל הקורא\n\nעצב את התשובה שלך בדיוק כך:\nHEADLINE: [הכותרת שלך כאן]\nINTRO: [ההקדמה שלך כאן]', + // Indonesisch + id: 'Anda adalah asisten yang menganalisis dan merangkum teks. Tugas Anda adalah membuat dua hal untuk teks berikut:\n1. Judul yang pendek dan ringkas (maksimal 8 kata)\n2. Intro singkat yang merangkum isi teks dalam 2-3 kalimat dan membuat pembaca penasaran\n\nFormat jawaban Anda persis seperti ini:\nHEADLINE: [Judul Anda di sini]\nINTRO: [Intro Anda di sini]', + // Thai + th: 'คุณเป็นผู้ช่วยที่วิเคราะห์และสรุปข้อความ งานของคุณคือการสร้างสองสิ่งสำหรับข้อความต่อไปนี้:\n1. หัวข้อที่สั้นและกระชับ (ไม่เกิน 8 คำ)\n2. 
บทนำสั้นๆ ที่สรุปเนื้อหาของข้อความใน 2-3 ประโยคและทำให้ผู้อ่านอยากรู้\n\nจัดรูปแบบคำตอบของคุณตามนี้เป๊ะๆ:\nHEADLINE: [หัวข้อของคุณที่นี่]\nINTRO: [บทนำของคุณที่นี่]', + // Vietnamesisch + vi: 'Bạn là một trợ lý phân tích và tóm tắt văn bản. Nhiệm vụ của bạn là tạo hai thứ cho văn bản sau:\n1. Một tiêu đề ngắn gọn và súc tích (tối đa 8 từ)\n2. Một phần giới thiệu ngắn tóm tắt nội dung văn bản trong 2-3 câu và khơi gợi sự tò mò của người đọc\n\nĐịnh dạng câu trả lời của bạn chính xác như thế này:\nHEADLINE: [Tiêu đề của bạn ở đây]\nINTRO: [Phần giới thiệu của bạn ở đây]', + // Ukrainisch + uk: 'Ви помічник, який аналізує та резюмує тексти. Ваше завдання - створити дві речі для наступного тексту:\n1. Короткий, лаконічний заголовок (максимум 8 слів)\n2. Короткий вступ, який резюмує зміст тексту у 2-3 реченнях та викликає цікавість у читача\n\nФорматуйте вашу відповідь точно так:\nHEADLINE: [Ваш заголовок тут]\nINTRO: [Ваш вступ тут]', + // Rumänisch + ro: 'Sunteți un asistent care analizează și rezumă texte. Sarcina dvs. este să creați două lucruri pentru următorul text:\n1. Un titlu scurt și concis (maximum 8 cuvinte)\n2. O scurtă introducere care rezumă conținutul textului în 2-3 propoziții și trezește curiozitatea cititorului\n\nFormatați răspunsul dvs. exact astfel:\nHEADLINE: [Titlul dvs. aici]\nINTRO: [Introducerea dvs. aici]', + // Bulgarisch + bg: 'Вие сте асистент, който анализира и резюмира текстове. Вашата задача е да създадете две неща за следния текст:\n1. Кратко, сбито заглавие (максимум 8 думи)\n2. Кратко въведение, което резюмира съдържанието на текста в 2-3 изречения и предизвиква любопитството на читателя\n\nФорматирайте отговора си точно така:\nHEADLINE: [Вашето заглавие тук]\nINTRO: [Вашето въведение тук]', + // Katalanisch + ca: 'Ets un assistent que analitza i resumeix textos. La teva tasca és crear dues coses per al següent text:\n1. Un títol breu i concís (màxim 8 paraules)\n2. 
Una breu introducció que resumeixi el contingut del text en 2-3 frases i desperti la curiositat del lector\n\nFormata la teva resposta exactament així:\nHEADLINE: [El teu títol aquí]\nINTRO: [La teva introducció aquí]', + // Kroatisch + hr: 'Vi ste asistent koji analizira i sažima tekstove. Vaš zadatak je stvoriti dvije stvari za sljedeći tekst:\n1. Kratak, sažet naslov (maksimalno 8 riječi)\n2. Kratak uvod koji sažima sadržaj teksta u 2-3 rečenice i pobuđuje znatiželju čitatelja\n\nFormatirajte svoj odgovor točno ovako:\nHEADLINE: [Vaš naslov ovdje]\nINTRO: [Vaš uvod ovdje]', + // Slowakisch + sk: 'Ste asistent, ktorý analyzuje a sumarizuje texty. Vašou úlohou je vytvoriť dve veci pre nasledujúci text:\n1. Krátky, stručný nadpis (maximálne 8 slov)\n2. Krátky úvod, ktorý sumarizuje obsah textu v 2-3 vetách a vzbudí zvedavosť čitateľa\n\nNaformátujte svoju odpoveď presne takto:\nHEADLINE: [Váš nadpis tu]\nINTRO: [Váš úvod tu]', + // Estnisch + et: 'Olete assistent, kes analüüsib ja kokkuvõtab tekste. Teie ülesanne on luua kaks asja järgmise teksti jaoks:\n1. Lühike, kokkuvõtlik pealkiri (maksimaalselt 8 sõna)\n2. Lühike sissejuhatus, mis võtab teksti sisu kokku 2-3 lauses ja äratab lugeja uudishimu\n\nVormistage oma vastus täpselt nii:\nHEADLINE: [Teie pealkiri siin]\nINTRO: [Teie sissejuhatus siin]', + // Lettisch + lv: 'Jūs esat asistents, kas analizē un apkopo tekstus. Jūsu uzdevums ir izveidot divas lietas šādam tekstam:\n1. Īsu, kodolīgu virsrakstu (maksimums 8 vārdi)\n2. Īsu ievadu, kas apkopo teksta saturu 2-3 teikumos un modina lasītāja ziņkāri\n\nFormatējiet savu atbildi tieši tā:\nHEADLINE: [Jūsu virsraksts šeit]\nINTRO: [Jūsu ievads šeit]', + // Litauisch + lt: 'Esate asistentas, kuris analizuoja ir apibendrina tekstus. Jūsų užduotis - sukurti du dalykus šiam tekstui:\n1. Trumpą, glaustą antraštę (ne daugiau kaip 8 žodžiai)\n2. 
Trumpą įvadą, kuris apibendrinta teksto turinį 2-3 sakiniais ir žadina skaitytojo smalsumą\n\nSuformatuokite savo atsakymą tiksliai taip:\nHEADLINE: [Jūsų antraštė čia]\nINTRO: [Jūsų įvadas čia]', + // Bengalisch + bn: 'আপনি একজন সহায়ক যিনি পাঠ্য বিশ্লেষণ এবং সারসংক্ষেপ করেন। নিম্নলিখিত পাঠ্যের জন্য দুটি জিনিস তৈরি করা আপনার কাজ:\n1. একটি সংক্ষিপ্ত, সারগর্ভ শিরোনাম (সর্বোচ্চ ৮টি শব্দ)\n2. একটি সংক্ষিপ্ত ভূমিকা যা ২-৩টি বাক্যে পাঠ্যের বিষয়বস্তু সারসংক্ষেপ করে এবং পাঠকের কৌতূহল জাগায়\n\nআপনার উত্তর ঠিক এভাবে ফরম্যাট করুন:\nHEADLINE: [এখানে আপনার শিরোনাম]\nINTRO: [এখানে আপনার ভূমিকা]', + // Malaiisch + ms: 'Anda adalah pembantu yang menganalisis dan meringkaskan teks. Tugas anda adalah untuk mencipta dua perkara untuk teks berikut:\n1. Tajuk utama yang pendek dan padat (maksimum 8 perkataan)\n2. Pengenalan ringkas yang meringkaskan kandungan teks dalam 2-3 ayat dan menimbulkan rasa ingin tahu pembaca\n\nFormatkan jawapan anda tepat seperti ini:\nHEADLINE: [Tajuk utama anda di sini]\nINTRO: [Pengenalan anda di sini]', + // Tamil + ta: 'நீங்கள் உரைகளை பகுப்பாய்வு செய்து சுருக்கும் உதவியாளர். பின்வரும் உரைக்கு இரண்டு விஷயங்களை உருவாக்குவது உங்கள் பணி:\n1. ஒரு குறுகிய, சுருக்கமான தலைப்பு (அதிகபட்சம் 8 வார்த்தைகள்)\n2. உரையின் உள்ளடக்கத்தை 2-3 வாக்கியங்களில் சுருக்கி வாசகரின் ஆர்வத்தை தூண்டும் குறுகிய அறிமுகம்\n\nஉங்கள் பதிலை சரியாக இப்படி வடிவமைக்கவும்:\nHEADLINE: [இங்கே உங்கள் தலைப்பு]\nINTRO: [இங்கே உங்கள் அறிமுகம்]', + // Telugu + te: 'మీరు టెక్స్ట్‌లను విశ్లేషించి సంక్షిప్తీకరించే సహాయకుడు. కింది టెక్స్ట్ కోసం రెండు విషయాలు సృష్టించడం మీ పని:\n1. ఒక చిన్న, సంక్షిప్త శీర్షిక (గరిష్టంగా 8 పదాలు)\n2. టెక్స్ట్ యొక్క కంటెంట్‌ను 2-3 వాక్యాలలో సంక్షిప్తీకరించి పాఠకుడిలో ఆసక్తిని రేకెత్తించే చిన్న పరిచయం\n\nమీ సమాధానాన్ని సరిగ్గా ఇలా ఫార్మాట్ చేయండి:\nHEADLINE: [ఇక్కడ మీ శీర్షిక]\nINTRO: [ఇక్కడ మీ పరిచయం]', + // Urdu + ur: 'آپ ایک معاون ہیں جو متن کا تجزیہ اور خلاصہ کرتے ہیں۔ مندرجہ ذیل متن کے لیے دو چیزیں بنانا آپ کا کام ہے:\n1. 
ایک مختصر، جامع سرخی (زیادہ سے زیادہ 8 الفاظ)\n2. ایک مختصر تعارف جو متن کے مواد کو 2-3 جملوں میں خلاصہ کرے اور قاری میں تجسس پیدا کرے\n\nاپنے جواب کو بالکل اس طرح فارمیٹ کریں:\nHEADLINE: [یہاں آپ کی سرخی]\nINTRO: [یہاں آپ کا تعارف]', + // Marathi + mr: 'तुम्ही मजकूरांचे विश्लेषण आणि सारांश करणारे सहाय्यक आहात. पुढील मजकुरासाठी दोन गोष्टी तयार करणे हे तुमचे काम आहे:\n1. एक लहान, संक्षिप्त मथळा (जास्तीत जास्त 8 शब्द)\n2. एक छोटी प्रस्तावना जी मजकुराची सामग्री 2-3 वाक्यांमध्ये सारांशित करते आणि वाचकामध्ये कुतूहल निर्माण करते\n\nतुमचे उत्तर अगदी अशा प्रकारे स्वरूपित करा:\nHEADLINE: [इथे तुमचा मथळा]\nINTRO: [इथे तुमची प्रस्तावना]', + // Gujarati + gu: 'તમે એક સહાયક છો જે ટેક્સ્ટનું વિશ્લેષણ અને સારાંશ કરે છે. નીચેના ટેક્સ્ટ માટે બે વસ્તુઓ બનાવવી એ તમારું કામ છે:\n1. એક ટૂંકું, સંક્ષિપ્ત હેડલાઇન (મહત્તમ 8 શબ્દો)\n2. એક ટૂંકો પરિચય જે ટેક્સ્ટની સામગ્રીને 2-3 વાક્યોમાં સારાંશ આપે અને વાચકમાં જિજ્ઞાસા જગાડે\n\nતમારા જવાબને બરાબર આ રીતે ફોર્મેટ કરો:\nHEADLINE: [અહીં તમારું હેડલાઇન]\nINTRO: [અહીં તમારો પરિચય]', + // Malayalam + ml: 'നിങ്ങൾ വാചകങ്ങൾ വിശകലനം ചെയ്യുകയും സംഗ്രഹിക്കുകയും ചെയ്യുന്ന ഒരു സഹായകനാണ്. ഇനിപ്പറയുന്ന വാചകത്തിനായി രണ്ട് കാര്യങ്ങൾ സൃഷ്ടിക്കുക എന്നതാണ് നിങ്ങളുടെ ജോലി:\n1. ഒരു ചെറിയ, സംക്ഷിപ്ത തലക്കെട്ട് (പരമാവധി 8 വാക്കുകൾ)\n2. വാചകത്തിന്റെ ഉള്ളടക്കം 2-3 വാക്യങ്ങളിൽ സംഗ്രഹിക്കുകയും വായനക്കാരനിൽ ജിജ്ഞാസ ഉണർത്തുകയും ചെയ്യുന്ന ഒരു ചെറിയ ആമുഖം\n\nനിങ്ങളുടെ ഉത്തരം കൃത്യമായി ഇപ്രകാരം ഫോർമാറ്റ് ചെയ്യുക:\nHEADLINE: [ഇവിടെ നിങ്ങളുടെ തലക്കെട്ട്]\nINTRO: [ഇവിടെ നിങ്ങളുടെ ആമുഖം]', + // Kannada + kn: 'ನೀವು ಪಠ್ಯಗಳನ್ನು ವಿಶ್ಲೇಷಿಸುವ ಮತ್ತು ಸಾರಾಂಶಗೊಳಿಸುವ ಸಹಾಯಕರಾಗಿದ್ದೀರಿ. ಕೆಳಗಿನ ಪಠ್ಯಕ್ಕಾಗಿ ಎರಡು ವಿಷಯಗಳನ್ನು ರಚಿಸುವುದು ನಿಮ್ಮ ಕೆಲಸ:\n1. ಒಂದು ಸಣ್ಣ, ಸಂಕ್ಷಿಪ್ತ ಶೀರ್ಷಿಕೆ (ಗರಿಷ್ಠ 8 ಪದಗಳು)\n2. 
ಪಠ್ಯದ ವಿಷಯವನ್ನು 2-3 ವಾಕ್ಯಗಳಲ್ಲಿ ಸಾರಾಂಶಗೊಳಿಸುವ ಮತ್ತು ಓದುಗರಲ್ಲಿ ಕುತೂಹಲವನ್ನು ಹುಟ್ಟಿಸುವ ಒಂದು ಸಣ್ಣ ಪರಿಚಯ\n\nನಿಮ್ಮ ಉತ್ತರವನ್ನು ನಿಖರವಾಗಿ ಈ ರೀತಿ ಫಾರ್ಮ್ಯಾಟ್ ಮಾಡಿ:\nHEADLINE: [ಇಲ್ಲಿ ನಿಮ್ಮ ಶೀರ್ಷಿಕೆ]\nINTRO: [ಇಲ್ಲಿ ನಿಮ್ಮ ಪರಿಚಯ]', + // Punjabi + pa: 'ਤੁਸੀਂ ਇੱਕ ਸਹਾਇਕ ਹੋ ਜੋ ਟੈਕਸਟਾਂ ਦਾ ਵਿਸ਼ਲੇਸ਼ਣ ਅਤੇ ਸੰਖੇਪ ਕਰਦੇ ਹੋ। ਹੇਠਲੇ ਟੈਕਸਟ ਲਈ ਦੋ ਚੀਜ਼ਾਂ ਬਣਾਉਣਾ ਤੁਹਾਡਾ ਕੰਮ ਹੈ:\n1. ਇੱਕ ਛੋਟੀ, ਸੰਖੇਪ ਸਿਰਲੇਖ (ਵੱਧ ਤੋਂ ਵੱਧ 8 ਸ਼ਬਦ)\n2. ਇੱਕ ਛੋਟੀ ਜਾਣ-ਪਛਾਣ ਜੋ ਟੈਕਸਟ ਦੀ ਸਮੱਗਰੀ ਨੂੰ 2-3 ਵਾਕਾਂ ਵਿੱਚ ਸੰਖੇਪ ਕਰੇ ਅਤੇ ਪਾਠਕ ਵਿੱਚ ਉਤਸੁਕਤਾ ਪੈਦਾ ਕਰੇ\n\nਆਪਣੇ ਜਵਾਬ ਨੂੰ ਬਿਲਕੁਲ ਇਸ ਤਰ੍ਹਾਂ ਫਾਰਮੈਟ ਕਰੋ:\nHEADLINE: [ਇੱਥੇ ਤੁਹਾਡੀ ਸਿਰਲੇਖ]\nINTRO: [ਇੱਥੇ ਤੁਹਾਡੀ ਜਾਣ-ਪਛਾਣ]', + // Afrikaans + af: "Jy is 'n assistent wat tekste ontleed en opsom. Jou taak is om twee dinge vir die volgende teks te skep:\n1. 'n Kort, bondige opskrif (maksimum 8 woorde)\n2. 'n Kort inleiding wat die inhoud van die teks in 2-3 sinne opsom en die leser nuuskierig maak\n\nFormateer jou antwoord presies so:\nHEADLINE: [Jou opskrif hier]\nINTRO: [Jou inleiding hier]", + // Persisch/Farsi + fa: 'شما دستیاری هستید که متون را تجزیه و تحلیل و خلاصه می‌کند. وظیفه شما ایجاد دو چیز برای متن زیر است:\n1. یک عنوان کوتاه و مختصر (حداکثر 8 کلمه)\n2. یک مقدمه کوتاه که محتوای متن را در 2-3 جمله خلاصه کند و کنجکاوی خواننده را برانگیزد\n\nپاسخ خود را دقیقاً به این شکل قالب‌بندی کنید:\nHEADLINE: [عنوان شما اینجا]\nINTRO: [مقدمه شما اینجا]', + // Georgisch + ka: 'თქვენ ხართ ასისტენტი, რომელიც აანალიზებს და აჯამებს ტექსტებს. თქვენი ამოცანაა შემდეგი ტექსტისთვის ორი რამ შექმნათ:\n1. მოკლე, ლაკონური სათაური (მაქსიმუმ 8 სიტყვა)\n2. მოკლე შესავალი, რომელიც აჯამებს ტექსტის შინაარსს 2-3 წინადადებაში და აღძრავს მკითხველის ცნობისმოყვარეობას\n\nგააფორმეთ თქვენი პასუხი ზუსტად ასე:\nHEADLINE: [თქვენი სათაური აქ]\nINTRO: [თქვენი შესავალი აქ]', + // Isländisch + is: 'Þú ert aðstoðarmaður sem greinir og dregur saman texta. Verkefni þitt er að búa til tvö hluti fyrir eftirfarandi texta:\n1. Stuttan, hnitmiðaðan fyrirsögn (að hámarki 8 orð)\n2. 
Stutta inngang sem dregur saman efni textans í 2-3 setningum og vekur forvitni lesandans\n\nSníðdu svarið þitt nákvæmlega svona:\nHEADLINE: [Fyrirsögnin þín hér]\nINTRO: [Inngangurinn þinn hér]', + // Albanisch + sq: 'Ju jeni një asistent që analizon dhe përmbledh tekste. Detyra juaj është të krijoni dy gjëra për tekstin e mëposhtëm:\n1. Një titull të shkurtër dhe të përqendruar (maksimumi 8 fjalë)\n2. Një hyrje të shkurtër që përmbledh përmbajtjen e tekstit në 2-3 fjali dhe ngjall kuriozitenin e lexuesit\n\nFormatoni përgjigjen tuaj saktësisht kështu:\nHEADLINE: [Titulli juaj këtu]\nINTRO: [Hyrja juaj këtu]', + // Aserbaidschanisch + az: 'Siz mətnləri təhlil edən və xülasə çıxaran köməkçisiniz. Sizin vəzifəniz aşağıdakı mətn üçün iki şey yaratmaqdır:\n1. Qısa, dəqiq başlıq (maksimum 8 söz)\n2. Mətnin məzmununu 2-3 cümlədə xülasə edən və oxucunun marağını oyadan qısa giriş\n\nCavabınızı dəqiq belə formatlaşdırın:\nHEADLINE: [Başlığınız burada]\nINTRO: [Girişiniz burada]', + // Baskisch + eu: 'Testuak aztertzen eta laburbildu egiten dituen laguntzaile bat zara. Zure zeregina honako testuarentzat bi gauza sortzea da:\n1. Izenburua labur eta zehatza (gehienez 8 hitz)\n2. Testuaren edukia 2-3 esalditan laburbiltzen duen eta irakurlearen jakin-mina piztuko duen sarrera laburra\n\nErantzuna zehatz-mehatz honela formateatu:\nHEADLINE: [Zure izenburua hemen]\nINTRO: [Zure sarrera hemen]', + // Galizisch + gl: 'Es un asistente que analiza e resume textos. A túa tarefa é crear dúas cousas para o seguinte texto:\n1. Un título breve e conciso (máximo 8 palabras)\n2. Unha breve introdución que resuma o contido do texto en 2-3 frases e esperte a curiosidade do lector\n\nFormatea a túa resposta exactamente así:\nHEADLINE: [O teu título aquí]\nINTRO: [A túa introdución aquí]', + // Kasachisch + kk: 'Сіз мәтіндерді талдайтын және қорытындылайтын көмекшісіз. Сіздің міндетіңіз келесі мәтін үшін екі нәрсе жасау:\n1. Қысқа, нақты тақырып (ең көбі 8 сөз)\n2. 
Мәтін мазмұнын 2-3 сөйлемде қорытындылайтын және оқырманның қызығушылығын туғызатын қысқа кіріспе\n\nЖауабыңызды дәл осылай пішімдеңіз:\nHEADLINE: [Мұнда сіздің тақырыбыңыз]\nINTRO: [Мұнда сіздің кіріспеңіз]', + // Mazedonisch + mk: 'Вие сте асистент кој анализира и резимира текстови. Вашата задача е да создадете две работи за следниот текст:\n1. Краток, јасен наслов (максимум 8 зборови)\n2. Краток вовед кој го резимира содржината на текстот во 2-3 реченици и ја буди љубопитноста на читателот\n\nФорматирајте го вашиот одговор точно вака:\nHEADLINE: [Вашиот наслов тука]\nINTRO: [Вашиот вовед тука]', + // Serbisch + sr: 'Ви сте асистент који анализира и резимира текстове. Ваш задатак је да направите две ствари за следећи текст:\n1. Кратак, јасан наслов (максимум 8 речи)\n2. Кратак увод који резимира садржај текста у 2-3 реченице и буди радозналост читаоца\n\nФорматирајте ваш одговор тачно овако:\nHEADLINE: [Ваш наслов овде]\nINTRO: [Ваш увод овде]', + // Slowenisch + sl: 'Ste pomočnik, ki analizira in povzema besedila. Vaša naloga je ustvariti dve stvari za naslednje besedilo:\n1. Kratek, jedrnat naslov (največ 8 besed)\n2. Kratek uvod, ki povzema vsebino besedila v 2-3 stavkih in prebudi radovednost bralca\n\nOblikujte svoj odgovor natanko tako:\nHEADLINE: [Vaš naslov tukaj]\nINTRO: [Vaš uvod tukaj]', + // Maltesisch + mt: "Inti assistent li janalizza u jissommarja testi. Il-kompitu tiegħek huwa li toħloq żewġ affarijiet għat-test li ġej:\n1. Intestatura qasira u konċiza (massimu 8 kliem)\n2. Introduzzjoni qasira li tissommarja l-kontenut tat-test f'2-3 sentenzi u tqajjem il-kurżità tal-qarrej\n\nFormatja t-tweġiba tiegħek eżattament hekk:\nHEADLINE: [L-intestatura tiegħek hawn]\nINTRO: [L-introduzzjoni tiegħek hawn]", + // Armenisch + hy: 'Դուք օգնական եք, որը վերլուծում և ամփոփում է տեքստեր: Ձեր խնդիրն է ստեղծել երկու բան հետևյալ տեքստի համար:\n1. Կարճ, հակիրճ վերնագիր (առավելագույնը 8 բառ)\n2. 
Կարճ ներածություն, որը ամփոփում է տեքստի բովանդակությունը 2-3 նախադասությամբ և արթնացնում ընթերցողի հետաքրքրությունը\n\nՁևակերպեք ձեր պատասխանը հենց այսպես:\nHEADLINE: [Ձեր վերնագիրը այստեղ]\nINTRO: [Ձեր ներածությունը այստեղ]', + // Usbekisch + uz: "Siz matnlarni tahlil qiluvchi va xulosa chiqaruvchi yordamchisiz. Sizning vazifangiz quyidagi matn uchun ikki narsa yaratishdir:\n1. Qisqa, aniq sarlavha (maksimal 8 so'z)\n2. Matn mazmunini 2-3 jumlada xulosa qiladigan va o'quvchining qiziqishini uyg'otadigan qisqa kirish\n\nJavobingizni aynan shunday formatlang:\nHEADLINE: [Bu yerda sizning sarlavhangiz]\nINTRO: [Bu yerda sizning kirishingiz]", + // Irisch + ga: 'Is cúntóir thú a dhéanann anailís agus achoimre ar théacsanna. Is é do thasc dhá rud a chruthú don téacs seo a leanas:\n1. Ceannlíne ghearr, ghonta (8 bhfocal ar a mhéad)\n2. Réamhrá gearr a dhéanann achoimre ar ábhar an téacs i 2-3 abairt agus a spreagann fiosracht an léitheora\n\nFormáidigh do fhreagra díreach mar seo:\nHEADLINE: [Do cheannlíne anseo]\nINTRO: [Do réamhrá anseo]', + // Walisisch + cy: "Rydych chi'n gynorthwyydd sy'n dadansoddi ac yn crynhoi testunau. Eich tasg yw creu dau beth ar gyfer y testun canlynol:\n1. Pennawd byr, cryno (uchafswm o 8 gair)\n2. Cyflwyniad byr sy'n crynhoi cynnwys y testun mewn 2-3 brawddeg ac yn ennyn chwilfrydedd y darllenydd\n\nFformatiwch eich ateb yn union fel hyn:\nHEADLINE: [Eich pennawd yma]\nINTRO: [Eich cyflwyniad yma]", + // Filipino + fil: 'Ikaw ay isang katulong na nag-aanalisa at bumubuod ng mga teksto. Ang iyong gawain ay lumikha ng dalawang bagay para sa sumusunod na teksto:\n1. Maikling, malinaw na pamagat (hindi hihigit sa 8 salita)\n2. 
Maikling panimula na bumubuod sa nilalaman ng teksto sa 2-3 pangungusap at nakakagising ng kuryosidad ng mambabasa\n\nI-format ang iyong sagot nang eksakto tulad nito:\nHEADLINE: [Ang iyong pamagat dito]\nINTRO: [Ang iyong panimula dito]', + }, +}; +/** + * Hilfsfunktion zum Abrufen des Headline-Prompts für eine bestimmte Sprache + * @param language Sprache (z.B. 'de', 'en', 'fr') + * @returns Headline-Prompt für die angegebene Sprache oder Fallback + */ export function getHeadlinePrompt(language) { + const lang = language.toLowerCase().split('-')[0]; // z.B. 'de-DE' -> 'de' + // Versuche spezifische Sprache, dann Deutsch, dann Englisch, dann erste verfügbare + return ( + SYSTEM_PROMPTS.headline[lang] || + SYSTEM_PROMPTS.headline['de'] || + SYSTEM_PROMPTS.headline['en'] || + Object.values(SYSTEM_PROMPTS.headline)[0] || + 'You are an assistant that analyzes and summarizes texts.' + ); +} diff --git a/apps/memoro/apps/backend/supabase/functions/headline/index.ts b/apps/memoro/apps/backend/supabase/functions/headline/index.ts new file mode 100644 index 000000000..e243e0961 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/headline/index.ts @@ -0,0 +1,508 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. +// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { SYSTEM_PROMPTS } from './constants.ts'; +// Inline error handling utilities to avoid deployment issues +// Atomic status update utilities using RPC to prevent race conditions +async function setMemoErrorStatus(supabaseClient, memoId, processName, error) { + if (!memoId) return; + const errorMessage = error instanceof Error ? 
error.message : String(error); + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_error', { + p_memo_id: memoId, + p_process_name: processName, + p_timestamp: timestamp, + p_reason: errorMessage, + p_details: null, + }); + } catch (dbError) { + console.error(`Error setting error status for memo ${memoId}:`, dbError); + } +} +async function setMemoProcessingStatus(supabaseClient, memoId, processName) { + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_status', { + p_memo_id: memoId, + p_process_name: processName, + p_status: 'processing', + p_timestamp: timestamp, + }); + } catch (dbError) { + console.error(`Error setting processing status for memo ${memoId}:`, dbError); + } +} +async function setMemoCompletedStatus(supabaseClient, memoId, processName, details) { + const timestamp = new Date().toISOString(); + try { + await supabaseClient.rpc('set_memo_process_status_with_details', { + p_memo_id: memoId, + p_process_name: processName, + p_status: 'completed', + p_timestamp: timestamp, + p_details: details, + }); + } catch (dbError) { + console.error(`Error setting completed status for memo ${memoId}:`, dbError); + } +} +function createErrorResponse(error, status = 500, corsHeaders = {}) { + const errorMessage = error instanceof Error ? 
error.message : String(error); + return new Response( + JSON.stringify({ + error: errorMessage, + timestamp: new Date().toISOString(), + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status, + } + ); +} +// Umgebungsvariablen +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini Konfiguration +const GEMINI_API_KEY = Deno.env.get('CREATE_HEADLINE_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI Konfiguration (Backup) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +// Supabase-Client +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ===== PROMPT HELPER FUNCTIONS ===== +/** + * Hilfsfunktion zum Abrufen des richtigen Prompts basierend auf der Sprache + * + * @param type - Der Typ des Prompts (z.B. 'headline') + * @param language - Der Sprachcode (z.B. 'de', 'en') + * @returns Der Prompt in der angegebenen Sprache oder der deutsche Prompt als Fallback + */ function getSystemPrompt(type, language) { + // Extrahiere den Basis-Sprachcode (z.B. 'de-DE' -> 'de') + const baseLanguage = language.split('-')[0].toLowerCase(); + // Prüfe, ob der Prompt-Typ existiert + if (!SYSTEM_PROMPTS[type]) { + console.warn(`Prompt-Typ '${type}' nicht gefunden. 
Verwende 'headline' als Fallback.`); + return SYSTEM_PROMPTS.headline.de; // Fallback auf deutschen Headline-Prompt + } + // Prüfe, ob die Sprache existiert + if (!SYSTEM_PROMPTS[type][baseLanguage]) { + console.warn( + `Sprache '${baseLanguage}' für Prompt-Typ '${type}' nicht gefunden. Verwende 'de' als Fallback.` + ); + return SYSTEM_PROMPTS[type].de; // Fallback auf Deutsch + } + return SYSTEM_PROMPTS[type][baseLanguage]; +} +// ===== PROMPT FUNCTIONS ===== +/** + * Generiert einen Headline-Prompt für die angegebene Sprache + * + * @param language - Der Sprachcode (z.B. 'de', 'en') + * @param text - Der zu analysierende Text + * @returns Der vollständige Prompt für die Headline-Generierung + */ function getHeadlinePrompt(language, text) { + // Hole den System-Prompt für die angegebene Sprache + const systemPrompt = getSystemPrompt('headline', language); + // Kombiniere den System-Prompt mit dem Text + return `${systemPrompt}\n\n${text}`; +} +/** + * Extrahiert die Headline und das Intro aus der Antwort des LLM + * + * @param content - Die Antwort des LLM + * @returns Ein Objekt mit Headline und Intro oder Standardwerte bei Fehlern + */ function extractHeadlineAndIntro(content) { + // Extrahiere Headline und Intro aus der Antwort + const headlineMatch = content.match(/HEADLINE:\s*(.+?)(?=\nINTRO:|$)/s); + const introMatch = content.match(/INTRO:\s*(.+?)$/s); + // Fallback-Werte, falls keine Übereinstimmung gefunden wurde + const headline = headlineMatch?.[1]?.trim() || 'Neue Aufnahme'; + const intro = introMatch?.[1]?.trim() || 'Keine Zusammenfassung verfügbar.'; + return { + headline, + intro, + }; +} +// ===== HEADLINE GENERATION FUNCTIONS ===== +/** + * Generiert eine Überschrift und ein Intro für einen Text mithilfe von Google Gemini Flash + * + * @param text - Der Text, für den eine Überschrift und ein Intro generiert werden soll + * @returns Ein Objekt mit der generierten Überschrift und dem Intro oder null bei Fehlern + */ async function 
generateHeadlineWithGemini(text, language = 'de') { + try { + // Hole den passenden Prompt basierend auf der erkannten Sprache + const prompt = getHeadlinePrompt(language, text); + // Log prompt for debugging + console.log('Gemini prompt:', { + language, + textLength: text.length, + promptLength: prompt.length, + promptPreview: prompt.substring(0, 300) + (prompt.length > 300 ? '...' : ''), + }); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + text: prompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.7, + maxOutputTokens: 300, + }, + }), + } + ); + if (!response.ok) { + const errorText = await response.text(); + console.error('Gemini API Fehler:', errorText); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + // Log AI response for debugging + console.log('Gemini API Response:', { + status: response.status, + contentLength: content.length, + content: content.substring(0, 200) + (content.length > 200 ? '...' : ''), + fullResponse: JSON.stringify(data, null, 2), + }); + // Extrahiere Headline und Intro aus der Antwort + const result = extractHeadlineAndIntro(content); + if (!result.headline || !result.intro) { + console.error('Gemini-Antwort hat nicht das erwartete Format:', content); + return null; + } + console.log('Gemini parsed result:', result); + return result; + } catch (error) { + const errorMessage = error instanceof Error ? 
error.message : 'Unbekannter Fehler'; + console.error('Fehler bei der Gemini Headline/Intro-Generierung:', errorMessage); + return null; + } +} +/** + * Generiert eine Überschrift und ein Intro für einen Text mithilfe von Azure OpenAI + * + * @param text - Der Text, für den eine Überschrift und ein Intro generiert werden soll + * @returns Ein Objekt mit der generierten Überschrift und dem Intro oder Fallback-Werte bei Fehlern + */ async function generateHeadlineWithAzure(text, language = 'de') { + try { + // Hole den passenden Prompt basierend auf der erkannten Sprache + const prompt = getHeadlinePrompt(language, text); + // Log prompt for debugging + console.log('Azure OpenAI prompt:', { + language, + textLength: text.length, + promptLength: prompt.length, + promptPreview: prompt.substring(0, 300) + (prompt.length > 300 ? '...' : ''), + }); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'user', + content: prompt, + }, + ], + max_tokens: 300, + temperature: 0.7, + }), + } + ); + if (!response.ok) { + const errorText = await response.text(); + console.error('Azure OpenAI API Fehler:', errorText); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + // Optional chaining auf choices, analog zum Gemini-Pfad, um Abstürze bei unerwarteten Antworten zu vermeiden + const content = data.choices?.[0]?.message?.content?.trim() || ''; + // Log AI response for debugging + console.log('Azure OpenAI API Response:', { + status: response.status, + contentLength: content.length, + content: content.substring(0, 200) + (content.length > 200 ? '...' 
: ''), + fullResponse: JSON.stringify(data, null, 2), + }); + // Extrahiere Headline und Intro aus der Antwort + const result = extractHeadlineAndIntro(content); + console.log('Azure OpenAI parsed result:', result); + return result; + } catch (error) { + const errorMessage = error instanceof Error ? error.message : 'Unbekannter Fehler'; + console.error('Fehler bei der Azure Headline/Intro-Generierung:', errorMessage); + return { + headline: 'Neue Aufnahme', + intro: 'Keine Zusammenfassung verfügbar.', // Fallback-Intro + }; + } +} +/** + * Hauptfunktion zur Generierung von Headline und Intro + * Versucht zuerst Gemini Flash und fällt bei Fehler auf Azure OpenAI zurück + * + * @param text - Der Text, für den eine Überschrift und ein Intro generiert werden soll + * @returns Ein Objekt mit der generierten Überschrift und dem Intro + */ async function generateHeadlineAndIntro(text, language = 'de') { + try { + // Zuerst mit Gemini versuchen + const geminiResult = await generateHeadlineWithGemini(text, language); + // Wenn Gemini erfolgreich war, Ergebnis zurückgeben + if (geminiResult) { + console.debug('Headline mit Gemini Flash generiert'); + return geminiResult; + } + // Sonst auf Azure OpenAI zurückfallen + console.debug('Fallback auf Azure OpenAI für Headline-Generierung'); + return await generateHeadlineWithAzure(text, language); + } catch (error) { + const errorMessage = error instanceof Error ? 
error.message : 'Unbekannter Fehler'; + console.error('Fehler bei der Headline/Intro-Generierung:', errorMessage); + return { + headline: 'Neue Aufnahme', + intro: 'Keine Zusammenfassung verfügbar.', // Fallback-Intro + }; + } +} +// Hauptfunktion - ohne JWT-Verifizierung für Datenbank-Trigger +serve(async (req) => { + // CORS-Header für Entwicklung + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + // OPTIONS-Anfrage für CORS + if (req.method === 'OPTIONS') { + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + let memo_id_to_update = null; + try { + // Anfrage-Daten extrahieren + const requestData = await req.json(); + const { memo_id } = requestData; + memo_id_to_update = memo_id; + if (!memo_id) { + return createErrorResponse('memo_id ist erforderlich', 400, corsHeaders); + } + // Set processing status + await setMemoProcessingStatus(memoro_sb, memo_id, 'headline_and_intro'); + // Memo aus der Datenbank abrufen + const { data: memo, error: memoError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (memoError || !memo) { + console.error('Fehler beim Abrufen des Memos:', memoError); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'headline_and_intro', + `Memo nicht gefunden: ${memoError?.message || 'Unbekannter Fehler'}` + ); + return createErrorResponse( + `Memo nicht gefunden: ${memoError?.message || 'Unbekannter Fehler'}`, + 404, + corsHeaders + ); + } + let transcript = ''; + // Generate transcript from utterances if available + if ( + memo.source?.utterances && + Array.isArray(memo.source.utterances) && + memo.source.utterances.length > 0 + ) { + // Sort utterances by offset if available and concatenate texts + const sortedUtterances = [...memo.source.utterances].sort((a, b) => { + const offsetA = a.offset || 0; + const offsetB = b.offset || 0; + 
return offsetA - offsetB; + }); + transcript = sortedUtterances + .map((utterance) => utterance.text) + .filter((text) => text && text.trim() !== '') + .join(' '); + } else if (memo.transcript) { + transcript = memo.transcript; + } else if (memo.source?.transcript) { + transcript = memo.source.transcript; + } else if (memo.source?.content) { + transcript = memo.source.content; + } else if (memo.source?.type === 'combined' && memo.source?.additional_recordings) { + transcript = memo.source.additional_recordings + .map((recording) => { + // Try to get transcript from utterances first + if (recording.utterances && Array.isArray(recording.utterances)) { + const sortedUtterances = [...recording.utterances].sort((a, b) => { + const offsetA = a.offset || 0; + const offsetB = b.offset || 0; + return offsetA - offsetB; + }); + return sortedUtterances + .map((utterance) => utterance.text) + .filter((text) => text && text.trim() !== '') + .join(' '); + } + // Fallback to transcript field + return recording.transcript || ''; + }) + .filter(Boolean) + .join('\n\n'); + } + // Ermittle die Sprache des Transkripts + let language = 'de'; // Standard: Deutsch + if (memo.source?.primary_language) { + language = memo.source.primary_language; + console.debug(`Primäre Sprache aus Memo-Quelle erkannt: ${language}`); + } else if ( + memo.source?.languages && + Array.isArray(memo.source.languages) && + memo.source.languages.length > 0 + ) { + language = memo.source.languages[0]; + console.debug(`Sprache aus Memo-Sprachen-Array erkannt: ${language}`); + } else if (memo.metadata?.primary_language) { + language = memo.metadata.primary_language; + console.debug(`Primäre Sprache aus Memo-Metadaten erkannt: ${language}`); + } + console.log(`Verwende Sprache für Headline-Generierung: ${language}`); + if (!transcript) { + console.error('Kein Transkript im Memo gefunden'); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'headline_and_intro', + 'Kein Transkript im Memo gefunden' + ); + return 
createErrorResponse('Kein Transkript im Memo gefunden', 400, corsHeaders); + } + // Headline und Intro generieren + const { headline, intro } = await generateHeadlineAndIntro(transcript, language); + // First get the current memo state for the 'old' value in the broadcast + const oldMemo = { + ...memo, + }; + // Update memo normally + const { error: updateError } = await memoro_sb + .from('memos') + .update({ + title: headline, + intro: intro, + updated_at: new Date().toISOString(), + }) + .eq('id', memo_id); + if (updateError) { + console.error('Fehler beim Aktualisieren des Memos:', updateError); + await setMemoErrorStatus(memoro_sb, memo_id, 'headline_and_intro', updateError); + throw updateError; + } + // Log the update for debugging + console.log('Headline generated and memo updated:', { + memo_id, + old_title: oldMemo.title, + new_title: headline, + user_id: memo.user_id, + }); + // Send broadcast update to notify clients about the title change + try { + const channel = memoro_sb.channel(`memo-updates-${memo_id}`); + // Subscribe first to ensure the channel is ready + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + event: 'memo-updated', + payload: { + type: 'memo-updated', + memoId: memo_id, + changes: { + title: headline, + intro: intro, + updated_at: new Date().toISOString(), + }, + source: 'headline-edge-function', + }, + }); + console.log(`Broadcast sent for memo ${memo_id} title update`); + // Clean up the channel after sending + memoro_sb.removeChannel(channel); + } + }); + } catch (broadcastError) { + console.warn('Failed to send broadcast update:', broadcastError); + // Don't fail the function if broadcast fails + } + // Set completed status + await setMemoCompletedStatus(memoro_sb, memo_id, 'headline_and_intro', { + headline, + intro, + language, + }); + // Erfolgreiche Antwort + return new Response( + JSON.stringify({ + success: true, + headline: headline, + intro: intro, + }), + 
{ + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + console.error('Unerwarteter Fehler in der Headline-Funktion:', error); + // Set error status in database + const errorToLog = error instanceof Error ? error : new Error(String(error)); + await setMemoErrorStatus(memoro_sb, memo_id_to_update, 'headline_and_intro', errorToLog); + // Return error response + return createErrorResponse(`Unerwarteter Fehler: ${errorToLog.message}`, 500, corsHeaders); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/manage-spaces/index.ts b/apps/memoro/apps/backend/supabase/functions/manage-spaces/index.ts new file mode 100644 index 000000000..2cdee9736 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/manage-spaces/index.ts @@ -0,0 +1,271 @@ +import { serve } from 'https://deno.land/std@0.177.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2.38.4'; +// Express backend URL +const EXPRESS_BACKEND_URL = Deno.env.get('EXPRESS_BACKEND_URL'); +serve(async (req) => { + try { + console.log('Manage-Spaces Function called'); + // Create a Supabase client with the service role key + const supabaseUrl = Deno.env.get('SUPABASE_URL'); + const supabaseServiceRoleKey = Deno.env.get('C_SUPABASE_SECRET_KEY'); + if (!supabaseUrl || !supabaseServiceRoleKey) { + console.error('Supabase environment variables not found'); + return new Response( + JSON.stringify({ + error: 'Supabase credentials not found', + }), + { + status: 500, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + if (!EXPRESS_BACKEND_URL) { + console.error('EXPRESS_BACKEND_URL environment variable not found'); + return new Response( + JSON.stringify({ + error: 'Express backend URL not found', + }), + { + status: 500, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + const supabaseClient = createClient(supabaseUrl, supabaseServiceRoleKey); + // Parse request body + 
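+ // Expected payload shape — inferred from the validation and switch handlers below in this file;
+ // field names come from the code, values are illustrative placeholders only:
+ // { "action": "create" | "update" | "delete",
+ //   "space": { "id"?, "name"?, "appId"?, "description"?, "color"? },
+ //   "token": "<Supabase user JWT>" }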
const { action, space, token } = await req.json(); + if (!action || !space || !token) { + return new Response( + JSON.stringify({ + error: 'Action, space details, and token are required', + }), + { + status: 400, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + // Validate request based on action + if (action === 'create' && (!space.name || !space.appId)) { + return new Response( + JSON.stringify({ + error: 'For create action, name and appId are required', + }), + { + status: 400, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + if ((action === 'update' || action === 'delete') && !space.id) { + return new Response( + JSON.stringify({ + error: 'For update or delete action, space id is required', + }), + { + status: 400, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + // Step 1: Call Express backend to perform the action + let expressResult; + let expressUrl; + let expressMethod; + let expressBody; + switch (action) { + case 'create': + expressUrl = `${EXPRESS_BACKEND_URL}/api/spaces`; + expressMethod = 'POST'; + expressBody = JSON.stringify({ + name: space.name, + appId: space.appId, + }); + break; + case 'update': + expressUrl = `${EXPRESS_BACKEND_URL}/api/spaces/${space.id}`; + expressMethod = 'PUT'; + expressBody = JSON.stringify({ + name: space.name, + }); + break; + case 'delete': + expressUrl = `${EXPRESS_BACKEND_URL}/api/spaces/${space.id}`; + expressMethod = 'DELETE'; + expressBody = null; + break; + default: + return new Response( + JSON.stringify({ + error: 'Invalid action', + }), + { + status: 400, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + const expressResponse = await fetch(expressUrl, { + method: expressMethod, + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${token}`, + }, + body: expressBody, + }); + if (!expressResponse.ok) { + const errorText = await expressResponse.text(); + console.error(`Express backend error (${action}):`, 
errorText); + return new Response( + JSON.stringify({ + error: `Error from Express backend: ${errorText}`, + }), + { + status: expressResponse.status, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + expressResult = await expressResponse.json(); + // Step 2: Update local Supabase database based on the action + let supabaseResult; + switch (action) { + case 'create': + // Get user information from the auth token + const { data: user, error: userError } = await supabaseClient.auth.getUser(token); + if (userError) { + console.error('Error getting user from token:', userError); + return new Response( + JSON.stringify({ + error: `Error getting user: ${userError.message}`, + }), + { + status: 401, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + // Use spaceId returned from express backend + const spaceId = expressResult.spaceId; + if (!spaceId) { + return new Response( + JSON.stringify({ + error: 'Express backend did not return a space ID', + }), + { + status: 500, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } + // Create the space in local Supabase + const { data: localSpace, error: insertError } = await supabaseClient + .from('spaces') + .insert({ + id: spaceId, + name: space.name, + description: space.description || '', + color: space.color || '#4CAF50', + user_id: user.user.id, + is_default: false, + }) + .select() + .single(); + if (insertError) { + console.error('Error creating space in Supabase:', insertError); + // Continue anyway since Express backend operation succeeded + supabaseResult = { + warning: `Local database update failed: ${insertError.message}`, + }; + } else { + supabaseResult = localSpace; + } + break; + case 'update': + // Update the space in local Supabase + const { data: updatedSpace, error: updateError } = await supabaseClient + .from('spaces') + .update({ + name: space.name, + description: space.description, + color: space.color, + updated_at: new Date().toISOString(), + }) + 
.eq('id', space.id) + .select() + .single(); + if (updateError) { + console.error('Error updating space in Supabase:', updateError); + supabaseResult = { + warning: `Local database update failed: ${updateError.message}`, + }; + } else { + supabaseResult = updatedSpace; + } + break; + case 'delete': + // Delete the space in local Supabase + const { error: deleteError } = await supabaseClient + .from('spaces') + .delete() + .eq('id', space.id); + if (deleteError) { + console.error('Error deleting space in Supabase:', deleteError); + supabaseResult = { + warning: `Local database delete failed: ${deleteError.message}`, + }; + } else { + supabaseResult = { + success: true, + }; + } + break; + } + // Return success response + return new Response( + JSON.stringify({ + success: true, + action, + expressResult, + localResult: supabaseResult, + }), + { + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } catch (error) { + console.error('Unexpected error:', error); + return new Response( + JSON.stringify({ + error: `Unexpected error: ${error.message}`, + }), + { + status: 500, + headers: { + 'Content-Type': 'application/json', + }, + } + ); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/question-memo/constants.ts b/apps/memoro/apps/backend/supabase/functions/question-memo/constants.ts new file mode 100644 index 000000000..30768cb0e --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/question-memo/constants.ts @@ -0,0 +1,75 @@ +/** + * System-Prompts für die Question-Memo-Funktion in verschiedenen Sprachen + * + * Die Prompts werden als System-Prompt für die AI-Nachrichten verwendet, + * um konsistente und hilfreiche Antworten bei der Fragenbeantwortung zu generieren. 
+ */ /** + * Interface für die Prompt-Konfiguration + */ /** + * System-Prompts für die Question-Memo-Verarbeitung + * + * Unterstützte Sprachen: + * - de: Deutsch + * - en: Englisch + * - fr: Französisch + * - es: Spanisch + * - it: Italienisch + * - nl: Niederländisch + * - pt: Portugiesisch + * - ru: Russisch + * - ja: Japanisch + * - ko: Koreanisch + * - zh: Chinesisch + * - ar: Arabisch + * - hi: Hindi + * - tr: Türkisch + * - pl: Polnisch + */ export const SYSTEM_PROMPTS = { + system: { + // Deutsch + de: 'Du bist ein hilfreicher Assistent, der Fragen auf Basis von Gesprächstranskripten beantwortet. Deine Aufgabe ist es, präzise und relevante Antworten auf Benutzerfragen zu geben, indem du die Informationen aus dem bereitgestellten Transkript nutzt. Antworte direkt und sachlich. Wenn die Antwort nicht im Transkript zu finden ist, weise höflich darauf hin.', + // Englisch + en: 'You are a helpful assistant that answers questions based on conversation transcripts. Your task is to provide precise and relevant answers to user questions by using the information from the provided transcript. Answer directly and factually. If the answer cannot be found in the transcript, politely indicate this.', + // Französisch + fr: 'Vous êtes un assistant utile qui répond aux questions basées sur des transcriptions de conversations. Votre tâche est de fournir des réponses précises et pertinentes aux questions des utilisateurs en utilisant les informations de la transcription fournie. Répondez directement et factuellement. Si la réponse ne peut pas être trouvée dans la transcription, indiquez-le poliment.', + // Spanisch + es: 'Eres un asistente útil que responde preguntas basadas en transcripciones de conversaciones. Tu tarea es proporcionar respuestas precisas y relevantes a las preguntas de los usuarios utilizando la información de la transcripción proporcionada. Responde de forma directa y objetiva. Si la respuesta no se puede encontrar en la transcripción, indícalo cortésmente.', + // Italienisch + it: 'Sei un assistente utile che risponde a domande basate su trascrizioni di conversazioni. Il tuo compito è fornire risposte precise e pertinenti alle domande degli utenti utilizzando le informazioni della trascrizione fornita. Rispondi in modo diretto e fattuale. 
Se la risposta non può essere trovata nella trascrizione, indicalo cortesemente.', + // Niederländisch + nl: 'Je bent een behulpzame assistent die vragen beantwoordt op basis van gesprekstranscripties. Je taak is om precieze en relevante antwoorden te geven op gebruikersvragen door de informatie uit de verstrekte transcriptie te gebruiken. Antwoord direct en feitelijk. Als het antwoord niet in de transcriptie te vinden is, geef dit dan beleefd aan.', + // Portugiesisch + pt: 'Você é um assistente útil que responde perguntas com base em transcrições de conversas. Sua tarefa é fornecer respostas precisas e relevantes às perguntas dos usuários usando as informações da transcrição fornecida. Responda de forma direta e factual. Se a resposta não puder ser encontrada na transcrição, indique isso educadamente.', + // Russisch + ru: 'Вы полезный помощник, который отвечает на вопросы на основе расшифровок разговоров. Ваша задача - предоставлять точные и актуальные ответы на вопросы пользователей, используя информацию из предоставленной расшифровки. Отвечайте прямо и по существу. Если ответ не может быть найден в расшифровке, вежливо укажите на это.', + // Japanisch + ja: 'あなたは会話の転写に基づいて質問に答える有用なアシスタントです。あなたの仕事は、提供された転写の情報を使用して、ユーザーの質問に正確で関連性のある回答を提供することです。直接的かつ事実に基づいて回答してください。転写に答えが見つからない場合は、丁寧にそのことを伝えてください。', + // Koreanisch + ko: '당신은 대화 전사본을 기반으로 질문에 답하는 유용한 어시스턴트입니다. 당신의 임무는 제공된 전사본의 정보를 사용하여 사용자 질문에 정확하고 관련성 있는 답변을 제공하는 것입니다. 직접적이고 사실적으로 답변하세요. 전사본에서 답을 찾을 수 없는 경우 정중하게 알려주세요.', + // Chinesisch (vereinfacht) + zh: '你是一个有用的助手,根据对话转录回答问题。你的任务是使用提供的转录中的信息,为用户的问题提供准确和相关的答案。请直接且基于事实回答。如果在转录中找不到答案,请礼貌地说明。', + // Arabisch + ar: 'أنت مساعد مفيد يجيب على الأسئلة بناءً على نسخ المحادثات. مهمتك هي تقديم إجابات دقيقة وذات صلة لأسئلة المستخدمين باستخدام المعلومات من النسخ المقدمة. أجب بشكل مباشر وواقعي. 
إذا لم يمكن العثور على الإجابة في النسخ، فأشر إلى ذلك بأدب.', + // Hindi + hi: 'आप एक उपयोगी सहायक हैं जो बातचीत के प्रतिलेख के आधार पर प्रश्नों का उत्तर देते हैं। आपका कार्य प्रदान किए गए प्रतिलेख की जानकारी का उपयोग करके उपयोगकर्ता के प्रश्नों के लिए सटीक और प्रासंगिक उत्तर प्रदान करना है। सीधे और तथ्यात्मक रूप से उत्तर दें। यदि प्रतिलेख में उत्तर नहीं मिल सकता है, तो विनम्रता से इसे इंगित करें।', + // Türkisch + tr: 'Konuşma transkriptlerine dayalı olarak soruları yanıtlayan yararlı bir asistansınız. Göreviniz, sağlanan transkriptteki bilgileri kullanarak kullanıcı sorularına kesin ve ilgili yanıtlar vermektir. Doğrudan ve olgusal olarak yanıt verin. Yanıt transkriptte bulunamazsa, bunu kibarca belirtin.', + // Polnisch + pl: 'Jesteś pomocnym asystentem, który odpowiada na pytania na podstawie transkrypcji rozmów. Twoim zadaniem jest udzielanie precyzyjnych i trafnych odpowiedzi na pytania użytkowników, korzystając z informacji z dostarczonej transkrypcji. Odpowiadaj bezpośrednio i rzeczowo. Jeśli odpowiedzi nie można znaleźć w transkrypcji, uprzejmie to wskaż.', + }, +}; +/** + * Hilfsfunktion zum Abrufen des System-Prompts für eine bestimmte Sprache + * @param language Sprache (z.B. 'de', 'en', 'fr') + * @returns System-Prompt für die angegebene Sprache oder Fallback + */ export function getSystemPrompt(language) { + const lang = language.toLowerCase().split('-')[0]; // z.B. 'de-DE' -> 'de' + // Versuche spezifische Sprache, dann Deutsch, dann Englisch, dann erste verfügbare + return ( + SYSTEM_PROMPTS.system[lang] || + SYSTEM_PROMPTS.system['de'] || + SYSTEM_PROMPTS.system['en'] || + Object.values(SYSTEM_PROMPTS.system)[0] || + 'You are a helpful assistant.' 
+ ); +} diff --git a/apps/memoro/apps/backend/supabase/functions/question-memo/index.ts b/apps/memoro/apps/backend/supabase/functions/question-memo/index.ts new file mode 100644 index 000000000..2e26fa344 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/question-memo/index.ts @@ -0,0 +1,607 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. +// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { getSystemPrompt } from './constants.ts'; +import { ROOT_SYSTEM_PROMPTS } from '../_shared/system-prompt.ts'; +/** + * Question Memo Edge Function + * + * Takes a user question and a memo transcript, sends both to the Gemini API, + * and creates a new memory with the answer.
+ * + * @version 1.0.0 + * @date 2025-05-23 + */ // ─── Environment variables ────────────────────────────────────────────── +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini configuration +const GEMINI_API_KEY = Deno.env.get('QUESTION_MEMO_GEMINI_MEMORO') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI configuration (fallback) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Logging helper ────────────────────────────────────────────── +/** + * Extended logging helper with timestamp and log level + */ function log(level, message, data) { + const timestamp = new Date().toISOString(); + const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`; + switch (level.toUpperCase()) { + case 'INFO': + console.log(logMessage); + break; + case 'DEBUG': + console.debug(logMessage); + break; + case 'WARN': + console.warn(logMessage); + break; + case 'ERROR': + console.error(logMessage); + break; + default: + console.log(logMessage); + break; + } + if (data) { + if (level.toUpperCase() === 'ERROR') { + console.error(data); + } else { + console.log(typeof data === 'object' ?
JSON.stringify(data, null, 2) : data); + } + } +} +/** + * Formatiert Transkript mit Speaker-Informationen für besseren Kontext + */ function formatTranscriptWithSpeakers(source) { + // Handle combined memos with additional_recordings + if ( + source.type === 'combined' && + source.additional_recordings && + Array.isArray(source.additional_recordings) + ) { + const transcripts = source.additional_recordings + .map((recording, index) => { + let recordingTranscript = ''; + // Extract transcript from each recording + if (recording.utterances && Array.isArray(recording.utterances)) { + // If recording has utterances, format with speakers if available + if (recording.speakers) { + recordingTranscript = recording.utterances + .map((utterance) => { + const speakerName = recording.speakers[utterance.speakerId] || utterance.speakerId; + return `${speakerName}: ${utterance.text}`; + }) + .join('\n'); + } else { + // No speaker info, just join utterances + recordingTranscript = recording.utterances.map((u) => u.text).join(' '); + } + } else if (recording.transcript) { + // Fallback to transcript field + recordingTranscript = recording.transcript; + } else if (recording.content) { + // Fallback to content field + recordingTranscript = recording.content; + } else if (recording.transcription) { + // Fallback to transcription field + recordingTranscript = recording.transcription; + } + return recordingTranscript; + }) + .filter(Boolean); + // Join all transcripts with a separator + if (transcripts.length > 0) { + return transcripts.join('\n\n--- Nächstes Memo ---\n\n'); + } + } + // Handle regular memos with utterances and speakers + if (source.utterances && source.speakers) { + return source.utterances + .map((utterance) => { + const speakerName = source.speakers[utterance.speakerId] || utterance.speakerId; + return `${speakerName}: ${utterance.text}`; + }) + .join('\n'); + } + // Fallback to other transcript fields + return source.transcript || source.content || 
source.transcription || ''; +} +/** + * Extrahiert erweiterte Kontext-Informationen aus dem Memo-Source und Metadaten + */ function extractContextInfo(source, metadata = {}) { + const transcript = formatTranscriptWithSpeakers(source); + // For combined memos, aggregate speaker count and duration from all recordings + let speakerCount = 0; + let totalDuration = 0; + let language = source.primary_language || source.languages?.[0] || 'unbekannt'; + if (source.type === 'combined' && source.additional_recordings) { + // Collect all unique speakers across all recordings + const allSpeakers = new Set(); + source.additional_recordings.forEach((recording) => { + if (recording.speakers) { + Object.keys(recording.speakers).forEach((speakerId) => allSpeakers.add(speakerId)); + } + // Sum up durations + if (recording.duration) { + totalDuration += recording.duration; + } + }); + speakerCount = allSpeakers.size; + // Use the combined memo's duration if available, otherwise use sum + totalDuration = source.duration || totalDuration; + } else { + // Regular memo + speakerCount = source.speakers ? 
Object.keys(source.speakers).length : 0; + totalDuration = source.duration || 0; + } + // Location aus Metadaten extrahieren + const locationName = metadata.location?.address?.name || null; + const locationAddress = metadata.location?.address?.formattedAddress || null; + // Stats aus Metadaten extrahieren + const wordCount = metadata.stats?.wordCount || null; + const audioDuration = metadata.stats?.audioDuration || totalDuration; + return { + transcript, + duration: audioDuration, + speakerCount, + wordCount, + language, + locationName, + locationAddress, + hasMultipleSpeakers: speakerCount > 1, + hasLocation: !!(locationName || locationAddress), + }; +} +/** + * Sendet Benutzerfrage + Transkript an Gemini und gibt die Antwort zurück + */ async function askQuestionWithGemini( + question, + contextInfo, + language = 'de', + functionIdForLog = 'global' +) { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte Gemini-Anfrage für Frage.`); + try { + // Kontext-Informationen zusammenstellen + const contextParts = []; + // Location hinzufügen falls verfügbar + if (contextInfo.hasLocation) { + if (contextInfo.locationName) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationName}`); + } else if (contextInfo.locationAddress) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationAddress}`); + } + } + // Audio-Stats hinzufügen + const statsInfo = []; + if (contextInfo.hasMultipleSpeakers) { + statsInfo.push(`${contextInfo.speakerCount} Sprecher`); + } + statsInfo.push(`${Math.round(contextInfo.duration)}s Dauer`); + if (contextInfo.wordCount) { + statsInfo.push(`${contextInfo.wordCount} Wörter`); + } + contextParts.push(`Audio-Info: ${statsInfo.join(', ')}`); + const contextFooter = + contextParts.length > 0 + ? 
`\n\nZusätzliche Kontext-Informationen:\n${contextParts.join('\n')}` + : ''; + const systemPrompt = getSystemPrompt(language); + const userPrompt = `Frage: ${question} + +Transkript: +${contextInfo.transcript}${contextFooter} + +${contextInfo.hasMultipleSpeakers ? 'Du kannst bei Bedarf auf spezifische Sprecher verweisen.' : ''}`; + // Prepend system prompt if available for the language + const systemPrePrompt = + ROOT_SYSTEM_PROMPTS.PRE_PROMPT[language] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de']; + // Für Gemini: Kombiniere System-Prompt mit User-Prompt + const prompt = systemPrePrompt + ? `${systemPrePrompt}\n\n${systemPrompt}\n\n${userPrompt}` + : `${systemPrompt}\n\n${userPrompt}`; + log( + 'DEBUG', + `[${functionIdForLog}][LLM-${requestId}] Vollständiger Prompt (Länge: ${prompt.length})` + ); + const startTime = Date.now(); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + text: prompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.7, + maxOutputTokens: 8192, + }, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Gemini Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Gemini API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Erfolgreiche Gemini-Antwort (Länge: ${content.length}).` + ); + log( + 'DEBUG', + `[${functionIdForLog}][LLM-${requestId}] Antwort (erste 100 Zeichen): ${content.substring(0, 
100)}...` + ); + return content; + } catch (error) { + log('ERROR', `[${functionIdForLog}][LLM-${requestId}] Fehler beim Gemini-Request:`, error); + throw error; + } +} +/** + * Sendet Benutzerfrage + Transkript an Azure OpenAI und gibt die Antwort zurück (Fallback) + */ async function askQuestionWithAzure( + question, + contextInfo, + language = 'de', + functionIdForLog = 'global' +) { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte Azure OpenAI-Anfrage für Frage.`); + try { + // Kontext-Informationen zusammenstellen + const contextParts = []; + // Location hinzufügen falls verfügbar + if (contextInfo.hasLocation) { + if (contextInfo.locationName) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationName}`); + } else if (contextInfo.locationAddress) { + contextParts.push(`Aufnahmeort: ${contextInfo.locationAddress}`); + } + } + // Audio-Stats hinzufügen + const statsInfo = []; + if (contextInfo.hasMultipleSpeakers) { + statsInfo.push(`${contextInfo.speakerCount} Sprecher`); + } + statsInfo.push(`${Math.round(contextInfo.duration)}s Dauer`); + if (contextInfo.wordCount) { + statsInfo.push(`${contextInfo.wordCount} Wörter`); + } + contextParts.push(`Audio-Info: ${statsInfo.join(', ')}`); + const contextFooter = + contextParts.length > 0 + ? `\n\nZusätzliche Kontext-Informationen:\n${contextParts.join('\n')}` + : ''; + const systemPrompt = getSystemPrompt(language); + const userPrompt = `Frage: ${question} + +Transkript: +${contextInfo.transcript}${contextFooter} + +${contextInfo.hasMultipleSpeakers ? 'Du kannst bei Bedarf auf spezifische Sprecher verweisen.' : ''}`; + // Prepend system prompt if available for the language + const systemPrePrompt = + ROOT_SYSTEM_PROMPTS.PRE_PROMPT[language] || ROOT_SYSTEM_PROMPTS.PRE_PROMPT['de']; + const combinedSystemPrompt = systemPrePrompt + ? 
`${systemPrePrompt}\n\n${systemPrompt}` + : systemPrompt; + const startTime = Date.now(); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'system', + content: combinedSystemPrompt, + }, + { + role: 'user', + content: userPrompt, + }, + ], + max_tokens: 8192, + temperature: 0.7, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices[0]?.message?.content?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Erfolgreiche Azure OpenAI-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Fehler beim Azure OpenAI-Request:`, + error + ); + throw error; + } +} +/** + * Hauptfunktion zur Beantwortung einer Frage mit Fallback-Logik + */ async function answerQuestion( + question, + contextInfo, + language = 'de', + functionIdForLog = 'global' +) { + try { + // Zuerst mit Gemini versuchen + return await askQuestionWithGemini(question, contextInfo, language, functionIdForLog); + } catch (error) { + log('WARN', `[${functionIdForLog}] Gemini fehlgeschlagen, fallback auf Azure OpenAI`, error); + try { + // Fallback auf Azure OpenAI + return await askQuestionWithAzure(question, contextInfo, language, functionIdForLog); + } 
catch (azureError) { + log('ERROR', `[${functionIdForLog}] Beide LLM-Services fehlgeschlagen`, azureError); + throw new Error('Beide LLM-Services sind nicht verfügbar'); + } + } +} +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Question-Memo-Funktion gestartet`); + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] CORS Preflight-Anfrage bearbeitet`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + try { + const requestData = await req.json(); + const { memo_id, question, user_id } = requestData; + log( + 'INFO', + `[${functionId}] Anfrage erhalten für memo_id: ${memo_id}, Frage: ${question?.substring(0, 50)}...` + ); + if (!memo_id) { + log('ERROR', `[${functionId}] Keine memo_id in der Anfrage gefunden`); + return new Response( + JSON.stringify({ + error: 'memo_id ist erforderlich', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + if (!question || question.trim().length === 0) { + log('ERROR', `[${functionId}] Keine Frage in der Anfrage gefunden`); + return new Response( + JSON.stringify({ + error: 'Frage ist erforderlich', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + log('INFO', `[${functionId}] Rufe Memo mit ID ${memo_id} aus der Datenbank ab`); + // Build query based on whether user_id is provided (from service) or not (from frontend) + let memoQuery = memoro_sb.from('memos').select('*').eq('id', memo_id); + if (user_id) { + // When called from service, filter by user_id for security + memoQuery = memoQuery.eq('user_id', user_id); + } + const { data: memo, error: memoError } = await memoQuery.single(); + if (memoError || !memo) { + 
log('ERROR', `[${functionId}] Memo ${memo_id} nicht gefunden:`, memoError); + return new Response( + JSON.stringify({ + error: 'Memo nicht gefunden', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 404, + } + ); + } + // Kontext-Informationen extrahieren (mit Speaker-Support und Metadaten) + const contextInfo = extractContextInfo(memo.source || {}, memo.metadata || {}); + log( + 'INFO', + `[${functionId}] Extrahierte Kontext-Info: ${contextInfo.speakerCount} Sprecher, ${Math.round(contextInfo.duration)}s, ${contextInfo.wordCount || 'unb.'} Wörter, ${contextInfo.hasLocation ? 'mit Ort' : 'ohne Ort'}, Transkript-Länge: ${contextInfo.transcript.length}` + ); + if (!contextInfo.transcript) { + log('ERROR', `[${functionId}] Kein Transkript im Memo ${memo_id} gefunden`); + return new Response( + JSON.stringify({ + error: 'Kein Transkript im Memo gefunden', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + // Sprache aus Memo extrahieren + const memoLanguage = memo.source?.primary_language || memo.source?.languages?.[0] || 'de'; + const baseLang = memoLanguage.split('-')[0].toLowerCase(); + log( + 'INFO', + `[${functionId}] Sende Frage an LLM: "${question.substring(0, 50)}..." (${contextInfo.hasMultipleSpeakers ? 
'Multi-Speaker' : 'Single-Speaker'} Kontext, Sprache: ${baseLang})` + ); + const answer = await answerQuestion(question.trim(), contextInfo, baseLang, functionId); + if (!answer) { + log('ERROR', `[${functionId}] Keine Antwort vom LLM erhalten`); + return new Response( + JSON.stringify({ + error: 'Keine Antwort vom LLM erhalten', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } + // Get the highest sort_order for this memo + log('INFO', `[${functionId}] Ermittle höchste sort_order für Memo ${memo_id}`); + const { data: maxSortData, error: maxSortError } = await memoro_sb + .from('memories') + .select('sort_order') + .eq('memo_id', memo_id) + .order('sort_order', { + ascending: false, + }) + .limit(1) + .single(); + // If error or no data, use random number above 5000, otherwise increment + const nextSortOrder = + maxSortError || !maxSortData?.sort_order + ? Math.floor(Math.random() * 5000) + 5000 // Random between 5000-9999 + : maxSortData.sort_order + 1; + log('INFO', `[${functionId}] Nächste sort_order: ${nextSortOrder}`); + log('INFO', `[${functionId}] Erstelle neues Memory für Memo ${memo_id} mit der Antwort`); + const { data: newMemory, error: newMemoryError } = await memoro_sb + .from('memories') + .insert({ + memo_id: memo_id, + title: `Frage: ${question.length > 50 ? question.substring(0, 50) + '...' 
: question}`, + content: answer, + media: null, + sort_order: nextSortOrder, + metadata: { + type: 'question', + question: question.trim(), + created_by: 'question_memo_function', + }, + }) + .select() + .single(); + if (newMemoryError) { + log('ERROR', `[${functionId}] Fehler beim Erstellen des Memories:`, newMemoryError); + return new Response( + JSON.stringify({ + error: newMemoryError.message, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } + log('INFO', `[${functionId}] Memory erfolgreich erstellt mit ID ${newMemory.id}`); + log('INFO', `[${functionId}] Question-Memo-Verarbeitung erfolgreich abgeschlossen`); + return new Response( + JSON.stringify({ + success: true, + memory_id: newMemory.id, + answer: answer, + question: question.trim(), + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unerwarteter Fehler bei der Question-Memo-Verarbeitung:`, error); + const errorMessage = error instanceof Error ? error.message : 'Unbekannter Fehler'; + return new Response( + JSON.stringify({ + error: `Unerwarteter Fehler: ${errorMessage}`, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/swift-function/index.ts b/apps/memoro/apps/backend/supabase/functions/swift-function/index.ts new file mode 100644 index 000000000..d531cb849 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/swift-function/index.ts @@ -0,0 +1,391 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. 
+// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +/** + * Question Memo Edge Function (legacy `swift-function` deployment) + * + * Takes a user question and a memo transcript, sends both to the Gemini API, + * and creates a new memory with the answer. + * + * @version 1.0.0 + * @date 2025-05-23 + */ // ─── Environment variables ────────────────────────────────────────────── +// Credentials must come from the environment, matching question-memo; never +// hardcode the service-role key or API keys in source. +const SUPABASE_URL = Deno.env.get('SUPABASE_URL'); +if (!SUPABASE_URL) { + throw new Error('SUPABASE_URL not configured'); +} +const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY'); +if (!SERVICE_KEY) { + throw new Error('C_SUPABASE_SECRET_KEY not configured'); +} +// Google Gemini configuration +const GEMINI_API_KEY = Deno.env.get('QUESTION_MEMO_GEMINI') || ''; +const GEMINI_MODEL = 'gemini-2.0-flash'; +const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models'; +// Azure OpenAI configuration (fallback) +const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com'; +const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY'); +if (!AZURE_OPENAI_KEY) { + throw new Error('AZURE_OPENAI_KEY not configured'); +} +const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se'; +const AZURE_OPENAI_API_VERSION = '2025-01-01-preview'; +const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY); +// ─── Logging helper ────────────────────────────────────────────── +/** + * Extended logging helper with timestamp and log level + */ function log(level, message, data) { + const timestamp = new Date().toISOString(); + const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`; + switch (level.toUpperCase()) { + case 'INFO': + console.log(logMessage); + break; + case 'DEBUG': + console.debug(logMessage); + break; + case 'WARN': + console.warn(logMessage); + break; + case 'ERROR': + console.error(logMessage); +
break; + default: + console.log(logMessage); + break; + } + if (data) { + if (level.toUpperCase() === 'ERROR') { + console.error(data); + } else { + console.log(typeof data === 'object' ? JSON.stringify(data, null, 2) : data); + } + } +} +/** + * Sendet Benutzerfrage + Transkript an Gemini und gibt die Antwort zurück + */ async function askQuestionWithGemini(question, transcript, functionIdForLog = 'global') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte Gemini-Anfrage für Frage.`); + try { + const prompt = `Du bist ein hilfreicher Assistent. Beantworte die folgende Frage basierend auf dem gegebenen Transkript: + +Frage: ${question} + +Transkript: ${transcript} + +Antworte direkt und präzise auf die Frage basierend auf den Informationen im Transkript. Falls die Antwort nicht im Transkript zu finden ist, teile das höflich mit.`; + log( + 'DEBUG', + `[${functionIdForLog}][LLM-${requestId}] Vollständiger Prompt (Länge: ${prompt.length})` + ); + const startTime = Date.now(); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + text: prompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.7, + maxOutputTokens: 512, + }, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Gemini Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Gemini API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + log( + 
'INFO', + `[${functionIdForLog}][LLM-${requestId}] Erfolgreiche Gemini-Antwort (Länge: ${content.length}).` + ); + log( + 'DEBUG', + `[${functionIdForLog}][LLM-${requestId}] Antwort (erste 100 Zeichen): ${content.substring(0, 100)}...` + ); + return content; + } catch (error) { + log('ERROR', `[${functionIdForLog}][LLM-${requestId}] Fehler beim Gemini-Request:`, error); + throw error; + } +} +/** + * Sendet Benutzerfrage + Transkript an Azure OpenAI und gibt die Antwort zurück (Fallback) + */ async function askQuestionWithAzure(question, transcript, functionIdForLog = 'global') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][LLM-${requestId}] Starte Azure OpenAI-Anfrage für Frage.`); + try { + const prompt = `Du bist ein hilfreicher Assistent. Beantworte die folgende Frage basierend auf dem gegebenen Transkript: + +Frage: ${question} + +Transkript: ${transcript} + +Antworte direkt und präzise auf die Frage basierend auf den Informationen im Transkript. 
Falls die Antwort nicht im Transkript zu finden ist, teile das höflich mit.`; + const startTime = Date.now(); + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'system', + content: 'Du bist ein hilfreicher Assistent.', + }, + { + role: 'user', + content: prompt, + }, + ], + max_tokens: 512, + temperature: 0.7, + }), + } + ); + const duration = Date.now() - startTime; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI Antwort erhalten in ${duration}ms, Status: ${response.status}` + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Azure OpenAI API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices[0]?.message?.content?.trim() || ''; + log( + 'INFO', + `[${functionIdForLog}][LLM-${requestId}] Erfolgreiche Azure OpenAI-Antwort (Länge: ${content.length}).` + ); + return content; + } catch (error) { + log( + 'ERROR', + `[${functionIdForLog}][LLM-${requestId}] Fehler beim Azure OpenAI-Request:`, + error + ); + throw error; + } +} +/** + * Hauptfunktion zur Beantwortung einer Frage mit Fallback-Logik + */ async function answerQuestion(question, transcript, functionIdForLog = 'global') { + try { + // Zuerst mit Gemini versuchen + return await askQuestionWithGemini(question, transcript, functionIdForLog); + } catch (error) { + log('WARN', `[${functionIdForLog}] Gemini fehlgeschlagen, fallback auf Azure OpenAI`, error); + try { + // Fallback auf Azure OpenAI + return await askQuestionWithAzure(question, transcript, functionIdForLog); + } catch (azureError) { + 
log('ERROR', `[${functionIdForLog}] Beide LLM-Services fehlgeschlagen`, azureError); + throw new Error('Beide LLM-Services sind nicht verfügbar'); + } + } +} +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Question-Memo-Funktion gestartet`); + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] CORS Preflight-Anfrage bearbeitet`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + try { + const requestData = await req.json(); + const { memo_id, question } = requestData; + log( + 'INFO', + `[${functionId}] Anfrage erhalten für memo_id: ${memo_id}, Frage: ${question?.substring(0, 50)}...` + ); + if (!memo_id) { + log('ERROR', `[${functionId}] Keine memo_id in der Anfrage gefunden`); + return new Response( + JSON.stringify({ + error: 'memo_id ist erforderlich', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + if (!question || question.trim().length === 0) { + log('ERROR', `[${functionId}] Keine Frage in der Anfrage gefunden`); + return new Response( + JSON.stringify({ + error: 'Frage ist erforderlich', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + log('INFO', `[${functionId}] Rufe Memo mit ID ${memo_id} aus der Datenbank ab`); + const { data: memo, error: memoError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (memoError || !memo) { + log('ERROR', `[${functionId}] Memo ${memo_id} nicht gefunden:`, memoError); + return new Response( + JSON.stringify({ + error: 'Memo nicht gefunden', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 404, + } + ); + } + // Transkript 
extrahieren + const transcript = + memo.source?.content || memo.source?.transcription || memo.source?.transcript || ''; + log('INFO', `[${functionId}] Extrahiertes Transkript (Länge: ${transcript.length})`); + if (!transcript) { + log('ERROR', `[${functionId}] Kein Transkript im Memo ${memo_id} gefunden`); + return new Response( + JSON.stringify({ + error: 'Kein Transkript im Memo gefunden', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 400, + } + ); + } + log('INFO', `[${functionId}] Sende Frage an LLM: "${question.substring(0, 50)}..."`); + const answer = await answerQuestion(question.trim(), transcript, functionId); + if (!answer) { + log('ERROR', `[${functionId}] Keine Antwort vom LLM erhalten`); + return new Response( + JSON.stringify({ + error: 'Keine Antwort vom LLM erhalten', + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } + log('INFO', `[${functionId}] Erstelle neues Memory für Memo ${memo_id} mit der Antwort`); + const { data: newMemory, error: newMemoryError } = await memoro_sb + .from('memories') + .insert({ + memo_id: memo_id, + title: `Frage: ${question.length > 50 ? question.substring(0, 50) + '...' 
: question}`, + content: answer, + media: null, + metadata: { + type: 'question', + question: question.trim(), + created_by: 'question_memo_function', + }, + }) + .select() + .single(); + if (newMemoryError) { + log('ERROR', `[${functionId}] Fehler beim Erstellen des Memories:`, newMemoryError); + return new Response( + JSON.stringify({ + error: newMemoryError.message, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } + log('INFO', `[${functionId}] Memory erfolgreich erstellt mit ID ${newMemory.id}`); + log('INFO', `[${functionId}] Question-Memo-Verarbeitung erfolgreich abgeschlossen`); + return new Response( + JSON.stringify({ + success: true, + memory_id: newMemory.id, + answer: answer, + question: question.trim(), + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unerwarteter Fehler bei der Question-Memo-Verarbeitung:`, error); + const errorMessage = error instanceof Error ? 
error.message : 'Unbekannter Fehler'; + return new Response( + JSON.stringify({ + error: `Unerwarteter Fehler: ${errorMessage}`, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 500, + } + ); + } +}); diff --git a/apps/memoro/apps/backend/supabase/functions/translate/deno.json b/apps/memoro/apps/backend/supabase/functions/translate/deno.json new file mode 100644 index 000000000..0967ef424 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/translate/deno.json @@ -0,0 +1 @@ +{} diff --git a/apps/memoro/apps/backend/supabase/functions/translate/index.ts b/apps/memoro/apps/backend/supabase/functions/translate/index.ts new file mode 100644 index 000000000..08ab8eb86 --- /dev/null +++ b/apps/memoro/apps/backend/supabase/functions/translate/index.ts @@ -0,0 +1,659 @@ +// Follow this setup guide to integrate the Deno language server with your editor: +// https://deno.land/manual/getting_started/setup_your_environment +// This enables autocomplete, go to definition, etc. +// Setup type definitions for built-in Supabase Runtime APIs +import 'jsr:@supabase/functions-js/edge-runtime.d.ts'; +import { serve } from 'https://deno.land/std@0.215.0/http/server.ts'; +import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'; +import { getTranscriptText, getRecordingTranscript } from '../_shared/transcript-utils.ts'; +// Inline error handling utilities to avoid deployment issues +// Atomic status update utilities using RPC to prevent race conditions +async function setMemoErrorStatus(supabaseClient, memoId, processName, error) { + if (!memoId) return; + const errorMessage = error instanceof Error ? 
error.message : String(error);
+ const timestamp = new Date().toISOString();
+ try {
+ await supabaseClient.rpc('set_memo_process_error', {
+ p_memo_id: memoId,
+ p_process_name: processName,
+ p_timestamp: timestamp,
+ p_reason: errorMessage,
+ p_details: null,
+ });
+ } catch (dbError) {
+ console.error(`Error setting error status for memo ${memoId}:`, dbError);
+ }
+}
+async function setMemoProcessingStatus(supabaseClient, memoId, processName) {
+ const timestamp = new Date().toISOString();
+ try {
+ await supabaseClient.rpc('set_memo_process_status', {
+ p_memo_id: memoId,
+ p_process_name: processName,
+ p_status: 'processing',
+ p_timestamp: timestamp,
+ });
+ } catch (dbError) {
+ console.error(`Error setting processing status for memo ${memoId}:`, dbError);
+ }
+}
+async function setMemoCompletedStatus(supabaseClient, memoId, processName, details) {
+ const timestamp = new Date().toISOString();
+ try {
+ await supabaseClient.rpc('set_memo_process_status_with_details', {
+ p_memo_id: memoId,
+ p_process_name: processName,
+ p_status: 'completed',
+ p_timestamp: timestamp,
+ p_details: details,
+ });
+ } catch (dbError) {
+ console.error(`Error setting completed status for memo ${memoId}:`, dbError);
+ }
+}
+function createErrorResponse(error, status = 500, corsHeaders = {}) {
+ const errorMessage = error instanceof Error ? error.message : String(error);
+ return new Response(
+ JSON.stringify({
+ error: errorMessage,
+ timestamp: new Date().toISOString(),
+ }),
+ {
+ headers: {
+ ...corsHeaders,
+ 'Content-Type': 'application/json',
+ },
+ status,
+ }
+ );
+}
+/**
+ * Translate Edge Function
+ *
+ * Translates all fields of a memo entry into a target language.
+ * Translated fields: transcript, headline, intro, and all memory entries (blueprints).
+ * The translated content is written to a new memo; the original memo keeps a
+ * back-reference to the translation in its metadata.
+ *
+ * @version 1.0.0
+ * @date 2025-05-26
+ */ // ─── Environment variables ──────────────────────────────────────────────
+const SUPABASE_URL = Deno.env.get('SUPABASE_URL');
+if (!SUPABASE_URL) {
+ throw new Error('SUPABASE_URL not configured');
+}
+const SERVICE_KEY = Deno.env.get('C_SUPABASE_SECRET_KEY');
+if (!SERVICE_KEY) {
+ throw new Error('C_SUPABASE_SECRET_KEY not configured');
+}
+// Google Gemini configuration (primary translation backend)
+// NOTE: an unset key is tolerated here; every Gemini call will then fail and
+// translation falls back to Azure OpenAI below.
+const GEMINI_API_KEY = Deno.env.get('TRANSLATE_MEMO_GEMINI_MEMORO') || '';
+const GEMINI_MODEL = 'gemini-2.0-flash';
+const GEMINI_ENDPOINT = 'https://generativelanguage.googleapis.com/v1beta/models';
+// Azure OpenAI configuration (fallback)
+const AZURE_OPENAI_ENDPOINT = 'https://memoroseopenai.openai.azure.com';
+const AZURE_OPENAI_KEY = Deno.env.get('AZURE_OPENAI_KEY');
+if (!AZURE_OPENAI_KEY) {
+ throw new Error('AZURE_OPENAI_KEY not configured');
+}
+const AZURE_OPENAI_DEPLOYMENT = 'gpt-4.1-mini-se';
+const AZURE_OPENAI_API_VERSION = '2025-01-01-preview';
+const memoro_sb = createClient(SUPABASE_URL, SERVICE_KEY);
+// ─── Logging helper ──────────────────────────────────────────────
+function log(level, message, data) {
+ const timestamp = new Date().toISOString();
+ const logMessage = `[${timestamp}] [${level.toUpperCase()}] ${message}`;
+ switch (level.toUpperCase()) {
+ case 'INFO':
+ console.log(logMessage);
+ break;
+ case 'DEBUG':
+ console.debug(logMessage);
+ break;
+ case 'WARN':
+ console.warn(logMessage);
+ break;
+ case 'ERROR':
+ console.error(logMessage);
+ break;
+ default:
+ console.log(logMessage);
+ break;
+ }
+ if (data) {
+ if (level.toUpperCase() === 'ERROR') {
+ console.error(data);
+ } else {
+ console.log(typeof data === 'object' ?
JSON.stringify(data, null, 2) : data); + } + } +} +// ─── Sprach-Mapping ────────────────────────────────────────────── +const LANGUAGE_NAMES = { + de: 'German', + en: 'English', + es: 'Spanish', + fr: 'French', + it: 'Italian', + pt: 'Portuguese', + nl: 'Dutch', + pl: 'Polish', + ru: 'Russian', + ja: 'Japanese', + ko: 'Korean', + zh: 'Chinese', + ar: 'Arabic', + hi: 'Hindi', + tr: 'Turkish', + sv: 'Swedish', + da: 'Danish', + no: 'Norwegian', + fi: 'Finnish', + cs: 'Czech', + sk: 'Slovak', + hu: 'Hungarian', + ro: 'Romanian', + bg: 'Bulgarian', + hr: 'Croatian', + sr: 'Serbian', + sl: 'Slovenian', + et: 'Estonian', + lv: 'Latvian', + lt: 'Lithuanian', + mt: 'Maltese', + ga: 'Irish', + el: 'Greek', + uk: 'Ukrainian', + bn: 'Bengali', + ur: 'Urdu', + fa: 'Persian', + vi: 'Vietnamese', + id: 'Indonesian', +}; +function getLanguageName(languageCode) { + return LANGUAGE_NAMES[languageCode.toLowerCase()] || languageCode; +} +// ─── Übersetzungsfunktionen ────────────────────────────────────────────── +/** + * Übersetzt Text mit Google Gemini Flash + */ async function translateWithGemini(text, targetLanguage, functionIdForLog = 'global') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][Gemini-${requestId}] Starte Übersetzung.`); + try { + const targetLanguageName = getLanguageName(targetLanguage); + const prompt = `Translate the following text to ${targetLanguageName}. Keep the original formatting, structure, and meaning. 
Only return the translated text without any explanations or additions:\n\n${text}`; + log( + 'DEBUG', + `[${functionIdForLog}][Gemini-${requestId}] Prompt erstellt für Zielsprache: ${targetLanguageName}` + ); + const response = await fetch( + `${GEMINI_ENDPOINT}/${GEMINI_MODEL}:generateContent?key=${GEMINI_API_KEY}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + contents: [ + { + parts: [ + { + text: prompt, + }, + ], + }, + ], + generationConfig: { + temperature: 0.3, + maxOutputTokens: Math.min(8192, Math.max(512, text.length * 2)), + }, + }), + } + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][Gemini-${requestId}] Gemini API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Gemini API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.candidates?.[0]?.content?.parts?.[0]?.text?.trim() || ''; + if (!content) { + log('ERROR', `[${functionIdForLog}][Gemini-${requestId}] Leere Antwort von Gemini`); + return null; + } + log( + 'INFO', + `[${functionIdForLog}][Gemini-${requestId}] Übersetzung erfolgreich (${content.length} Zeichen)` + ); + return content; + } catch (error) { + const errorMessage = error instanceof Error ? error.message : 'Unbekannter Fehler'; + log( + 'ERROR', + `[${functionIdForLog}][Gemini-${requestId}] Fehler bei der Gemini-Übersetzung:`, + errorMessage + ); + return null; + } +} +/** + * Übersetzt Text mit Azure OpenAI + */ async function translateWithAzure(text, targetLanguage, functionIdForLog = 'global') { + const requestId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionIdForLog}][Azure-${requestId}] Starte Azure OpenAI Übersetzung.`); + try { + const targetLanguageName = getLanguageName(targetLanguage); + const prompt = `Translate the following text to ${targetLanguageName}. Keep the original formatting, structure, and meaning. 
Only return the translated text without any explanations or additions:\n\n${text}`; + const response = await fetch( + `${AZURE_OPENAI_ENDPOINT}/openai/deployments/${AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=${AZURE_OPENAI_API_VERSION}`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + 'api-key': AZURE_OPENAI_KEY, + }, + body: JSON.stringify({ + messages: [ + { + role: 'system', + content: + 'You are a professional translator. Translate the given text accurately while preserving formatting and meaning.', + }, + { + role: 'user', + content: prompt, + }, + ], + max_tokens: Math.min(8192, Math.max(512, text.length * 2)), + temperature: 0.3, + }), + } + ); + if (!response.ok) { + const errorText = await response.text(); + log( + 'ERROR', + `[${functionIdForLog}][Azure-${requestId}] Azure OpenAI API Fehler: ${response.status}`, + errorText + ); + throw new Error(`Azure OpenAI API Fehler: ${response.status} ${errorText}`); + } + const data = await response.json(); + const content = data.choices[0]?.message?.content?.trim() || text; // Fallback auf Originaltext + log( + 'INFO', + `[${functionIdForLog}][Azure-${requestId}] Azure-Übersetzung erfolgreich (${content.length} Zeichen)` + ); + return content; + } catch (error) { + const errorMessage = error instanceof Error ? 
error.message : 'Unbekannter Fehler'; + log( + 'ERROR', + `[${functionIdForLog}][Azure-${requestId}] Fehler bei der Azure-Übersetzung:`, + errorMessage + ); + return text; // Fallback auf Originaltext + } +} +/** + * Hauptfunktion zur Übersetzung - versucht zuerst Gemini, dann Azure + */ async function translateText(text, targetLanguage, functionIdForLog = 'global') { + if (!text || text.trim().length === 0) { + return text; + } + try { + // Zuerst mit Gemini versuchen + const geminiResult = await translateWithGemini(text, targetLanguage, functionIdForLog); + if (geminiResult) { + log('DEBUG', `[${functionIdForLog}] Übersetzung mit Gemini Flash erfolgreich`); + return geminiResult; + } + // Fallback auf Azure OpenAI + log('DEBUG', `[${functionIdForLog}] Fallback auf Azure OpenAI für Übersetzung`); + return await translateWithAzure(text, targetLanguage, functionIdForLog); + } catch (error) { + const errorMessage = error instanceof Error ? error.message : 'Unbekannter Fehler'; + log('ERROR', `[${functionIdForLog}] Fehler bei der Übersetzung:`, errorMessage); + return text; // Fallback auf Originaltext + } +} +// ─── Hauptfunktion ────────────────────────────────────────────── +serve(async (req) => { + const functionId = crypto.randomUUID().substring(0, 8); + log('INFO', `[${functionId}] Translate-Funktion gestartet`); + // CORS-Header für Entwicklung + const corsHeaders = { + 'Access-Control-Allow-Origin': '*', + 'Access-Control-Allow-Methods': 'POST, OPTIONS', + 'Access-Control-Allow-Headers': 'Content-Type, Authorization', + }; + // OPTIONS-Anfrage für CORS + if (req.method === 'OPTIONS') { + log('DEBUG', `[${functionId}] CORS Preflight-Anfrage bearbeitet`); + return new Response(null, { + headers: corsHeaders, + status: 204, + }); + } + let memo_id_to_update = null; + try { + // Anfrage-Daten extrahieren + const requestData = await req.json(); + const { memo_id, target_language } = requestData; + memo_id_to_update = memo_id; + log( + 'INFO', + `[${functionId}] 
Anfrage erhalten für memo_id: ${memo_id}, Zielsprache: ${target_language}` + ); + if (!memo_id) { + return createErrorResponse('memo_id ist erforderlich', 400, corsHeaders); + } + if (!target_language) { + return createErrorResponse('target_language ist erforderlich', 400, corsHeaders); + } + // Set processing status + await setMemoProcessingStatus(memoro_sb, memo_id, 'translate'); + // Memo aus der Datenbank abrufen + const { data: memo, error: memoError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (memoError || !memo) { + log('ERROR', `[${functionId}] Fehler beim Abrufen des Memos:`, memoError); + await setMemoErrorStatus( + memoro_sb, + memo_id, + 'translate', + `Memo nicht gefunden: ${memoError?.message || 'Unbekannter Fehler'}` + ); + return createErrorResponse( + `Memo nicht gefunden: ${memoError?.message || 'Unbekannter Fehler'}`, + 404, + corsHeaders + ); + } + log('INFO', `[${functionId}] Memo erfolgreich abgerufen, beginne mit Übersetzung`); + // 1. Transcript übersetzen (from utterances or legacy fields) + let translatedTranscript = ''; + let transcript = getTranscriptText(memo); + // Handle combined memos with additional_recordings structure + if (!transcript && memo.source?.type === 'combined' && memo.source?.additional_recordings) { + transcript = memo.source.additional_recordings + .map((recording) => getRecordingTranscript(recording)) + .filter(Boolean) + .join('\n\n'); + } + if (transcript) { + log('INFO', `[${functionId}] Übersetze Transcript (${transcript.length} Zeichen)`); + translatedTranscript = await translateText(transcript, target_language, functionId); + } + // 2. Headline übersetzen + let translatedHeadline = ''; + if (memo.title) { + log('INFO', `[${functionId}] Übersetze Headline: "${memo.title}"`); + translatedHeadline = await translateText(memo.title, target_language, functionId); + } + // 3. 
Intro übersetzen + let translatedIntro = ''; + if (memo.intro) { + log('INFO', `[${functionId}] Übersetze Intro (${memo.intro.length} Zeichen)`); + translatedIntro = await translateText(memo.intro, target_language, functionId); + } + // 4. Neue übersetztes Memo erstellen + log( + 'INFO', + `[${functionId}] Erstelle neues übersetztes Memo basierend auf Original ${memo_id}` + ); + // Bereite source für das neue Memo vor + let newSource = { + ...memo.source, + }; + if (translatedTranscript) { + if (memo.source?.content) { + newSource.content = translatedTranscript; + } else if (memo.source?.transcription) { + newSource.transcription = translatedTranscript; + } else if (memo.source?.transcript) { + newSource.transcript = translatedTranscript; + } else if (memo.source?.type === 'combined' && memo.source?.additional_recordings) { + // Für combined memos, übersetze jeden transcript in den additional_recordings + const translatedRecordings = await Promise.all( + memo.source.additional_recordings.map(async (recording) => { + if (recording.transcript) { + const translated = await translateText( + recording.transcript, + target_language, + functionId + ); + return { + ...recording, + transcript: translated, + }; + } + return recording; + }) + ); + newSource.additional_recordings = translatedRecordings; + } + } + // Bereite Metadata für das neue Memo vor (mit Referenz zum Original) + const newMetadata = { + ...memo.metadata, + translation: { + source_memo_id: memo_id, + source_language: + memo.source?.primary_language || memo.metadata?.primary_language || 'unknown', + target_language: target_language, + translated_at: new Date().toISOString(), + translation_method: 'ai', + translator_model: GEMINI_MODEL, + }, + }; + // Erstelle das neue übersetztes Memo + const { data: newMemo, error: createError } = await memoro_sb + .from('memos') + .insert({ + title: translatedHeadline || memo.title, + intro: translatedIntro || memo.intro, + user_id: memo.user_id, + space_id: memo.space_id, 
+ source: newSource, + metadata: newMetadata, + is_pinned: false, + is_archived: false, + is_public: memo.is_public, + }) + .select() + .single(); + if (createError) { + log('ERROR', `[${functionId}] Fehler beim Erstellen des übersetzten Memos:`, createError); + await setMemoErrorStatus(memoro_sb, memo_id, 'translate', createError); + throw createError; + } + log('INFO', `[${functionId}] Neues übersetztes Memo erstellt mit ID: ${newMemo.id}`); + const newMemoId = newMemo.id; + // 4.1. Aktualisiere das Original-Memo mit Referenz zur Übersetzung + try { + // Lade aktuelles Original-Memo für Broadcast + const { data: originalMemo, error: fetchError } = await memoro_sb + .from('memos') + .select('*') + .eq('id', memo_id) + .single(); + if (!fetchError && originalMemo) { + const currentMetadata = originalMemo.metadata || {}; + const existingTranslations = currentMetadata.translations || []; + // Füge neue Übersetzung zur Liste hinzu (verhindere Duplikate) + const updatedTranslations = existingTranslations.filter( + (t) => t.target_language !== target_language + ); + updatedTranslations.push({ + memo_id: newMemoId, + target_language: target_language, + translated_at: new Date().toISOString(), + translator_model: GEMINI_MODEL, + }); + const updatedMetadata = { + ...currentMetadata, + translations: updatedTranslations, + }; + const { error: updateError } = await memoro_sb + .from('memos') + .update({ + metadata: updatedMetadata, + }) + .eq('id', memo_id); + if (updateError) { + log( + 'WARN', + `[${functionId}] Fehler beim Aktualisieren der Original-Memo-Metadaten:`, + updateError + ); + } else { + log( + 'INFO', + `[${functionId}] Original-Memo erfolgreich mit Übersetzungsreferenz aktualisiert` + ); + + // Send broadcast update to notify clients about the translation reference + try { + const channel = memoro_sb.channel(`memo-updates-${memo_id}`); + + channel.subscribe(async (status) => { + if (status === 'SUBSCRIBED') { + await channel.send({ + type: 'broadcast', + 
event: 'memo-updated', + payload: { + id: memo_id, + old: originalMemo, + new: { + ...originalMemo, + metadata: updatedMetadata, + }, + user_id: memo.user_id, + }, + }); + log( + 'INFO', + `[${functionId}] Broadcast sent for memo ${memo_id} translation reference update` + ); + // Clean up the channel after sending + memoro_sb.removeChannel(channel); + } + }); + } catch (broadcastError) { + log('WARN', `[${functionId}] Failed to send broadcast update:`, broadcastError); + // Don't fail the function if broadcast fails + } + } + } + } catch (referenceError) { + log('WARN', `[${functionId}] Fehler beim Erstellen der Rückreferenz:`, referenceError); + // Nicht kritisch - Übersetzung ist bereits erstellt + } + // 5. Alle Memories (Blueprint-Antworten) für das neue Memo erstellen + const { data: memories, error: memoriesError } = await memoro_sb + .from('memories') + .select('*') + .eq('memo_id', memo_id); + let translatedMemoriesCount = 0; + if (memoriesError) { + log('WARN', `[${functionId}] Fehler beim Abrufen der Memories:`, memoriesError); + } else if (memories && memories.length > 0) { + log( + 'INFO', + `[${functionId}] Erstelle ${memories.length} übersetzte Memory-Einträge für neues Memo` + ); + for (const memory of memories) { + if (memory.content) { + const translatedContent = await translateText( + memory.content, + target_language, + functionId + ); + const translatedTitle = memory.title + ? 
await translateText(memory.title, target_language, functionId) + : memory.title; + // Erstelle neues Memory für das übersetzte Memo + const { error: memoryCreateError } = await memoro_sb.from('memories').insert({ + memo_id: newMemoId, + title: translatedTitle, + content: translatedContent, + media: memory.media, + sort_order: memory.sort_order, + metadata: { + ...memory.metadata, + translated_from_memory_id: memory.id, + translation: { + target_language: target_language, + translated_at: new Date().toISOString(), + }, + }, + }); + if (memoryCreateError) { + log( + 'WARN', + `[${functionId}] Fehler beim Erstellen des übersetzten Memory:`, + memoryCreateError + ); + } else { + log( + 'DEBUG', + `[${functionId}] Übersetztes Memory erfolgreich erstellt für Original Memory ${memory.id}` + ); + translatedMemoriesCount++; + } + } + } + } + // Set completed status + await setMemoCompletedStatus(memoro_sb, memo_id, 'translate', { + target_language, + new_memo_id: newMemoId, + translated_fields: { + transcript: !!translatedTranscript, + headline: !!translatedHeadline, + intro: !!translatedIntro, + memories_count: translatedMemoriesCount, + }, + }); + log( + 'INFO', + `[${functionId}] Übersetzung erfolgreich abgeschlossen für Memo ${memo_id}, neues Memo erstellt: ${newMemoId}` + ); + // Erfolgreiche Antwort + return new Response( + JSON.stringify({ + success: true, + original_memo_id: memo_id, + new_memo_id: newMemoId, + translated_fields: { + transcript: !!translatedTranscript, + headline: !!translatedHeadline, + intro: !!translatedIntro, + memories_count: translatedMemoriesCount, + }, + target_language, + }), + { + headers: { + ...corsHeaders, + 'Content-Type': 'application/json', + }, + status: 200, + } + ); + } catch (error) { + log('ERROR', `[${functionId}] Unerwarteter Fehler in der Translate-Funktion:`, error); + // Set error status in database + const errorToLog = error instanceof Error ? 
error : new Error(String(error)); + await setMemoErrorStatus(memoro_sb, memo_id_to_update, 'translate', errorToLog); + // Return error response + return createErrorResponse(`Unerwarteter Fehler: ${errorToLog.message}`, 500, corsHeaders); + } +}); diff --git a/apps/memoro/apps/backend/test/jest-setup.ts b/apps/memoro/apps/backend/test/jest-setup.ts new file mode 100644 index 000000000..679a5a6d5 --- /dev/null +++ b/apps/memoro/apps/backend/test/jest-setup.ts @@ -0,0 +1,15 @@ +// Global test setup +// Add any global test configuration here + +// Increase timeout for longer running tests +jest.setTimeout(30000); + +// Mock console methods to reduce noise during tests +global.console = { + ...console, + log: jest.fn(), + debug: jest.fn(), + info: jest.fn(), + warn: jest.fn(), + error: jest.fn(), +}; diff --git a/apps/memoro/apps/backend/tsconfig.build.json b/apps/memoro/apps/backend/tsconfig.build.json new file mode 100644 index 000000000..9d5195a3c --- /dev/null +++ b/apps/memoro/apps/backend/tsconfig.build.json @@ -0,0 +1,4 @@ +{ + "extends": "./tsconfig.json", + "exclude": ["node_modules", "test", "dist", "supabase", "**/*spec.ts", "jest.config.js"] +} diff --git a/apps/memoro/apps/backend/tsconfig.json b/apps/memoro/apps/backend/tsconfig.json new file mode 100644 index 000000000..8f5aedf3c --- /dev/null +++ b/apps/memoro/apps/backend/tsconfig.json @@ -0,0 +1,21 @@ +{ + "compilerOptions": { + "module": "commonjs", + "declaration": true, + "removeComments": true, + "emitDecoratorMetadata": true, + "experimentalDecorators": true, + "allowSyntheticDefaultImports": true, + "target": "ES2021", + "sourceMap": true, + "outDir": "./dist", + "baseUrl": "./", + "incremental": true, + "skipLibCheck": true, + "strictNullChecks": false, + "noImplicitAny": false, + "strictBindCallApply": false, + "forceConsistentCasingInFileNames": false, + "noFallthroughCasesInSwitch": false + } +} diff --git a/apps/memoro/apps/backend/verify-build.sh b/apps/memoro/apps/backend/verify-build.sh 
new file mode 100755 index 000000000..08433b978 --- /dev/null +++ b/apps/memoro/apps/backend/verify-build.sh @@ -0,0 +1,54 @@ +#!/bin/bash +# Script to verify the build and debug logging + +echo "=== Build Verification Script ===" +echo "Current directory: $(pwd)" +echo "" + +echo "1. Checking if dist directory exists..." +if [ -d "dist" ]; then + echo "✓ dist directory exists" + echo " Last modified: $(stat -f "%Sm" dist 2>/dev/null || stat -c "%y" dist 2>/dev/null)" +else + echo "✗ dist directory not found" +fi +echo "" + +echo "2. Checking main.js in dist..." +if [ -f "dist/main.js" ]; then + echo "✓ dist/main.js exists" + echo " Checking for debug logs..." + grep -n "STARTUP DEBUG" dist/main.js | head -5 +else + echo "✗ dist/main.js not found" +fi +echo "" + +echo "3. Checking controller debug logs..." +if [ -f "dist/memoro/memoro.controller.js" ]; then + echo "✓ dist/memoro/memoro.controller.js exists" + echo " Checking for CRITICAL DEBUG logs..." + grep -n "CRITICAL DEBUG" dist/memoro/memoro.controller.js | head -5 +else + echo "✗ dist/memoro/memoro.controller.js not found" +fi +echo "" + +echo "4. Building the project..." +npm run build +echo "" + +echo "5. Checking build output again..." +echo " main.js debug logs:" +grep "STARTUP DEBUG" dist/main.js | head -3 +echo "" + +echo "6. Docker image info..." +echo " Current cloudbuild tag: $(grep -o 'memoro-service:v[0-9.]*' cloudbuild-memoro.yaml | head -1)" +echo "" + +echo "=== Recommendations ===" +echo "1. Increment the version in cloudbuild-memoro.yaml (currently v4.9.8)" +echo "2. Ensure Cloud Run environment variables are set correctly" +echo "3. Check Cloud Run logs filter - it might be filtering INFO level logs" +echo "4. 
Use console.error() instead of console.log() for critical debug messages" \ No newline at end of file diff --git a/apps/memoro/apps/landing/.env.example b/apps/memoro/apps/landing/.env.example new file mode 100644 index 000000000..a374ffa34 --- /dev/null +++ b/apps/memoro/apps/landing/.env.example @@ -0,0 +1,3 @@ +# Umami Analytics Configuration (optional) +# PUBLIC_UMAMI_WEBSITE_ID=your-website-id +# PUBLIC_UMAMI_URL=https://your-umami-instance.com/script.js \ No newline at end of file diff --git a/apps/memoro/apps/landing/.gitignore b/apps/memoro/apps/landing/.gitignore new file mode 100644 index 000000000..5b59d37a1 --- /dev/null +++ b/apps/memoro/apps/landing/.gitignore @@ -0,0 +1,30 @@ +# build output +dist/ + +# generated types +.astro/ + +# dependencies +node_modules/ + +# logs +npm-debug.log* +yarn-debug.log* +yarn-error.log* +pnpm-debug.log* + +# environment variables +.env +.env.production + +# macOS-specific files +.DS_Store + +# jetbrains setting folder +.idea/ + +# Archive folder +.archive/ + +# Local Netlify folder +.netlify diff --git a/apps/memoro/apps/landing/CLAUDE.md b/apps/memoro/apps/landing/CLAUDE.md new file mode 100644 index 000000000..5b13b0fe2 --- /dev/null +++ b/apps/memoro/apps/landing/CLAUDE.md @@ -0,0 +1,112 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Project Overview + +Memoro is a multilingual marketing website built with Astro for an AI-powered conversation documentation and note-taking app. The site supports German (de) as default and English (en) locales. + +## Build Commands + +```bash +npm run dev # Start development server (localhost:4321) +npm run build # Build production site to ./dist/ +npm run preview # Preview production build locally +npm run astro check # Type-check the project. IMPORTANT: Use this after every change! 
+``` + +## Architecture + +### Tech Stack + +- **Framework**: Astro 5.3.0 with static site generation +- **Styling**: Tailwind CSS +- **Content**: MDX support with content collections +- **TypeScript**: Strict mode enabled + +### Project Structure + +``` +src/ +├── components/ # Reusable Astro components +├── content/ # Content collections with Zod schemas +│ ├── blog/ # Blog posts (de/en subfolders) +│ ├── team/ # Team member profiles +│ ├── features/ # Feature descriptions +│ ├── guides/ # User guides and tutorials +│ └── ... # Other collections (industries, testimonials, etc.) +├── i18n/ # Internationalization (ui.ts for translations) +├── layouts/ # Page layout templates +├── pages/ # Routes with [lang] dynamic routing +├── styles/ # Global CSS with Tailwind +└── utils/ # Utility functions +``` + +### Internationalization (i18n) + +- **Default locale**: German (de) +- **Supported locales**: German (de), English (en) +- **Routing**: Prefix-based (e.g., /de/blog, /en/blog) +- **Middleware**: Automatically redirects to default locale if missing +- **Translations**: Centralized in `src/i18n/ui.ts` +- **Content**: Organized in language subfolders within collections + +### Content Collections + +All content uses Zod schemas for validation. Key collections: + +- **blog**: Articles with metadata (title, description, pubDate, author, tags) +- **team**: Team profiles with roles and social links +- **features**: Product features with icons and categories +- **guides**: Tutorials with difficulty levels and duration +- **testimonials**: Customer testimonials +- **legal**: Legal pages (privacy, terms, etc.) 
+
+Each content type must include:
+
+- `lang` field (either 'de' or 'en')
+- Proper frontmatter matching the schema
+- MDX content body
+
+### Code Style Guidelines
+
+#### Components
+
+- Use PascalCase for component names (e.g., `BlogCard.astro`)
+- Define Props interfaces at the top of component files
+- Import order: external libraries first, then project files
+
+#### TypeScript
+
+- Always use Zod schemas for content validation
+- Define interfaces for all component props
+- Use strict type checking (enabled in tsconfig)
+
+#### CSS
+
+- Use Tailwind CSS utility classes
+- Follow kebab-case for custom CSS classes
+- Avoid inline styles
+
+#### Error Handling
+
+- Middleware handles 404s and missing locale redirects
+- Use optional chaining for potentially undefined values
+- Provide fallbacks for missing translations
+
+### Important Implementation Notes
+
+- Static site generation means no server-side runtime
+- All content is pre-built at build time
+- Dynamic routes use Astro's `getStaticPaths()`
+- Sitemap generation includes all locales
+- Images stored in `/public/images/` organized by type
+
+### Testing
+
+No test framework is currently configured. Consider manual testing of:
+
+- All language routes
+- Content collection validation
+- Build process for production
+- 404 handling and redirects diff --git a/apps/memoro/apps/landing/POSTHOG_SETUP.md b/apps/memoro/apps/landing/POSTHOG_SETUP.md new file mode 100644 index 000000000..71cd477aa --- /dev/null +++ b/apps/memoro/apps/landing/POSTHOG_SETUP.md @@ -0,0 +1,168 @@
+# PostHog A/B Testing Setup
+
+This document describes the PostHog integration for A/B testing on the Memoro website.
+
+## 1. Initial Setup
+
+### Create a PostHog account
+1. Go to [PostHog EU](https://eu.posthog.com)
+2. Create an account (use the EU region for GDPR compliance)
+3. Create a new project called "Memoro Landing"
+
+### Environment variables
+Create a `.env` file based on `.env.example`:
+```bash
+PUBLIC_POSTHOG_KEY=phc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+```
+
+### Testing the integration
+1. Start the dev server: `npm run dev`
+2. Accept analytics cookies in the cookie banner
+3. Open the browser console and check that `window.posthog` is available
+4. Verify in the PostHog dashboard that events are arriving
+
+## 2. Creating A/B tests
+
+### In the PostHog dashboard
+1. Navigate to "Feature Flags" → "New Feature Flag"
+2. Create a new flag, e.g. `hero-cta-test`
+3. Configure the variants:
+   - Control: default (no value)
+   - Variant B: `variant-b`
+4. Set the rollout percentage (e.g. 50/50)
+
+### Implementing it in code
+
+#### Option 1: Use the ready-made experiment component
+```astro
+---
+import HeroCtaExperiment from '../components/experiments/HeroCtaExperiment.astro';
+---
+
+<HeroCtaExperiment />
+```
+
+#### Option 2: Custom implementation
+
+A minimal sketch using the utilities documented below (the import path is an assumption — point it at wherever these helpers live in your project):
+
+```astro
+<button id="hero-cta">App herunterladen</button>
+
+<script>
+  // Hypothetical path — adjust to your PostHog utility module.
+  import { getExperiment, trackExperimentConversion } from '../utils/posthog';
+
+  const button = document.getElementById('hero-cta');
+  const variant = await getExperiment('hero-cta-test');
+
+  // Swap the CTA copy for the test variant.
+  if (button && variant === 'variant-b') {
+    button.textContent = 'Kostenlos testen';
+  }
+
+  // Count clicks as conversions for this experiment.
+  button?.addEventListener('click', () => {
+    trackExperimentConversion('hero-cta-test', 'download_click');
+  });
+</script>
+```
+
+## 3. Available utilities
+
+### `getExperiment(key)`
+Returns the current variant for a test:
+```javascript
+const variant = await getExperiment('hero-cta-test');
+// Returns: null | 'control' | 'variant-b' | ...
+```
+
+### `trackEvent(name, properties)`
+Tracks a custom event:
+```javascript
+trackEvent('button_clicked', {
+  button_id: 'download-cta',
+  page: 'home'
+});
+```
+
+### `trackExperimentConversion(experiment, type)`
+Tracks a conversion for an A/B test:
+```javascript
+trackExperimentConversion('hero-cta-test', 'download_click');
+```
+
+### `applyExperimentClasses(elementId, experimentKey, variantClasses)`
+Applies CSS classes based on the variant:
+```javascript
+applyExperimentClasses('hero-section', 'hero-test', {
+  'control': 'bg-blue-500',
+  'variant-b': 'bg-green-500'
+});
+```
+
+## 4. Best practices
+
+### Performance
+- PostHog only loads after cookie consent
+- The script is asynchronous and non-blocking
+- Feature flags are cached
+
+### GDPR compliance
+- Analytics only with explicit consent
+- Use EU servers
+- `person_profiles: 'identified_only'` for anonymous users
+
+### Testing strategy
+1. **Start small**: begin with non-critical elements
+2. **Measure properly**: define clear success metrics
+3. **Document**: record tests and their results
+4. **Iterate**: feed learnings into new tests
+
+## 5. Planned A/B tests
+
+### Phase 1 (possible immediately)
+- **Hero CTA**: "App herunterladen" vs. "Kostenlos testen"
+- **Download button position**: right vs. left in the navigation
+- **Testimonial position**: top vs. bottom of the homepage
+
+### Phase 2 (after initial learnings)
+- **Pricing layout**: grid vs. table
+- **Feature order**: different prioritizations
+- **Forms**: reduce the number of fields
+
+## 6.
Monitoring & Analyse + +### PostHog Dashboard +- **Experiments**: Übersicht aller laufenden Tests +- **Feature Flags**: Status und Rollout-Percentage +- **Insights**: Custom Dashboards für Conversions + +### Wichtige Metriken +- Conversion Rate (Download-Klicks) +- Bounce Rate pro Variante +- Time on Page +- Scroll-Tiefe + +## 7. Troubleshooting + +### PostHog lädt nicht +1. Prüfe Cookie-Consent Status +2. Checke Browser-Konsole für Fehler +3. Verifiziere API-Key in `.env` + +### Feature Flag gibt null zurück +1. Stelle sicher, dass PostHog geladen ist +2. Prüfe Flag-Name (Case-sensitive!) +3. Checke Rollout-Settings im Dashboard + +### Events werden nicht getrackt +1. Öffne Network-Tab und suche nach `posthog.com/e/` +2. Prüfe, ob `autocapture: false` gesetzt ist +3. Nutze `posthog.debug()` für Details + +## 8. Migration von Plausible + +Aktuell laufen Plausible und PostHog parallel. Nach 2-4 Wochen: + +1. Vergleiche Metriken beider Tools +2. Stelle sicher, dass PostHog alle benötigten Daten erfasst +3. Entferne Plausible-Komponente +4. Update Cookie-Consent Beschreibung +5. 
Entferne Plausible DNS-Prefetch + +## Support + +Bei Fragen oder Problemen: +- PostHog Docs: https://posthog.com/docs +- Astro + PostHog: https://posthog.com/tutorials/astro-ab-tests \ No newline at end of file diff --git a/apps/memoro/apps/landing/README.md b/apps/memoro/apps/landing/README.md new file mode 100644 index 000000000..ff19a3e7e --- /dev/null +++ b/apps/memoro/apps/landing/README.md @@ -0,0 +1,48 @@ +# Astro Starter Kit: Basics + +```sh +npm create astro@latest -- --template basics +``` + +[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/github/withastro/astro/tree/latest/examples/basics) +[![Open with CodeSandbox](https://assets.codesandbox.io/github/button-edit-lime.svg)](https://codesandbox.io/p/sandbox/github/withastro/astro/tree/latest/examples/basics) +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/withastro/astro?devcontainer_path=.devcontainer/basics/devcontainer.json) + +> 🧑‍🚀 **Seasoned astronaut?** Delete this file. Have fun! + +![just-the-basics](https://github.com/withastro/astro/assets/2244813/a0a5533c-a856-4198-8470-2d67b1d7c554) + +## 🚀 Project Structure + +Inside of your Astro project, you'll see the following folders and files: + +```text +/ +├── public/ +│ └── favicon.svg +├── src/ +│ ├── layouts/ +│ │ └── Layout.astro +│ └── pages/ +│ └── index.astro +└── package.json +``` + +To learn more about the folder structure of an Astro project, refer to [our guide on project structure](https://docs.astro.build/en/basics/project-structure/). 
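
Routing in this starter is file-based: every `.astro` file inside `src/pages/` is exposed as a route named after the file. As a minimal sketch (this page is hypothetical and not shipped with the starter), adding `src/pages/about.astro` would serve a page at `/about`:

```astro
---
// src/pages/about.astro (hypothetical example page)
// Code between the --- fences runs at build time.
const title = 'About';
---
<html lang="en">
  <body>
    <h1>{title}</h1>
  </body>
</html>
```

Deleting the file removes the route again; no router configuration is needed.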
+
+## 🧞 Commands
+
+All commands are run from the root of the project, from a terminal:
+
+| Command                   | Action                                           |
+| :------------------------ | :----------------------------------------------- |
+| `npm install`             | Installs dependencies                            |
+| `npm run dev`             | Starts local dev server at `localhost:4321`      |
+| `npm run build`           | Build your production site to `./dist/`          |
+| `npm run preview`         | Preview your build locally, before deploying     |
+| `npm run astro ...`       | Run CLI commands like `astro add`, `astro check` |
+| `npm run astro -- --help` | Get help using the Astro CLI                     |
+
+## 👀 Want to learn more?
+
+Feel free to check [our documentation](https://docs.astro.build) or jump into our [Discord server](https://astro.build/chat).
diff --git a/apps/memoro/apps/landing/astro.config.mjs b/apps/memoro/apps/landing/astro.config.mjs
new file mode 100644
index 000000000..3b6c88fe5
--- /dev/null
+++ b/apps/memoro/apps/landing/astro.config.mjs
@@ -0,0 +1,48 @@
+import { defineConfig } from 'astro/config';
+import mdx from '@astrojs/mdx';
+import tailwind from '@astrojs/tailwind';
+import sitemap from '@astrojs/sitemap';
+import icon from 'astro-icon';
+
+// https://astro.build/config
+export default defineConfig({
+  image: {
+    service: {
+      entrypoint: 'astro/assets/services/sharp',
+      config: {
+        limitInputPixels: false,
+      },
+    },
+    domains: [],
+    remotePatterns: [],
+  },
+  integrations: [
+    mdx(),
+    tailwind(),
+    sitemap({
+      i18n: {
+        defaultLocale: 'de',
+        locales: {
+          de: 'de-DE',
+          en: 'en-US'
+        }
+      }
+    }),
+    icon({
+      include: {
+        mdi: ["*"]
+      }
+    })
+  ],
+  contentCollections: true,
+  site: 'https://memoro.ai',
+  output: 'static',
+  i18n: {
+    locales: ['de', 'en'],
+    defaultLocale: 'de',
+    routing: {
+      prefixDefaultLocale: true,
+      redirectToDefaultLocale: false
+    }
+  }
+});
diff --git a/apps/memoro/apps/landing/context/CopyWritingGuidelines.md b/apps/memoro/apps/landing/context/CopyWritingGuidelines.md
new file mode 100644
index 000000000..9be8ba2c4
--- /dev/null
+++ b/apps/memoro/apps/landing/context/CopyWritingGuidelines.md
@@ -0,0 +1,15 @@
+Copy, high level
+
+You are a talented advertising writer and copywriter.
+You help rewrite a text based on these guidelines:
+
+Memoro text guidelines
+The 8 principles of our voice:
+We don't beat around the bush. We use short, clear, concise sentences.
+We inform and want to help, not advertise or sell. The value we create comes from our application.
+We try to avoid the German "Du" or "Sie" (informal/formal "you") and to sound more like a press release.
+We draw comparisons to Memoro (an app that records conversations, writes them down, and summarizes them in a fitting way; a knowledge library and a constant assistant that stays in the background, frees up time, and has your back; the brand color is yellow (like a thought) and the shapes are flowing and round).
+We want to infect others with our ideas, move them to try the product, and create inspiration.
+We want to encourage participation, let others take part in our development, and collect their ideas and feature requests so we can build the best product for them.
+We want to communicate with our audience in an engaging way. We use the AIDA framework (Attention, Interest, Desire, Action) to structure our texts.
+We want every user to understand everything, so we communicate in plain language: instead of "GDPR-compliant", say "all data is stored securely in Germany". We also don't talk about AI, but about the best ears, intelligent note-taking and summarizing, and so on. We also avoid words like "revolutionary"; we are down-to-earth and prefer a funny comparison.
diff --git a/apps/memoro/apps/landing/context/ImagePrompts.md b/apps/memoro/apps/landing/context/ImagePrompts.md
new file mode 100644
index 000000000..959a882f5
--- /dev/null
+++ b/apps/memoro/apps/landing/context/ImagePrompts.md
@@ -0,0 +1,80 @@
+# Image Prompts for Blog Articles
+
+This file contains proven prompt templates for generating blog images.
+
+## Blog-Header Icons/Symbols
+
+### Minimal geometric symbols (with transparent background)
+
+**Template for communication/AI topics:**
+```
+Minimal geometric icon design on transparent background. Two interlocking shapes: a rounded speech bubble (gradient from coral to orange) flowing into a crystalline hexagon (gradient from electric blue to purple). The shapes overlap in the center creating a small overlapping area in vibrant green. Clean vector style, no shadows, no background. Flat design with subtle gradients. Symbol represents human-AI communication. Centered composition, perfect for web use.
+```
+
+**Use for:** prompt engineering, AI communication, chat topics
+
+---
+
+### Variations for other topics
+
+**For productivity topics:**
+```
+Minimal geometric icon design on transparent background. Three overlapping circles forming a Venn diagram. Left circle: orange gradient. Right circle: blue gradient. Bottom circle: purple gradient. Where all three overlap in the center: bright teal. Clean, minimal, flat design. No text, no decoration. Modern tech logo style.
+```
+
+**For transformation/workflow topics:**
+```
+Minimalist vector symbol on transparent background: Single flowing arrow that transforms from organic curved line (warm orange gradient) to geometric angular shape (cool blue-purple gradient). The transformation happens in the middle with a smooth color transition. Ultra simple, modern icon design. No background, clean edges.
+```
+
+---
+
+## Adjustment Notes
+
+- **Colors:** the main gradients can be adjusted (orange/coral → blue/purple is proven)
+- **Shapes:** the basic shapes can be varied (circles, hexagons, arrows, etc.)
+- **Style:** always "minimal", "clean", "flat design", "no shadows", "transparent background"
+- **Format:** optimized for web use, centered composition
+
+## Proven Color Combinations
+
+1. **Warm → Cool:** orange/coral → blue/purple
+2. **Energy:** yellow/orange → red/pink
+3. **Nature:** green/teal → blue/purple
+4. **Professional:** gray/silver → blue/navy
+
+## Technical Specifications
+
+- Format: PNG with transparent background
+- Style: flat design, vector-like
+- Shadows: none
+- Text: none
+- Composition: centered
+- Use: blog headers, icon elements
+
+---
+
+## New prompts for "AI as a Personal Assistant"
+
+### Option 1: Automation symbol
+```
+Minimal geometric icon design on transparent background. Two interlocking gears: left gear organic and rounded (gradient from warm orange to coral), right gear precise and angular (gradient from electric blue to purple). Small floating elements around them representing automated tasks (tiny circles and triangles in teal). Clean vector style, no shadows, no background. Flat design with subtle gradients. Symbol represents automation and AI assistance. Centered composition, perfect for web use.
+```
+
+### Option 2: Time-efficiency symbol
+```
+Minimalist vector symbol on transparent background: Clock face transforming into flowing energy streams. Clock portion: gradient from orange to yellow. Energy streams: gradient from blue to purple, flowing outward in elegant curves. Small productivity icons (tiny checklist, envelope, document) floating in the streams in vibrant green. Ultra simple, modern icon design. No background, clean edges.
+```
+
+### Option 3: Assistant-connection symbol
+```
+Simple vector icon on transparent background: Central hub (hexagon in blue-purple gradient) connected to four smaller circles representing different tasks. Connections: flowing lines in orange gradient. Each circle different color: orange (meetings), teal (emails), purple (documents), green (tasks). Clean, minimal, flat design. No text, no decoration. Modern tech assistance logo style.
+```
+
+---
+
+## Prompt for "AI-Supported Decision-Making"
+
+```
+Minimal geometric icon design on transparent background. Decision tree visualization: Central diamond shape (gradient from coral to orange) with three branching paths flowing outward. Each path ends in a different geometric shape representing options: circle (blue gradient), triangle (purple gradient), square (teal gradient). The paths are elegant curved lines with subtle glow. Small analytical elements floating around (tiny data points, mini charts) in soft green. Clean vector style, no shadows, no background. Flat design with subtle gradients. Symbol represents AI-supported decision making. Centered composition, perfect for web use.
+```
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/Memoro-Features.md b/apps/memoro/apps/landing/context/Memoro-Features.md
new file mode 100644
index 000000000..3d60a1ca9
--- /dev/null
+++ b/apps/memoro/apps/landing/context/Memoro-Features.md
@@ -0,0 +1,170 @@
+Recording Features
+
+Feature: Perfect Hearing
+Description: One tap is enough; Memoro listens attentively and captures every detail of your conversations.
+Main functions:
+One-tap recording: Memoro is ready to go as soon as you tap the large round record button.
+Intelligent listening: recognizes up to 15 different speakers in more than 80 languages and dialects.
+Flexible recording: record all kinds of conversations: meetings & conferences, personal notes, interviews & lectures, documentation, consulting.
+Reliable technology: Memoro works fully offline. Every recording is stored on your device and can be re-uploaded from the audio archive once you are connected to the internet again.
+Stays in the background: Memoro is very battery-efficient. You can lock your device or switch apps; Memoro stays active without distracting you.
+Optional location saving: if enabled in the settings, the location is also saved automatically when a memo is finished. It can then be viewed on the map.
+
+Feature: Recording Languages
+Description: more than 80 languages and dialects with automatic detection.
+Main functions:
+Automatic language detection: Memoro automatically detects the spoken language from more than 80 languages and dialects.
+20 express languages: these languages are processed within seconds and are available almost immediately. Used for recordings of up to 40 minutes. Supported languages: German, English, French, Spanish, Italian, Dutch, Polish, Swedish, Russian, Portuguese, Arabic, Hindi, Chinese, Japanese, Korean.
+80+ languages: more than 60 additional languages with standard transcription (which takes several minutes to process). Memoro offers the fastest, highest-quality GDPR-compliant transcription on the market.
+Dialect detection: Memoro supports various dialects to ensure the best possible transcription quality. These variants are supported: regional German (Germany, Austria, Switzerland), English variants (USA, UK, Australia, India), Spanish dialects (Spain, Mexico, Argentina, Colombia), Chinese variants (Mandarin, Cantonese, Taiwanese)
+Multilingual meetings: automatic language switching during recordings with several languages; speaker detection keeps speakers assigned correctly, even across languages.
+Specialized support: Memoro understands a wide range of technical jargon: medical terminology, technical vocabulary, legal language, and many more, depending on the chosen recording blueprint.
+
+Feature: Audio Archive
+Description: your personal audio archive; all recordings stored safely and available at any time
+Main functions:
+Complete overview of all locally stored recordings
+Detailed statistics on the length and file size of recordings
+Easy export of audio recordings via the share button in the audio player
+Helpful for heavy users, project managers, and compliance
+
+Feature: Recording Blueprints (recording templates)
+Description: predefined recording templates for perfectly structured results
+Main functions:
+Numerous templates for a wide variety of use cases
+Templates for different industries: from education at universities, schools, and kindergartens, through care documentation at hospitals or retirement homes, to construction meetings for craftspeople or architects, all the way to documenting family histories. Plus many more specialized templates and industries. Psst: soon you will be able to create your own blueprints.
+Feedback function to Memoro to improve the blueprint.
+
+Feature: Audio Upload
+Description: upload your recordings and have them analyzed automatically
+Main functions:
+MP3, WAV, and M4A formats supported
+Unlimited recording length
+Automatic transcription
+Settings for date, time, language, and templates
+Instant AI analysis
+Encrypted transfer
+Secure cloud storage
+
+Organization Features
+
+Feature: Memories
+Description: discover the different analyses Memoro creates from your conversations
+Main functions:
+Explorative memories: Insights: key points summarized chronologically; Summary: automatic formatting as minutes, a lecture, or a form; Article: structured presentation of the content
+Executive memories: Tasks: captures owners, deadlines, and status; Appointments: collects date, location, and description
+
+Feature: Tags for Organization
+Description: sort and find your conversations in a flash with tags and smart filters
+Main functions:
+Tag system with flexible categorization
+Assign your own names, colors, and emojis
+Various use cases for tags, for example status tracking such as "Open" and "Done", topic areas such as "Technology" and "Sport", or types of conversations and conversation partners.
+Multiple tags per memo: build your own memo organization system and filter it dynamically on the memos page
+
+Feature: Memo Combination
+Description: merge several memos into one comprehensive document
+Main functions:
+Combine several memos into a new memo
+Perfect for multi-part meetings, stories, or collections of information.
+Ask your combined memo questions or create new memories from it.
+
+Feature: Multilingual at Every Level
+Memoro supports 24 languages and two dialects for global communication
+Supported languages:
+European languages: German, English, French, Spanish, Italian, Portuguese, Dutch, Swedish, Polish
+Eastern European: Russian, Ukrainian, Czech
+Asian languages: Hindi, Chinese (Mandarin), Japanese, Korean, Vietnamese
+Middle East & Africa: Arabic, Turkish, Hebrew
+
+Feature: Translate
+Description: overcome language barriers with automatic translation into 24 languages
+Main functions:
+Automatic translation into 24 languages
+Bilingual view
+Preserves the original context
+Recognizes technical terms
+Takes cultural nuances into account
+
+Sharing Features
+
+Feature: Share Memos
+Description: share your conversations seamlessly across all your apps
+Main functions:
+Share your memo directly via all common channels such as mail, WhatsApp, or Telegram, or copy it into a document
+Use the web app at web.memoro.ai to quickly access your content on a computer or other devices.
+Quick copying of the transcript
+Feature in progress: sharing memos within Memoro, with collaborative editing and extending of a memo
+
+Customization Features
+
+Feature: Mana Credits System
+Description: a flexible AI credit system for every need
+
+Benefits:
+- Fair: pay only for what you use
+- Scalable: different packages for every need
+- Transparent: clear cost overview
+- Flexible: monthly or yearly, plus one-time purchases
+- 33% discount on yearly subscriptions
+
+Mana costs in Memoro:
+- Recording/transcription: 2 Mana per minute (minimum 10 Mana)
+- Asking a memo a question: 5 Mana per question
+- Creating a new memory: 5 Mana
+- Applying a blueprint: 5 Mana
+- Combining memos: 5 Mana per combined memo
+
+Mana subscriptions:
+Mana Tropfen (free), €0/month:
+- 150 Mana starting balance
+- 5 Mana daily (max. 150 Mana balance)
+- Perfect for trying things out
+
+Mana Fluss, €5.99/month (€47.99/year):
+- 600 Mana starting balance
+- 20 Mana daily (max. 1,200 Mana balance)
+- Gifting Mana possible
+
+Mana Strom, €14.99/month (€119.99/year):
+- 1,500 Mana starting balance
+- 50 Mana daily (max. 3,000 Mana balance)
+- Gifting Mana possible
+
+Mana See, €29.99/month (€239.99/year):
+- 3,000 Mana starting balance
+- 100 Mana daily (max. 6,000 Mana balance)
+- Gifting Mana possible
+
+Mana Meer, €49.99/month (€399.99/year):
+- 5,000 Mana starting balance
+- 200 Mana daily (max. 10,000 Mana balance)
+- Gifting Mana possible
+
+Mana packages (one-time purchase):
+- Kleiner Mana Trank: 350 Mana for €4.99
+- Mittlerer Mana Trank: 700 Mana for €9.99
+- Großer Mana Trank: 1,400 Mana for €19.99
+- Riesiger Mana Trank: 2,800 Mana for €39.99
+
+Feature: Customize Memoro
+Description: adapt Memoro perfectly to your needs
+Customization options:
+Appearance: dark & light mode
+Recording-page display elements: show and hide various elements on the recording page
+Interface language (48 options), recording language (82 languages), translation languages (57 languages to translate into)
+Privacy & security: location saving opt-in, analytics data opt-out
+
+Feature: Industry-Specific Customization
+Description: tailored solutions for your industry with specialized templates and integrations
+Main functions:
+Specialized templates for different industries
+System integration via APIs
+Individual adjustments
+Dedicated compliance solutions
+Industry solutions for: healthcare, legal advice, financial services, education, consulting, real estate, IT & software
+
+Analytics Features
+
+Feature: Detailed Statistics
+Description: comprehensive insights into your productivity and usage patterns
+Statistics features:
+Memo metrics: total count, length, word count, trends, etc.
+Recording-duration analyses: total time, average length, etc.
+Word statistics: total word count, words per memo, etc.
+Advanced analyses: time distribution, tag analysis, productivity patterns, etc.
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/Memoro.md b/apps/memoro/apps/landing/context/Memoro.md
new file mode 100644
index 000000000..7df935b56
--- /dev/null
+++ b/apps/memoro/apps/landing/context/Memoro.md
@@ -0,0 +1,301 @@
+# This is Memoro
+
+# Memoro App: Revolutionizing Conversation Documentation and Thought Capture
+
+## 1\. Introduction
+
+Memoro is an innovative mobile application that aims to change the way people document conversations and capture thoughts. Developed in response to the challenges of manual note-taking and minute-keeping, Memoro offers an intuitive solution for the automated capture, transcription, and summarization of spoken content.
+
+## 2\. Main Functions
+
+- **Recording**: a central record button makes it easy to start and stop recordings.
+- **Transcription**: Memoro transcribes what is said word for word and recognizes different speakers.
+- **Summarization**: the app automatically summarizes the content and extracts key information such as tasks, appointments, and insights.
+- **Multilingualism**: support for 82 languages for recording and transcription, 56 languages for translation, and 48 languages for the app interface.
+
+## 3\. Distinctive Characteristics
+
+- **Simple operation**: the app is intuitively designed and requires minimal onboarding.
+- **Overcoming language barriers**: Memoro supports the integration of international professionals through its multilingual features.
+- **Focus on face-to-face conversations**: optimized for in-person interactions and thought capture.
+- **Industry-specific modes**: adaptable modes for different professions and use cases.
+- **Made in Germany**: developed to the highest security standards, with data stored in Germany.
+
+## 4\. Target Groups and Use Cases
+
+- **Students**: lecture notes, brainstorming for theses
+- **Office workers**: meeting minutes, dictating emails
+- **Craftspeople**: material lists, construction meetings
+- **Care workers**: care planning, documentation
+- **Marketing experts**: conducting and analyzing interviews
+- **Creatives**: collecting ideas and self-reflection
+
+## 5\. Technical Details
+
+- **Platforms**: iOS (Apple App Store) and Android (Google Play Store)
+  - **iOS app**: https://apps.apple.com/de/app/memoro/id6451258411
+  - **Android app**: https://play.google.com/store/apps/details?id=com.memo.beta
+- **Desktop access**: read access to recorded memos
+- **Integrations**: easy transfer into other applications via plain text
+- **Explainer videos**:
+  - **German**: https://www.youtube.com/watch?v=YTVbhzPY9eI (21.02.2024)
+  - **English**: https://www.youtube.com/watch?v=bqDx4_V-zZU (21.02.2024)
+
+## 6\. Pricing Model
+
+- **Free version**: 180 minutes/month, max. 50 minutes/recording, 22 memos/day
+- **Plus**: €9/month (600 minutes/month, 100 minutes/memo)
+- **Pro**: €23/month (1,800 minutes/month, 200 minutes/memo)
+- **Ultra**: €42/month (4,800 minutes/month, 300 minutes/memo)
+- 20% discount with annual billing
+- Special terms for educational institutions and non-profit organizations
+
+## 7\. Privacy and Security
+
+- GDPR-compliant (DSGVO)
+- Data is stored exclusively in Germany
+- End-to-end encryption during transfer
+- Daily backups
+- No sharing with third parties
+- On-premise solutions available for companies
+
+## 8\. Company Values and Vision
+
+- Inclusivity and accessibility
+- Improving communication and mutual understanding
+- Democratizing access to AI-supported documentation tools
+- Focus on saving time and working more efficiently
+
+## 9\. Success Stories and Statistics
+
+- 800 active users, 100 of them weekly users
+- Estimated time savings of 2-6 hours per week depending on the industry
+- Documentation work reduced by up to 75%
+- Successfully used for theses and research projects
+
+## 10\. Future Plans and Development
+
+- Continuous improvement of the analyses
+- Development of new industry-specific modes
+- Close collaboration with users to optimize the product
+- Adaptation to individual customer requests
+
+## 11\. Customer Service and Support
+
+- Direct support via [kontakt@memoro.ai](mailto:kontakt@memoro.ai)
+- Comprehensive online documentation
+- Fast responses to user feedback and issues
+
+## 12\. Marketing and Communication
+
+- **Core messages**: freedom, security, calm, understanding
+- **Emotional associations**: focusing on the moment, overcoming language barriers, improved self-reflection
+- **Terminology**:
+  - "Memoro": the app
+  - "Memos": individual recordings or analyses
+  - "Memories": analysis sections (e.g. insights, appointments, tasks)
+
+Memoro positions itself as an indispensable tool for efficient communication and documentation in an increasingly globalized and fast-moving working world. Thanks to its ease of use, multilingualism, and cross-industry applicability, Memoro has the potential to fundamentally change the way we capture and process information.
+
+Memoro's added value:
+
+1. Definition and meaning of added value:
+   - Added value is defined as all the benefits and advantages that arise in addition to a product or service. This covers not only the functionality of a product or app but also all the positive effects connected with it.
+   - Added value is crucial for marketing, sales, creating competitive advantages, customer retention, and profitability. A company that knows and uses the added value of its products and services well can operate more successfully.
+2. Types of added value:
+   - Added value through the product itself: the basic functions and applications of a product or app. The discussion here covered how a product solves specific problems and which benefits it offers.
+   - Added value through processes around the product: the benefits that arise from using the product, as well as the connections and processes it optimizes.
+   - Added value through the people in the company: the focus was on the knowledge, experience, and skills that employees can offer in their field and possibly beyond it.
+   - Added value through the profit customers gain: the financial gain customers achieve by using a product, as well as other kinds of value that are not directly monetary.
+3. Using added value and its results:
+   - More time for the actual work: using the product reduces repetitive and time-consuming tasks, leaving more time for core tasks.
+   - Less annoying work: taking minutes in particular, often felt to be an unpleasant duty, becomes easier.
+   - Fewer misunderstandings and less stress in conversations: the availability of minutes and recordings leads to clearer communication and fewer misunderstandings.
+   - Immediate availability of information: minutes and recordings are available right away, improving efficiency and transparency.
+   - Improved quality of work: more detailed and specific documentation raises the quality of the work.
+4. Experience and knowledge within the company:
+   - Broad knowledge in creative design: the company has extensive expertise in areas such as brand design, graphic design, UX/UI design, film, music, and lighting.
+   - Rapid prototyping: the ability to quickly build and test prototypes was highlighted as a core competency.
+   - AI competencies: the company has deep experience in applying and developing artificial intelligence and knows which technologies are promising and which should be avoided.
+   - Customer proximity and proactive collaboration: close collaboration with customers and the ability to adapt quickly to their needs were named as particular strengths.
+   - Market knowledge: the company knows the German and Swiss markets well and can adapt its products accordingly.
+5. Communicating the expertise:
+   - Demos and workshops: running demonstrations and workshops to show the product's functionality and benefits.
+   - Using the app as an example: the app itself serves as a living example of the company's competencies.
+   - Networking and events: taking part in and organizing events such as hackathons to foster collaboration and the exchange of innovation.
+   - Website and social media: using blogs, LinkedIn, and other platforms to share knowledge and updates.
+6. Emotional aspects of use:
+   - Relief and security: users feel relieved and safe, knowing that important information is recorded and accessible.
+   - Self-determination and self-esteem: the app strengthens users' confidence and sense of self-determination, since they can capture their thoughts and ideas.
+ - Gemeinschaftsgefühl: Die Möglichkeit, Wissen und Informationen zu teilen, fördert ein Gefühl der Verbundenheit und Zusammenarbeit. + - Vorfreude und Motivation: Nutzer freuen sich auf regelmäßige Updates und Verbesserungen, die die App noch nützlicher machen. + - Professionelle Unterstützung: Die App wird als professionelles Tool wahrgenommen, das im Alltag eine wertvolle Unterstützung bietet. + - Ermöglichung und Empowerment: Insbesondere für Nutzer mit sprachlichen oder technischen Einschränkungen bietet die App die Möglichkeit, sich besser auszudrücken und einzubringen. + - Gelassenheit und Ausgeglichenheit: Die Nutzung der App trägt zu einem Gefühl der Ruhe und Ausgeglichenheit bei, da Nutzer wissen, dass wichtige Informationen sicher gespeichert sind. + - Aktivierung und Entdeckungslust: Die Möglichkeit, jederzeit Gedanken und Ideen festzuhalten, motiviert die Nutzer, aktiv zu bleiben und Neues zu entdecken. + +# Das Memoro Team + +Unser Team besteht aus erfahrenen Experten aus den Bereichen künstliche Intelligenz, Softwareentwicklung, Datensicherheit und Branchenspezialisten. Gemeinsam arbeitet das Team daran, Memoro kontinuierlich zu verbessern und an die Bedürfnisse der Kunden anzupassen. + +## Till Schneider + +Till Schneider bringt als erfahrener Filmemacher und Mediendesigner eine Fülle von Kreativität und technischem Know-how in das Memoro-Team ein. Mit über einem Jahrzehnt Erfahrung in der digitalen Medienproduktion hat Till eine beeindruckende Karriere aufgebaut, die ihn optimal für seine Rolle bei Memoro qualifiziert. + +Tills Expertise erstreckt sich über ein breites Spektrum der digitalen Medienlandschaft. Seine Leidenschaft für innovative Technologien und interaktive Medien spiegelt sich in seiner Arbeit wider. Er hat maßgeblich an der Entwicklung eines interaktiven Film-Tools mitgewirkt, was seine Fähigkeit zur Konzeption und Umsetzung komplexer digitaler Projekte unter Beweis stellt. 
Diese Erfahrung ist besonders wertvoll für die kontinuierliche Verbesserung und Erweiterung der Memoro-App. + +In seiner Karriere hat Till über 80 selbständige Filmproduktionen realisiert, darunter Projekte für namhafte Unternehmen und öffentliche Einrichtungen. Diese Vielseitigkeit in der Projektarbeit hat sein Verständnis für unterschiedliche Kundenbedürfnisse geschärft – eine Fähigkeit, die bei der Weiterentwicklung von Memoro von unschätzbarem Wert ist. Seine Erfahrung in der Kundenbetreuung und \-beratung ermöglicht es dem Team, die Anforderungen der Nutzer besser zu verstehen und in die Produktentwicklung einfließen zu lassen. + +Tills akademischer Hintergrund in Mediendesign, mit Schwerpunkten in UX/UI Design, Bewegtbild und interaktiven Medien, bildet eine solide Grundlage für die Gestaltung benutzerfreundlicher und ansprechender Interfaces. Seine Bachelorarbeit, die sich mit der Gestaltung einer Lernplattform befasste, zeigt sein Interesse an der Entwicklung von Tools, die das Lernen und die Informationsverarbeitung erleichtern – ein Kernaspekt der Mission von Memoro. + +Als ehemaliger Geschäftsführer eines kleinen Teams bringt Till wertvolle Führungserfahrung mit. Er versteht die Herausforderungen der Teamkoordination und Projektleitung, was für die agile Entwicklung und das Wachstum von Memoro von großer Bedeutung ist. Seine Fähigkeit, komplexe Projekte von der Konzeption bis zur Umsetzung zu begleiten, trägt wesentlich zur effizienten Realisierung der Unternehmensziele bei. + +Tills technische Fähigkeiten umfassen den professionellen Umgang mit verschiedenen Kamerasystemen, fortgeschrittene Kenntnisse in der Postproduktion von Film und Fotografie sowie Expertise in Branding und Corporate Design. Diese vielseitigen Kompetenzen ermöglichen es ihm, bei Memoro in verschiedenen Bereichen wertvolle Beiträge zu leisten, von der visuellen Gestaltung der App bis hin zur Erstellung von Marketingmaterialien. 
+ +Mit seiner Kombination aus kreativer Vision, technischem Fachwissen und Führungserfahrung ist Till Schneider eine treibende Kraft hinter der Innovation bei Memoro. Seine Fähigkeit, komplexe Ideen in benutzerfreundliche Lösungen zu übersetzen, macht ihn zu einem unverzichtbaren Mitglied des Teams, das kontinuierlich daran arbeitet, die Memoro-App zu verbessern und an die sich wandelnden Bedürfnisse der Nutzer anzupassen. + +## Tobias Müller + +Tobias Müller bringt als Full-Stack-Entwickler eine beeindruckende Bandbreite an technischen Fähigkeiten und Projekterfahrungen in das Memoro-Team ein. Mit seinem Hintergrund in Software Engineering und seiner Expertise in modernen Web-Technologien ist er bestens gerüstet, um die technische Entwicklung und Innovation bei Memoro voranzutreiben. + +Tobias' Fachkenntnisse umfassen ein breites Spektrum an Programmiersprachen und Frameworks, mit besonderem Fokus auf JavaScript-basierte Technologien wie React.js, Vue.js und Node.js. Diese Expertise ist von unschätzbarem Wert für die Weiterentwicklung und Optimierung der Memoro-App, insbesondere im Hinblick auf Benutzerfreundlichkeit und Leistungsfähigkeit. Seine Erfahrung mit SQL- und NoSQL-Datenbanken ermöglicht es ihm, effiziente und skalierbare Datenlösungen für Memoro zu implementieren, was angesichts der Datenintensität der App von großer Bedeutung ist. + +In seiner bisherigen Laufbahn hat Tobias an einer Vielzahl anspruchsvoller Projekte mitgewirkt, die seine Fähigkeit zur Umsetzung komplexer Softwarelösungen unter Beweis stellen. Besonders hervorzuheben ist seine Erfahrung in der Entwicklung von KI-gesteuerten Assistenten und in der Konzeption neuer Software-Architekturen. Diese Kompetenzen sind für Memoro von großem Wert, da sie direkt zur Verbesserung der KI-gestützten Transkriptions- und Zusammenfassungsfunktionen der App beitragen können. + +Ein besonderer Mehrwert liegt in Tobias' Erfahrung mit Cloud-Technologien, insbesondere mit Azure und Google Cloud. 
Diese Expertise ist entscheidend für die Skalierbarkeit und Zuverlässigkeit der Memoro-Infrastruktur. Seine Kenntnisse in Bereichen wie CI/CD-Pipelines und Containerisierung mit Docker tragen dazu bei, die Entwicklungs- und Bereitstellungsprozesse bei Memoro zu optimieren und zu beschleunigen. + +Mit seiner Kombination aus technischer Expertise, Projekterfahrung und innovativem Denken ist Tobias Müller hervorragend positioniert, um wesentlich zur technologischen Weiterentwicklung von Memoro beizutragen. Seine Fähigkeit, komplexe technische Herausforderungen zu meistern und dabei stets den Nutzen für den Endanwender im Blick zu behalten, macht ihn zu einem wertvollen Mitglied des Memoro-Teams. + +## Aleksandra Vasileva + +Aleksandra Vasileva, die von allen Alex genannt wird, bringt eine einzigartige Kombination aus Marketing-Expertise und Medienerfahrung in das Memoro-Team ein. Mit ihrem Hintergrund in Internet und Online Marketing sowie ihrer praktischen Erfahrung in der TV-Produktion ist Alex bestens gerüstet, um Memoros Marktpräsenz zu stärken und die Nutzerkommunikation zu optimieren. + +Alex' Studium des Internet und Online Marketings an der Hochschule Ravensburg-Weingarten hat ihr ein solides Fundament in den Bereichen Social Media Marketing, Online Marketing und Projektmanagement im Marketing vermittelt. Diese Kenntnisse sind von unschätzbarem Wert für Memoro, insbesondere wenn es darum geht, die App einem breiteren Publikum zugänglich zu machen und die Nutzerbindung zu erhöhen. + +Ihre Erfahrung als Junior TV Production Manager bei Regio TV Bodensee hat Alex' Fähigkeiten in der Content-Erstellung und \-Organisation geschärft. Die Erstellung von Programmlisten und Sendungsinhalten sowie die Organisation von Online-Inhalten mit WordPress sind Kompetenzen, die direkt auf die Content-Strategie von Memoro übertragen werden können.
Diese Fähigkeiten sind besonders wertvoll, um die Benutzeroberfläche der App intuitiv und informativ zu gestalten und regelmäßige Updates für die Nutzer zu planen. + +Während ihres Praktikums bei Bitzilla GmbH sammelte Alex wertvolle Erfahrungen in der Entwicklung und Umsetzung von Social Media Marketing Kampagnen. Ihre Fähigkeit, Events zu organisieren und Content-Marketing-Pläne zu erstellen, wird Memoro dabei helfen, eine kohärente und ansprechende Online-Präsenz aufzubauen. Die Erstellung von Social Media Inhalten, Beiträgen und Bildern ist eine Kompetenz, die für die Vermarktung von Memoro in verschiedenen digitalen Kanälen von großer Bedeutung ist. + +Alex' Verständnis für SEO und professionelles Schreiben für Websites, Apps und Blogs ist ein wesentlicher Vorteil für Memoro. Diese Fähigkeiten können genutzt werden, um die Sichtbarkeit der App in Suchmaschinen zu verbessern und ansprechende, informative Inhalte für potenzielle und bestehende Nutzer zu erstellen. + +Ihre Erfahrung in der Usability-Testbegleitung ist besonders wertvoll für Memoro. In Zusammenarbeit mit den Entwicklern kann Alex dazu beitragen, die Benutzerfreundlichkeit der App kontinuierlich zu verbessern und sicherzustellen, dass sie den Bedürfnissen und Erwartungen der Nutzer entspricht. + +Alex' mehrsprachige Fähigkeiten – sie spricht fließend Bulgarisch, Englisch und Deutsch – sind ein großer Vorteil für Memoro, insbesondere im Hinblick auf die internationale Ausrichtung und Expansion des Unternehmens. Ihre Fähigkeit, in verschiedenen Sprachen zu kommunizieren, kann dazu beitragen, Marketingmaterialien und Nutzerkommunikation für verschiedene Märkte zu lokalisieren und anzupassen. + +Ihre vielfältigen Interessen, die von Geschichte über Nachhaltigkeit bis hin zu Physik und Astrophysik reichen, verleihen Alex eine breite Perspektive, die bei der Entwicklung von Marketingstrategien für verschiedene Zielgruppen von Vorteil sein kann. 
Dies ist besonders relevant für Memoro, da die App in verschiedenen Bereichen und von unterschiedlichen Nutzergruppen eingesetzt werden kann. + +Mit ihrer Kombination aus theoretischem Wissen und praktischer Erfahrung im digitalen Marketing, gepaart mit ihren Fähigkeiten in der Medienproduktion, ist Alex hervorragend positioniert, um die Marketingbemühungen von Memoro zu leiten und zu optimieren. Ihre Kreativität, ihr technisches Verständnis und ihre kommunikativen Fähigkeiten machen sie zu einem wertvollen Mitglied des Teams, das wesentlich zur Steigerung der Bekanntheit und Attraktivität der Memoro-App beitragen kann. + +# Das Umfeld und die Community von Memoro + +## Unsere Zielgruppe + +# Datenschutz, Sicherheit und Infrastruktur + +## Einleitung + +Memoro, eine innovative App zur Gesprächsdokumentation und Gedankenerfassung, legt höchsten Wert auf Datenschutz und Sicherheit. Als in Deutschland entwickelte Anwendung erfüllt Memoro die strengsten Qualitäts- und Sicherheitsstandards. Dieses Dokument bietet einen umfassenden Überblick über die Datenschutzpraktiken, Sicherheitsmaßnahmen und die zugrunde liegende Infrastruktur von Memoro. + +## Datenschutz + +Memoro ist vollständig DSGVO-konform (englisch: GDPR). Das Unternehmen hat strikte Datenschutzrichtlinien implementiert, um die Privatsphäre seiner Nutzer zu schützen. Alle personenbezogenen Daten werden mit größter Sorgfalt behandelt und ausschließlich für die vorgesehenen Zwecke verwendet. + +Folgende Datenschutzmaßnahmen sind bei Memoro implementiert: + +- Daten werden ausschließlich in Deutschland gespeichert, was eine hohe Datensicherheit und die Einhaltung strenger EU-Datenschutzgesetze gewährleistet. +- Es erfolgt keine Weitergabe von Nutzerdaten an Dritte, es sei denn, dies ist gesetzlich vorgeschrieben oder der Nutzer hat ausdrücklich zugestimmt. +- Nutzer haben volle Kontrolle über ihre Daten und können jederzeit Auskunft, Berichtigung oder Löschung ihrer personenbezogenen Daten verlangen.
+ +## Sicherheit + +Die Sicherheit der Nutzerdaten hat bei Memoro oberste Priorität. Das Unternehmen setzt modernste Sicherheitstechnologien ein, um die Integrität und Vertraulichkeit aller gespeicherten und übertragenen Daten zu gewährleisten. + +Zu den wichtigsten Sicherheitsmaßnahmen gehören: + +- Ende-zu-Ende-Verschlüsselung bei der Datenübertragung, um unbefugten Zugriff zu verhindern. +- Tägliche Backups aller Daten, um Datenverlust zu vermeiden und eine schnelle Wiederherstellung im Notfall zu ermöglichen. +- Strenge Zugriffskontrollen und Berechtigungsmanagement innerhalb des Unternehmens, um sicherzustellen, dass nur autorisiertes Personal Zugang zu sensiblen Daten hat. +- Regelmäßige Sicherheitsaudits und Penetrationstests, um potenzielle Schwachstellen frühzeitig zu erkennen und zu beheben. + +Für Unternehmen, die besonders hohe Sicherheitsanforderungen haben, bietet Memoro On-Premise-Lösungen an. Diese ermöglichen es Kunden, die Memoro-Infrastruktur in ihrer eigenen IT-Umgebung zu betreiben und so die volle Kontrolle über ihre Daten zu behalten. + +## Infrastruktur + +Die technische Infrastruktur von Memoro wurde sorgfältig konzipiert, um höchste Leistung, Skalierbarkeit und Sicherheit zu gewährleisten. Das Unternehmen nutzt eine Kombination aus eigenen Systemen und bewährten Cloud-Diensten, um eine zuverlässige und effiziente Plattform bereitzustellen. + +Kernelemente der Memoro-Infrastruktur sind: + +- Datenspeicherung: Memoro nutzt die Firebase Cloud (Google Cloud) für die Speicherung von Nutzerdaten, Eingaben, Abrechnungsdaten und Memos (einschließlich Transkripte und Zusammenfassungen). Die Datenbank-Server befinden sich in Frankfurt, Deutschland, was kurze Latenzzeiten für europäische Nutzer und die Einhaltung strenger EU-Datenschutzbestimmungen gewährleistet. + +- Audioverarbeitung: Für die Transkription von Audiodateien verwendet Memoro Azure Speech, das ebenfalls in Frankfurt gehostet wird. 
Dies ermöglicht präzise und schnelle Transkriptionen bei gleichzeitiger Einhaltung der Datenschutzbestimmungen. + +- KI-gestützte Analysen: Die Erstellung von Zusammenfassungen wird durch Azure OpenAI unterstützt, das in Paris gehostet wird. Wichtig zu betonen ist, dass die verarbeiteten Daten nicht zum Training der KI-Modelle genutzt werden, was den Schutz der Nutzerinformationen zusätzlich stärkt. + +- Anwendungshosting: Die Memoro-App ist sowohl für iOS (über den Apple App Store) als auch für Android (über den Google Play Store) verfügbar. Zusätzlich gibt es einen Desktop-Zugang, der es Nutzern ermöglicht, ihre aufgenommenen Memos zu lesen und zu verwalten. + +Die Infrastruktur von Memoro ist so konzipiert, dass sie problemlos skaliert werden kann, um mit dem Wachstum der Nutzerbasis Schritt zu halten. Regelmäßige Leistungsoptimierungen und Kapazitätserweiterungen stellen sicher, dass die App auch bei hoher Auslastung reibungslos funktioniert. + +## Datenspeicherung und \-verarbeitung + +Memoro verarbeitet verschiedene Arten von Daten, um seinen Nutzern einen umfassenden Service zu bieten. Dazu gehören: + +- Sprachaufnahmen +- Transkriptionen der Sprachaufnahmen +- Zusammenfassungen der Transkriptionen +- Nutzungsdaten (z.B. IP-Adresse, Gerätetyp, Betriebssystem) +- Diagnosedaten und Fehlerberichte + +Die Speicherdauer dieser Daten ist wie folgt geregelt: + +- Nutzungsdaten werden maximal 26 Monate nach der Erfassung aufbewahrt. +- Sprachaufnahmen, Transkriptionen und Zusammenfassungen bleiben für die Dauer des Nutzungsverhältnisses gespeichert und werden spätestens 6 Monate nach Beendigung des Nutzungsverhältnisses gelöscht. +- Diagnosedaten und Fehlerberichte werden maximal 90 Tage nach der Erfassung aufbewahrt. + +Bei den verwendeten Azure-Diensten gelten folgende Speicherfristen: + +- Azure Speech: Daten werden maximal 31 Tage gespeichert. +- Azure OpenAI: Daten werden maximal 30 Tage gespeichert. 
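Die oben genannten Speicherfristen lassen sich zur Veranschaulichung als kleine Hilfsfunktion skizzieren. Dies ist eine rein illustrative Skizze: Funktions- und Variablennamen (`add_months`, `erfasst` usw.) sind Annahmen und stammen nicht aus dem Memoro-Code; Monatsfristen werden hier als Kalendermonate gerechnet.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Addiert Kalendermonate; der Tag wird bei kürzeren Zielmonaten
    # auf das Monatsende gekappt (z. B. 31. Januar + 1 Monat -> Monatsende Februar).
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days))

erfasst = date(2025, 1, 15)  # Beispiel: Erfassungsdatum eines Datensatzes

# Späteste Löschtermine laut den im Dokument genannten Fristen:
loeschung_nutzungsdaten = add_months(erfasst, 26)        # max. 26 Monate
loeschung_diagnosedaten = erfasst + timedelta(days=90)   # max. 90 Tage
loeschung_azure_speech = erfasst + timedelta(days=31)    # max. 31 Tage
loeschung_azure_openai = erfasst + timedelta(days=30)    # max. 30 Tage

print(loeschung_nutzungsdaten)  # → 2027-03-15
```

Die Frist von 6 Monaten nach Beendigung des Nutzungsverhältnisses (für Sprachaufnahmen, Transkriptionen und Zusammenfassungen) ließe sich analog über `add_months(vertragsende, 6)` abbilden.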
+ +Diese Speicherfristen gewährleisten, dass Memoro seinen Nutzern einen zuverlässigen Service bieten kann, während gleichzeitig der Datenschutz gewahrt bleibt. + +## Kontinuierliche Verbesserung und Nutzerfeedback + +Memoro legt großen Wert auf die kontinuierliche Verbesserung seiner Dienste. Zu diesem Zweck werden anonymisierte Nutzungsanalysen durchgeführt, die dabei helfen, die App stetig zu optimieren und an die Bedürfnisse der Nutzer anzupassen. Hierbei kommt Google Analytics zum Einsatz, wobei streng darauf geachtet wird, dass keine personenbezogenen Daten in die Analysen einfließen. + +Zur Erkennung und Analyse von App-Fehlern nutzt Memoro Firebase Crashlytics. Dies ermöglicht es dem Entwicklungsteam, auftretende Probleme schnell zu identifizieren und zu beheben, was zu einer stabilen und zuverlässigen Nutzererfahrung beiträgt. + +Memoro ermutigt seine Nutzer aktiv, Feedback zu geben und Verbesserungsvorschläge einzureichen. Dieses Feedback wird sorgfältig geprüft und fließt in die Weiterentwicklung der App ein. Nutzer können sich jederzeit mit Fragen oder Anliegen an den Kundenservice unter [kontakt@memoro.ai](mailto:kontakt@memoro.ai) wenden. + +## Kernbotschaften + +Bei der Kommunikation über Memoros Datenschutz, Sicherheit und Infrastruktur sollten folgende Kernbotschaften im Mittelpunkt stehen: + +1. **Made in Germany**: Memoro wird vollständig in Deutschland entwickelt und folgt den höchsten Qualitäts- und Sicherheitsstandards. Dies unterstreicht das Engagement für Exzellenz und Vertrauenswürdigkeit. + +2. **DSGVO-Konformität**: Memoro erfüllt alle Anforderungen der Datenschutz-Grundverordnung (DSGVO). Dies gewährleistet, dass die Privatsphäre der Nutzer geschützt und ihre Rechte in Bezug auf ihre persönlichen Daten respektiert werden. + +3. **Datenspeicherung in Deutschland**: Alle Nutzerdaten werden ausschließlich in deutschen Rechenzentren gespeichert. 
Dies garantiert die Einhaltung strenger europäischer Datenschutzgesetze und minimiert das Risiko des Zugriffs durch ausländische Behörden. + +4. **Ende-zu-Ende-Verschlüsselung**: Memoro verwendet modernste Verschlüsselungstechnologien, um die Sicherheit der Nutzerdaten während der Übertragung zu gewährleisten. Dies schützt vor unbefugtem Zugriff und Datenmissbrauch. + +5. **Keine Weitergabe an unbefugte Dritte**: Memoro verpflichtet sich, Nutzerdaten nicht an Dritte weiterzugeben, es sei denn, dies ist gesetzlich vorgeschrieben oder der Nutzer hat ausdrücklich zugestimmt. Dies unterstreicht das Engagement für den Schutz der Privatsphäre der Nutzer. + +6. **Transparenz und Kontrolle**: Nutzer haben volle Kontrolle über ihre Daten und können jederzeit Auskunft, Berichtigung oder Löschung ihrer personenbezogenen Daten verlangen. Diese Transparenz fördert das Vertrauen in Memoro. + +7. **Kontinuierliche Verbesserung**: Memoro investiert ständig in die Verbesserung seiner Sicherheitsmaßnahmen und Datenschutzpraktiken. Regelmäßige Audits und Updates gewährleisten, dass die App immer auf dem neuesten Stand der Technik ist. + +8. **Branchenführende Technologien**: Durch die Nutzung von Azure Speech und Azure OpenAI bietet Memoro fortschrittliche KI-Funktionen, ohne dabei Kompromisse beim Datenschutz einzugehen. Die Daten werden nicht zum Training der KI-Modelle verwendet. + +9. **Flexible Lösungen für Unternehmen**: Mit On-Premise-Optionen bietet Memoro auch für Unternehmen mit besonders hohen Sicherheitsanforderungen passende Lösungen. Dies unterstreicht die Anpassungsfähigkeit und das Verständnis für unterschiedliche Sicherheitsbedürfnisse. + +10. **Vertrauenswürdiger Partner**: Memoro positioniert sich als vertrauenswürdiger Partner für Einzelpersonen und Unternehmen, der die Bedeutung von Datenschutz und Sicherheit in der digitalen Welt versteht und priorisiert. 
Diese Kernbotschaften sollten in allen Kommunikationskanälen konsistent vermittelt werden, sei es in Marketingmaterialien, Kundengesprächen oder bei der Produktpräsentation. Sie unterstreichen Memoros Engagement für Datenschutz, Sicherheit und technologische Innovation und differenzieren das Unternehmen in einem zunehmend wettbewerbsintensiven Markt. + +# Memoro Kontakt + +- Webseite: [https://www.memoro.ai/](https://www.memoro.ai/) +- LinkedIn: [https://www.linkedin.com/company/memoroai](https://www.linkedin.com/company/memoroai) +- Instagram: [https://www.instagram.com/memoroai/](https://www.instagram.com/memoroai/) +- WhatsApp: [+41 79 370 88 99](https://wa.me/41793708899) +- Adresse: Reichenaustraße 11a, 78467 Konstanz diff --git a/apps/memoro/apps/landing/context/blueprints/handwerk-blueprints.md b/apps/memoro/apps/landing/context/blueprints/handwerk-blueprints.md new file mode 100644 index 000000000..593f5fc88 --- /dev/null +++ b/apps/memoro/apps/landing/context/blueprints/handwerk-blueprints.md @@ -0,0 +1,242 @@ +# Blueprint-Ideen für das Handwerk - Handwerker (Final) + +## Die 4 wichtigsten Blueprints - NUR mit den 8 verfügbaren System-Prompts (ohne Schlüsselpunkte) + +--- + +## 1. Kundengespräch & Angebotserstellung / Customer Meeting & Quote Preparation +**Kategorie**: Handwerk +**Farbe**: #FF6F00 + +### Beschreibung +**Deutsch**: Dokumentiert Kundenwünsche, technische Anforderungen und Preisabsprachen. Erstellt strukturierte Grundlagen für Angebote und Auftragsbestätigungen. + +**English**: Documents customer requirements, technical specifications, and price agreements. Creates structured foundations for quotes and order confirmations. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Kompakte Projektübersicht für die Kalkulation + +2. 
**Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Detaillierte Kundenanforderungen und Leistungsumfang + +3. **Aufgaben & Termine** (Sort Order: 3) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Besichtigungstermine, Lieferzeiten, Fertigstellungsdaten + +4. **Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Klärungsbedarf für Material, Technik oder Genehmigungen + +--- + +## 2. Baustellendokumentation & Qualitätssicherung / Site Documentation & Quality Control +**Kategorie**: Handwerk +**Farbe**: #FF6F00 + +### Beschreibung +**Deutsch**: Erfasst Baufortschritt, Mängel, Abnahmen und wichtige Entscheidungen. Perfekt für Gewährleistung und Nachweise. + +**English**: Records construction progress, defects, approvals, and important decisions. Perfect for warranty and documentation. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Tagesübersicht Baufortschritt + +2. **Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Detaillierte Tagesprotokolle und Fortschrittsdokumentation + +3. **Aufgaben & Termine** (Sort Order: 3) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Nacharbeiten, Abnahmetermine, Materialbestellungen + +4. **Beantwortete Fragen & Antworten** (Sort Order: 4) + - Prompt ID: `47ce3340-e8c6-437c-928d-854c55589491` + - Memory Title: Q&A / Questions & Answers + - Technische Klärungen und Lösungen + +--- + +## 3. 
Team-Besprechung & Arbeitsplanung / Team Meeting & Work Planning +**Kategorie**: Handwerk +**Farbe**: #FF6F00 + +### Beschreibung +**Deutsch**: Strukturiert Teambesprechungen, Arbeitseinteilung und Projektkoordination. Dokumentiert Zuständigkeiten und Arbeitsabläufe. + +**English**: Structures team meetings, work allocation, and project coordination. Documents responsibilities and workflows. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Wochenplanung kompakt + +2. **Aufgaben & Termine** (Sort Order: 2) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Arbeitseinteilung, Deadlines, Materialvorbereitung + +3. **Gesammelte Ideen & Vorschläge** (Sort Order: 3) + - Prompt ID: `8cdc89a5-2f76-4d50-a93d-0c177c3e73ab` + - Memory Title: Ideen & Vorschläge / Ideas & Suggestions + - Prozessverbesserungen und Lösungsansätze + +4. **Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Klärungsbedarf mit Auftraggeber oder Lieferanten + +--- + +## 4. Fachliche Weiterbildung & Schulungen / Professional Training & Education +**Kategorie**: Handwerk +**Farbe**: #FF6F00 + +### Beschreibung +**Deutsch**: Dokumentiert Schulungen, neue Techniken und Zertifizierungen. Erstellt Wissensdatenbank für Mitarbeiter und Nachweise für Kunden. + +**English**: Documents training, new techniques, and certifications. Creates knowledge base for employees and proof for customers. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Schulung auf einen Blick + +2. 
**Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Vollständige Schulungsdokumentation mit allen Details + +3. **Beantwortete Fragen & Antworten** (Sort Order: 3) + - Prompt ID: `47ce3340-e8c6-437c-928d-854c55589491` + - Memory Title: Q&A / Questions & Answers + - Technisches Fachwissen zum Nachschlagen + +4. **Blogbeitrag** (Sort Order: 4) + - Prompt ID: `2c6a6e47-1d0c-441f-9449-b5d908bffba2` + - Memory Title: Blogbeitrag / Blog Post + - Für Firmen-Website oder Kundennewsletter + +--- + +## Implementierungsdetails + +### JSON-Struktur für Supabase +```json +{ + "category_id": "handwerk-category-id", + "blueprints": [ + { + "name": { + "de": "Kundengespräch & Angebotserstellung", + "en": "Customer Meeting & Quote Preparation" + }, + "description": { + "de": "Dokumentiert Kundenwünsche und erstellt Angebotsgrundlagen", + "en": "Documents customer requirements and creates quote foundations" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Baustellendokumentation & Qualitätssicherung", + "en": "Site Documentation & Quality Control" + }, + "description": { + "de": "Erfasst Baufortschritt, Mängel und Abnahmen", + "en": "Records construction progress, defects, and approvals" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "47ce3340-e8c6-437c-928d-854c55589491" + ] + }, + { + "name": { + "de": "Team-Besprechung & Arbeitsplanung", + "en": "Team Meeting & Work Planning" + }, + "description": { + "de": "Strukturiert Teambesprechungen und Arbeitseinteilung", + "en": "Structures team meetings and work allocation" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + 
"7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "8cdc89a5-2f76-4d50-a93d-0c177c3e73ab", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Fachliche Weiterbildung & Schulungen", + "en": "Professional Training & Education" + }, + "description": { + "de": "Dokumentiert Schulungen und neue Techniken", + "en": "Documents training and new techniques" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "47ce3340-e8c6-437c-928d-854c55589491", + "2c6a6e47-1d0c-441f-9449-b5d908bffba2" + ] + } + ] +} +``` + +## Vorteile dieser finalen Version + +### ✅ 100% Kompatibilität +- **NUR die 8 tatsächlich verfügbaren Prompts** werden verwendet (ohne Schlüsselpunkte) +- Keine Phantasie-IDs oder nicht existierende Prompts +- Sofort implementierbar ohne Backend-Änderungen + +### ✅ Vollständige Abdeckung +- **Kundenbetreuung**: Von Erstgespräch bis Angebot +- **Projektdokumentation**: Lückenlose Baustellendokumentation +- **Teamorganisation**: Effiziente Arbeitsplanung +- **Qualifikation**: Weiterbildung und Zertifizierungen + +### ✅ Praktischer Nutzen für Handwerker +- **Rechtssicherheit**: Dokumentation für Gewährleistung und Beweissicherung +- **Effizienz**: Strukturierte Arbeitsabläufe und klare Zuständigkeiten +- **Qualität**: Systematische Erfassung von Mängeln und Nacharbeiten +- **Kundenbindung**: Professionelle Dokumentation und Kommunikation +- **Wissensmanagement**: Schulungsinhalte und Best Practices im Team teilen + +### ✅ Handwerksspezifische Anwendungsfälle +- Mängelprotokolle und Abnahmen +- Materialbestellungen und Liefertermine +- Arbeitszeiten und Leistungserfassung +- Technische Klärungen und Normen +- Kundenwünsche und Änderungen +- Sicherheitsunterweisungen + +### ✅ Einfache Implementierung +- Direkt in Supabase einfügbar +- Keine neuen Prompts nötig +- Verwendet bestehende Infrastruktur +- Mehrsprachigkeit bereits integriert (DE, EN, IT, FR, ES) \ No newline at end of file diff --git 
a/apps/memoro/apps/landing/context/blueprints/office-blueprints.md b/apps/memoro/apps/landing/context/blueprints/office-blueprints.md new file mode 100644 index 000000000..70b0a0a98 --- /dev/null +++ b/apps/memoro/apps/landing/context/blueprints/office-blueprints.md @@ -0,0 +1,244 @@ +# Blueprint-Ideen für den Büro-Kontext / Office (Final) + +## Die 4 wichtigsten Blueprints - NUR mit den 8 verfügbaren System-Prompts (ohne Schlüsselpunkte) + +--- + +## 1. Meeting-Protokoll & Follow-Up / Meeting Minutes & Follow-Up +**Kategorie**: Büro / Office +**Farbe**: #2196F3 + +### Beschreibung +**Deutsch**: Erstellt strukturierte Meeting-Protokolle mit Entscheidungen, Aufgaben und Terminen. Perfekt für effiziente Nachbereitung und klare Verantwortlichkeiten. + +**English**: Creates structured meeting minutes with decisions, tasks, and deadlines. Perfect for efficient follow-up and clear responsibilities. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Meeting-Ergebnisse auf einen Blick + +2. **Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Vollständiges Protokoll mit allen Diskussionspunkten + +3. **Aufgaben & Termine** (Sort Order: 3) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Action Items mit Verantwortlichen und Deadlines + +4. **Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Themen für das nächste Meeting + +--- + +## 2. 
Brainstorming & Ideenentwicklung / Brainstorming & Idea Development +**Kategorie**: Büro / Office +**Farbe**: #2196F3 + +### Beschreibung +**Deutsch**: Erfasst und strukturiert kreative Sessions, Workshops und Strategieentwicklung. Dokumentiert alle Ideen und priorisiert Umsetzungsschritte. + +**English**: Captures and structures creative sessions, workshops, and strategy development. Documents all ideas and prioritizes implementation steps. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Session-Ergebnis kompakt + +2. **Gesammelte Ideen & Vorschläge** (Sort Order: 2) + - Prompt ID: `8cdc89a5-2f76-4d50-a93d-0c177c3e73ab` + - Memory Title: Ideen & Vorschläge / Ideas & Suggestions + - Alle Konzepte nach Umsetzbarkeit sortiert + +3. **Aufgaben & Termine** (Sort Order: 3) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Nächste Schritte zur Umsetzung + +4. **Blogbeitrag** (Sort Order: 4) + - Prompt ID: `2c6a6e47-1d0c-441f-9449-b5d908bffba2` + - Memory Title: Blogbeitrag / Blog Post + - Für internes Wissensmanagement oder Intranet + +--- + +## 3. Projektbesprechung & Statusupdate / Project Meeting & Status Update +**Kategorie**: Büro / Office +**Farbe**: #2196F3 + +### Beschreibung +**Deutsch**: Dokumentiert Projektfortschritt, Meilensteine und Hindernisse. Ideal für Steering Committees, Sprint Reviews und Stakeholder-Updates. + +**English**: Documents project progress, milestones, and obstacles. Ideal for steering committees, sprint reviews, and stakeholder updates. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Projektstatus Executive Summary + +2. 
**Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Detaillierter Fortschrittsbericht + +3. **Aufgaben & Termine** (Sort Order: 3) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Meilensteine und kritische Pfade + +4. **Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Risiken und Eskalationsbedarf + +--- + +## 4. Kommunikations-Content / Communication Content +**Kategorie**: Büro / Office +**Farbe**: #2196F3 + +### Beschreibung +**Deutsch**: Verwandelt Besprechungen und Präsentationen in professionelle Kommunikationsinhalte für verschiedene Kanäle und Zielgruppen. + +**English**: Transforms meetings and presentations into professional communication content for various channels and audiences. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Management Summary für Führungsebene + +2. **Blogbeitrag** (Sort Order: 2) + - Prompt ID: `2c6a6e47-1d0c-441f-9449-b5d908bffba2` + - Memory Title: Blogbeitrag / Blog Post + - Für Intranet oder Unternehmens-Blog + +3. **Social Media Posts** (Sort Order: 3) + - Prompt ID: `b2e39e0a-ec1f-4d0e-813d-f1a08493332b` + - Memory Title: Social Media Posts + - LinkedIn, Twitter für Corporate Communications + +4. 
**Ausführliche Zusammenfassung** (Sort Order: 4) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Hintergrundinformationen für Newsletter + +--- + +## Implementierungsdetails + +### JSON-Struktur für Supabase +```json +{ + "category_id": "office-category-id", + "blueprints": [ + { + "name": { + "de": "Meeting-Protokoll & Follow-Up", + "en": "Meeting Minutes & Follow-Up" + }, + "description": { + "de": "Strukturierte Meeting-Protokolle mit Aufgaben und Entscheidungen", + "en": "Structured meeting minutes with tasks and decisions" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Brainstorming & Ideenentwicklung", + "en": "Brainstorming & Idea Development" + }, + "description": { + "de": "Erfasst kreative Sessions und priorisiert Umsetzungsschritte", + "en": "Captures creative sessions and prioritizes implementation steps" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "8cdc89a5-2f76-4d50-a93d-0c177c3e73ab", + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "2c6a6e47-1d0c-441f-9449-b5d908bffba2" + ] + }, + { + "name": { + "de": "Projektbesprechung & Statusupdate", + "en": "Project Meeting & Status Update" + }, + "description": { + "de": "Dokumentiert Projektfortschritt und Meilensteine", + "en": "Documents project progress and milestones" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Kommunikations-Content", + "en": "Communication Content" + }, + "description": { + "de": "Professionelle Inhalte für verschiedene Kanäle", + "en": "Professional content for various channels" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + 
"2c6a6e47-1d0c-441f-9449-b5d908bffba2", + "b2e39e0a-ec1f-4d0e-813d-f1a08493332b", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4" + ] + } + ] +} +``` + +## Vorteile dieser finalen Version + +### ✅ 100% Kompatibilität +- **NUR die 8 tatsächlich verfügbaren Prompts** werden verwendet (ohne Schlüsselpunkte) +- Keine Phantasie-IDs oder nicht existierende Prompts +- Sofort implementierbar ohne Backend-Änderungen + +### ✅ Vollständige Abdeckung +- **Meetings**: Effiziente Protokollierung und Nachverfolgung +- **Innovation**: Strukturierte Ideenentwicklung und Workshops +- **Projektmanagement**: Lückenlose Statusdokumentation +- **Kommunikation**: Multi-Channel Content-Erstellung + +### ✅ Praktischer Nutzen für Büro-Mitarbeiter +- **Zeitersparnis**: Automatische Protokollerstellung statt manueller Notizen +- **Klarheit**: Eindeutige Aufgabenverteilung und Verantwortlichkeiten +- **Transparenz**: Nachvollziehbare Entscheidungen und Projektfortschritte +- **Effizienz**: Ein Gespräch, mehrere Outputs (Protokoll, Blog, Social Media) +- **Compliance**: Revisionssichere Dokumentation wichtiger Meetings + +### ✅ Büro-spezifische Anwendungsfälle +- Board-Meetings und Vorstandssitzungen +- Agile Sprint Reviews und Retrospektiven +- Kundenpräsentationen und Pitches +- Strategieworkshops und OKR-Planungen +- Team-Meetings und Jour Fixes +- Change Management Kommunikation +- Internal Communications +- Stakeholder Updates + +### ✅ Einfache Implementierung +- Direkt in Supabase einfügbar +- Keine neuen Prompts nötig +- Verwendet bestehende Infrastruktur +- Mehrsprachigkeit bereits integriert (DE, EN, IT, FR, ES) \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/blueprints/university-student-blueprints-FINAL.md b/apps/memoro/apps/landing/context/blueprints/university-student-blueprints-FINAL.md new file mode 100644 index 000000000..c60bb3e49 --- /dev/null +++ b/apps/memoro/apps/landing/context/blueprints/university-student-blueprints-FINAL.md @@ -0,0 +1,234 @@ +# 
Blueprint-Ideen für den Universitären Kontext - Studenten (FINAL) + +## Die 4 wichtigsten Blueprints - NUR mit den 8 verfügbaren System-Prompts + +--- + +## 1. Vorlesungsanalyse / Lecture Analysis +**Kategorie**: Universität +**Farbe**: #9C27B0 + +### Beschreibung +**Deutsch**: Umfassende Analyse von Vorlesungen mit automatischer Erstellung von Zusammenfassungen und offenen Fragen für die Prüfungsvorbereitung. + +**English**: Comprehensive analysis of lectures with automatic creation of summaries and open questions for exam preparation. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Kurzzusammenfassung** (Sort Order: 1) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Schneller Überblick über die Vorlesung + +2. **Ausführliche Zusammenfassung** (Sort Order: 2) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Detaillierte Nachbereitung + +3. **Offene Fragen** (Sort Order: 3) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Für Sprechstunden und Verständnisfragen + +4. **Beantwortete Fragen & Antworten** (Sort Order: 4) + - Prompt ID: `47ce3340-e8c6-437c-928d-854c55589491` + - Memory Title: Q&A / Questions & Answers + - Perfekt für Lernkarten + +--- + +## 2. Seminar & Gruppenarbeit / Seminar & Group Work +**Kategorie**: Universität +**Farbe**: #9C27B0 + +### Beschreibung +**Deutsch**: Perfekt für Seminardiskussionen und Gruppenarbeiten - erfasst Aufgaben, Ideen und erstellt strukturierte Dokumentation. + +**English**: Perfect for seminar discussions and group work - captures tasks, ideas, and creates structured documentation. + +### Verwendete Prompts (100% VERFÜGBAR) +1. 
**Aufgaben & Termine** (Sort Order: 1) + - Prompt ID: `7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48` + - Memory Title: Aufgaben & Termine / Tasks & Appointments + - Mit Verantwortlichkeiten und Deadlines + +2. **Gesammelte Ideen & Vorschläge** (Sort Order: 2) + - Prompt ID: `8cdc89a5-2f76-4d50-a93d-0c177c3e73ab` + - Memory Title: Ideen & Vorschläge / Ideas & Suggestions + - Brainstorming und kreative Ansätze + +3. **Kurzzusammenfassung** (Sort Order: 3) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Hauptergebnisse der Diskussion + +4. **Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Für die nächste Sitzung + +--- + +## 3. Prüfungsvorbereitung / Exam Preparation +**Kategorie**: Universität +**Farbe**: #9C27B0 + +### Beschreibung +**Deutsch**: Speziell für die intensive Prüfungsvorbereitung - verwandelt Lernmaterial in strukturierte Lernhilfen mit Q&A und Zusammenfassungen. + +**English**: Specifically for intensive exam preparation - transforms study material into structured learning aids with Q&A and summaries. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Beantwortete Fragen & Antworten** (Sort Order: 1) + - Prompt ID: `47ce3340-e8c6-437c-928d-854c55589491` + - Memory Title: Q&A / Questions & Answers + - Perfekt für Lernkarten und Selbsttest + +2. **Kurzzusammenfassung** (Sort Order: 2) + - Prompt ID: `c4009bef-4504-4af7-86f5-f896a2412a0a` + - Memory Title: Kurzzusammenfassung / Executive Summary + - Schnelle Wiederholung vor der Prüfung + +3. **Ausführliche Zusammenfassung** (Sort Order: 3) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Detailliertes Prüfungsmaterial + +4. 
**Offene Fragen** (Sort Order: 4) + - Prompt ID: `c576e875-5a52-4f6a-abb7-0c62c945af78` + - Memory Title: Offene Fragen / Open Questions + - Identifiziert Wissenslücken + +--- + +## 4. Content-Erstellung für Studienarbeiten / Academic Content Creation +**Kategorie**: Universität +**Farbe**: #9C27B0 + +### Beschreibung +**Deutsch**: Verwandelt Recherche und Diskussionen in strukturierte Inhalte für Hausarbeiten, Präsentationen und wissenschaftliche Blogs. + +**English**: Transforms research and discussions into structured content for term papers, presentations, and academic blogs. + +### Verwendete Prompts (100% VERFÜGBAR) +1. **Ausführliche Zusammenfassung** (Sort Order: 1) + - Prompt ID: `4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + - Memory Title: Ausführliche Zusammenfassung / Detailed Summary + - Basis für Literaturarbeit + +2. **Blogbeitrag** (Sort Order: 2) + - Prompt ID: `2c6a6e47-1d0c-441f-9449-b5d908bffba2` + - Memory Title: Blogbeitrag / Blog Post + - Für wissenschaftliche Blogs oder Artikel + +3. **Social Media Posts** (Sort Order: 3) + - Prompt ID: `b2e39e0a-ec1f-4d0e-813d-f1a08493332b` + - Memory Title: Social Media Posts + - Für akademisches Networking (LinkedIn) + +4. 
**Gesammelte Ideen & Vorschläge** (Sort Order: 4) + - Prompt ID: `8cdc89a5-2f76-4d50-a93d-0c177c3e73ab` + - Memory Title: Ideen & Vorschläge / Ideas & Suggestions + - Kreative Ansätze für Arbeiten + +--- + +## Implementierungsdetails + +### JSON-Struktur für Supabase +```json +{ + "category_id": "b26c7a49-187d-4429-9dc6-ba55de512a8d", + "blueprints": [ + { + "name": { + "de": "Vorlesungsanalyse", + "en": "Lecture Analysis" + }, + "description": { + "de": "Umfassende Analyse von Vorlesungen mit Zusammenfassungen und Q&A", + "en": "Comprehensive lecture analysis with summaries and Q&A" + }, + "prompts": [ + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "c576e875-5a52-4f6a-abb7-0c62c945af78", + "47ce3340-e8c6-437c-928d-854c55589491" + ] + }, + { + "name": { + "de": "Seminar & Gruppenarbeit", + "en": "Seminar & Group Work" + }, + "description": { + "de": "Erfasst Aufgaben, Ideen und Diskussionsergebnisse", + "en": "Captures tasks, ideas, and discussion results" + }, + "prompts": [ + "7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48", + "8cdc89a5-2f76-4d50-a93d-0c177c3e73ab", + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Prüfungsvorbereitung", + "en": "Exam Preparation" + }, + "description": { + "de": "Strukturierte Lernhilfen mit Q&A und Zusammenfassungen", + "en": "Structured learning aids with Q&A and summaries" + }, + "prompts": [ + "47ce3340-e8c6-437c-928d-854c55589491", + "c4009bef-4504-4af7-86f5-f896a2412a0a", + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "c576e875-5a52-4f6a-abb7-0c62c945af78" + ] + }, + { + "name": { + "de": "Content-Erstellung für Studienarbeiten", + "en": "Academic Content Creation" + }, + "description": { + "de": "Strukturierte Inhalte für Hausarbeiten und Präsentationen", + "en": "Structured content for term papers and presentations" + }, + "prompts": [ + "4370cb68-d676-4b93-8afd-2fb7c4ad78c4", + "2c6a6e47-1d0c-441f-9449-b5d908bffba2", + 
"b2e39e0a-ec1f-4d0e-813d-f1a08493332b", + "8cdc89a5-2f76-4d50-a93d-0c177c3e73ab" + ] + } + ] +} +``` + +## Vorteile dieser finalen Version + +### ✅ 100% Kompatibilität +- **NUR die 8 tatsächlich verfügbaren Prompts** werden verwendet +- Keine nicht existierenden Prompts (wie "Schlüsselpunkte") +- Sofort implementierbar ohne Backend-Änderungen + +### ✅ Vollständige Abdeckung +- **Vorlesungen**: Nachbereitung und Verständnis +- **Gruppenarbeit**: Organisation und Ideensammlung +- **Prüfungen**: Strukturierte Vorbereitung +- **Wissenschaftliches Schreiben**: Content-Erstellung + +### ✅ Praktischer Nutzen +- Jeder Blueprint löst ein reales studentisches Problem +- Sinnvolle Kombination der verfügbaren Prompts +- Mehrsprachigkeit bereits integriert (DE, EN, IT, FR, ES) + +### ✅ Einfache Implementierung +- Direkt in Supabase einfügbar +- Keine neuen Prompts nötig +- Verwendet bestehende Infrastruktur + +## Wichtiger Hinweis +Der Prompt "Schlüsselpunkte" (ID: 9b411221-6f52-4534-9ea9-dd1904259e8c) existiert NICHT in der Datenbank und wurde in dieser finalen Version komplett entfernt. \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/legal/Memoro-TOMs.md b/apps/memoro/apps/landing/context/legal/Memoro-TOMs.md new file mode 100644 index 000000000..92c11e551 --- /dev/null +++ b/apps/memoro/apps/landing/context/legal/Memoro-TOMs.md @@ -0,0 +1,1099 @@ +Technische und organisatorische Maßnahmen, +Infrastruktur und Datenströme (TOMs) +gemäß Art. 32 DSGVO +Version: 2.3 +Datum: 15.07.2025 +Verantwortlicher: Nils Weiser + +1. Allgemeine Angaben + Unternehmen: + Memoro GmbH + Reichenaustraße 11a + 78467 Konstanz + E-Mail: kontakt@memoro.ai + Telefon: +49 176 444 343 85 + Datenschutzbeauftragter: + Nils Weiser + E-Mail: kontakt@memoro.ai + Telefon: +49 176 444 343 85 + Einleitung: Infrastruktur und Datenflüsse + Dieses Dokument beschreibt transparent unsere technische Infrastruktur und die Verarbeitung + Ihrer Daten. 
Unser oberstes Credo ist der Schutz Ihrer Privatsphäre: Wir werden Ihre Daten
+niemals einsehen oder verkaufen. Wir setzen gezielt auf Lösungen, die höchsten europäischen
+Datenschutzstandards entsprechen, und reduzieren stetig die Abhängigkeit von außereuropäischen
+Anbietern.
+Der Weg Ihrer Daten bei Memoro: Von der Aufnahme zur Analyse
+1. Aufnahme und Speicherung: Wenn Sie eine Memo aufzeichnen, wird die Audiodatei
+zunächst sicher auf Ihrem Endgerät gespeichert. Sobald eine Internetverbindung besteht,
+wird die Aufnahme verschlüsselt zu unserem Backend-Dienstleister Supabase
+hochgeladen. Die Datenbank und die darin gespeicherten Audiodateien befinden sich
+physisch in einem Rechenzentrum in Frankfurt, Deutschland.
+2. Transkription: Um Ihre Sprachaufnahme in Text umzuwandeln, wird die Audiodatei an
+eine Instanz von Microsoft Azure in Schweden gesendet. Wir nutzen diesen Standort, da
+dort die neuesten und qualitativ hochwertigsten Transkriptionsmodelle verfügbar sind. Ihre
+Daten verlassen dabei zu keinem Zeitpunkt den Europäischen Wirtschaftsraum.
+3. Dateikonvertierung (bei Bedarf): In seltenen Fällen, in denen Ihr Endgerät
+Audioaufnahmen in einem nicht-standardisierten Format speichert, wird die Datei zur
+Konvertierung an Google Cloud in Frankfurt gesendet. Google konvertiert lediglich das
+Dateiformat, ohne den Inhalt zu analysieren oder dauerhaft zu speichern.
+
+Memoro - Technische und organisatorische Maßnahmen (TOMs) Seite 1 / 23
+
+4. Analyse und Erstellung von "Memories": Das erstellte Transkript wird zurück an unsere
+Supabase-Datenbank in Frankfurt gesendet. Aus diesem Text werden Analyseabschnitte
+("Memories") generiert. Für diesen Schritt nutzen wir je nach Anforderung die
+leistungsfähigsten KI-Modelle:
+○ Google Gemini: Die Verarbeitung findet auf Servern in Belgien statt. Google
+garantiert, dass diese Daten nicht für das Training von Modellen verwendet werden.
+Die Daten werden nach spätestens 30 Tagen gelöscht.
+○ OpenAI-Modelle via Microsoft Azure: Die Verarbeitung erfolgt auf unserer Instanz
+in Schweden, um von den fortschrittlichsten verfügbaren Modellen zu profitieren.
+Microsoft garantiert, dass diese Daten nicht für das Training von Modellen
+verwendet werden. Die Daten werden nach spätestens 30 Tagen gelöscht.
+○ Erklärung: Die Daten werden bei Google Gemini und Microsoft Azure nicht aktiv
+gespeichert, sondern befinden sich lediglich in deren internen Caching- und
+Logging-Systemen. Diese Systeme löschen die Daten automatisch nach maximal
+30 Tagen. Memoro hat keinen Zugriff auf diese zwischengespeicherten Daten, und
+sie werden ausschließlich für interne Sicherheits- und Qualitätssicherungszwecke
+der Anbieter vorgehalten, jedoch nicht für Modelltraining verwendet.
+5. Die finalen Analysen werden wiederum sicher in unserer Supabase-Datenbank in
+Frankfurt gespeichert.
+Welche Daten werden verarbeitet?
+Wenn Sie Memoro nutzen, werden folgende Kategorien von Daten an unser Backend (gehostet bei
+Supabase in Frankfurt) übermittelt und dort verarbeitet:
+● Inhaltsdaten: Die von Ihnen aufgezeichneten Audiodateien, die daraus erstellten
+Transkripte sowie die finalen Analyseabschnitte ("Memories").
+● Account-Daten: Ihre E-Mail-Adresse und Ihr Name (falls angegeben) zur Erstellung und
+Verwaltung Ihres Nutzerkontos. Das Passwort wird ausschließlich als verschlüsselter
+Hash-Wert gespeichert.
+● Nutzungs- und Metadaten: Zeitstempel der Aufnahmen, Dateiformate, Gerätemodell (zur
+Fehleranalyse) und Interaktionsdaten innerhalb der App (z.B. welche Funktionen genutzt
+werden), um unseren Dienst zu verbessern.
+● Technische Verbindungsdaten: Ihre IP-Adresse, die zur Herstellung der Verbindung zu
+unseren Servern temporär verarbeitet wird, sowie Authentifizierungstokens zur Sicherung
+Ihres Accounts.
+Unsere Unterauftragsverarbeiter (Subdienstleister)
+● Supabase: (Backend & Datenbank), Serverstandort: Frankfurt, DE.
+● Microsoft Azure: (Transkription & KI-Analyse), Serverstandort: Schweden, EU.
+● Google Cloud: (Dateikonvertierung & KI-Analyse), Serverstandort: Frankfurt, DE.
+● PostHog: (Produktanalyse, Open Source), Serverstandort: EU-Hosting. (Deaktivierbar für
+Organisationskunden, je nach AVV).
+Ihre Kontrolle und unser Versprechen
+Sie behalten stets die volle Kontrolle über Ihre Daten.
+● Vollständige Löschung: Von Ihnen gelöschte Daten werden unwiderruflich und vollständig
+von allen unseren Systemen entfernt.
+● Anonymisierungsoptionen: Funktionen wie die Sprechererkennung können von Ihnen
+deaktiviert werden, um die Rückverfolgbarkeit zu reduzieren.
+Diese Vorkehrungen gewährleisten, in Kombination mit den detaillierten Maßnahmen dieses
+Dokuments, eine sichere und DSGVO-konforme Verarbeitung Ihrer Daten.
+Organisationsspezifische Anpassungen: Für Unternehmens- und Organisationskunden bieten
+wir maßgeschneiderte Datenschutzlösungen, einschließlich automatischer Löschfristen und
+angepasster Datenverarbeitungsprozesse gemäß individuellen Compliance-Anforderungen.
+
+2. Zweck des Dokuments
+Dieses Dokument beschreibt die technischen und organisatorischen Maßnahmen (TOMs) der
+Memoro GmbH gemäß Art. 32 DSGVO. Ziel ist die Gewährleistung eines angemessenen
+Schutzniveaus für personenbezogene Daten.
+Dieses Dokument ist Bestandteil des Verzeichnisses von Verarbeitungstätigkeiten gemäß Art.
+30 DSGVO.
+
+3. Geltungsbereich
+Die hier dokumentierten Maßnahmen gelten für:
+
+- Alle IT-Systeme und Prozesse der Memoro GmbH
+- Verarbeitung personenbezogener Daten innerhalb der Memoro App
+- Die gesamte Infrastruktur, einschließlich externer Dienstleister
+
+4. Rechtsgrundlagen
+Die TOMs basieren auf Art. 32 DSGVO (Sicherheit der Verarbeitung), der angemessene
+Schutzmaßnahmen sowie deren regelmäßige Überprüfung fordert.
+
+5. Beschreibung der Technischen Maßnahmen (IT-Sicherheit)
+Gemäß Art. 
32 DSGVO setzt die Memoro GmbH folgende technische Maßnahmen um, um die
+Sicherheit und den Schutz personenbezogener Daten zu gewährleisten.
+5.1 Zugangskontrolle und Authentifizierung
+5.1.1 Zugang zu Cloud-Diensten
+
+- Systeme und Daten werden bevorzugt über europäische Cloud-Dienstleister oder
+solche mit Sitz in Ländern mit Angemessenheitsbeschluss verarbeitet. Bei Dienstleistern in
+Drittländern ohne Angemessenheitsbeschluss werden geeignete Garantien gemäß Art. 46
+DSGVO (z.B. Standardvertragsklauseln, Zertifizierungen nach dem Data Privacy
+Framework) implementiert, wie in Abschnitt 10 detailliert.
+- Jeder Zugriff erfolgt über individuell vergebene Benutzerkonten; es gibt keine
+gemeinsam genutzten Logins.
+- Multi-Faktor-Authentifizierung (MFA) ist für alle kritischen Systeme aktiviert.
+- Verwaltung von Zugangsdaten über einen Passwort-Manager (1Password) mit
+Sicherheitsüberwachung.
+5.1.2 Sicherheitsvorgaben für Endgeräte
+- Die Nutzung eines Passwort-Managers (1Password) ist verpflichtend für alle
+Mitarbeitenden.
+- Die Watchtower-Funktion von 1Password erkennt unsichere oder kompromittierte
+Passwörter und alarmiert Nutzer.
+- Alle Geräte müssen aktuelle Sicherheitsupdates installiert haben.
+- Firewall und Virenschutz-Software sind aktiviert.
+5.1.3 Berechtigungsmanagement
+- Nutzerzugriffe werden über Google Workspace verwaltet.
+- Berechtigungen werden nach dem Need-to-Know-Prinzip vergeben und regelmäßig
+überprüft.
+- Kritische Änderungen an Berechtigungen erfolgen dokumentiert und mit Zustimmung
+einer berechtigten Person.
+5.2 Datenverschlüsselung
+- Transportverschlüsselung (TLS 1.2/1.3) für alle Cloud-Kommunikationen.
+- AES-256-Verschlüsselung für gespeicherte Daten innerhalb der eingesetzten
+Cloud-Dienste.
+- Ende-zu-Ende-Verschlüsselung für besonders sensible Daten innerhalb der Systeme.
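Die in Abschnitt 5.2 genannte Mindestanforderung (TLS 1.2/1.3 für alle Cloud-Kommunikation) lässt sich clientseitig technisch erzwingen. Das folgende Beispiel ist eine minimale Skizze und kein Bestandteil der tatsächlichen Memoro-Konfiguration; angenommen wird lediglich ein Python-Client mit dem Standardbibliotheksmodul `ssl`:

```python
import ssl

# Minimale Skizze: TLS-Kontext, der Verbindungen unterhalb von TLS 1.2 ablehnt,
# wie in Abschnitt 5.2 als Mindeststandard beschrieben.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() lässt Zertifikats- und Hostname-Prüfung aktiviert.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

print("Mindestversion erzwungen:", ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)
```

Ein solcher Kontext kann z.B. an `http.client.HTTPSConnection(..., context=ctx)` übergeben werden; Verbindungen zu Endpunkten, die nur TLS 1.0/1.1 anbieten, schlagen dann mit einem Handshake-Fehler fehl.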
+ 5.3 Netzwerksicherheit
+- Zugriff auf Cloud-Dienste wird durch Sicherheitsrichtlinien gesteuert, um unbefugten
+Zugriff zu verhindern.
+- Bestimmte sicherheitskritische Dienste erfordern zusätzliche Freigaben, um Zugriff
+auf sensible Daten zu ermöglichen.
+- Sicherheitsmechanismen der Cloud-Plattformen werden zur Erkennung verdächtiger
+Aktivitäten genutzt.
+5.4 Backup- und Notfallmanagement
+5.4.1 3-2-1-Backup-Strategie
+- Tägliche Backups aller personenbezogenen Daten mit Versionierung und
+Zugriffsbeschränkungen.
+- Daten werden gemäß der 3-2-1-Backup-Strategie gesichert:
+- 3 Kopien der Daten werden an unterschiedlichen Speicherorten vorgehalten.
+- 2 verschiedene Medientypen werden für die Sicherung verwendet (z. B.
+Cloud-Storage & verschlüsselte Offline-Backups).
+- 1 Backup befindet sich an einem separaten Standort zur Ausfallsicherheit.
+- Backups sind verschlüsselt und nur für berechtigte Personen zugänglich.
+5.4.2 Notfall- und Wiederherstellungsmaßnahmen
+
+- Alle Cloud-Dienste sind auf Hochverfügbarkeit ausgelegt, um Ausfälle und
+Datenverluste zu minimieren.
+- Die Datenwiederherstellung wird regelmäßig getestet, um die Integrität der Backups zu
+gewährleisten.
+- Notfallprozesse sind dokumentiert und werden regelmäßig aktualisiert.
+5.5 Protokollierung und Monitoring
+5.5.1 Logging von Zugriffen und Änderungen
+- Alle Zugriffe auf Cloud-Dienste werden automatisch protokolliert (Google Workspace
+Security Logs).
+- Verdächtige Aktivitäten werden erkannt und gemeldet.
+- Revisionssichere Speicherung von Audit-Logs, um Zugriffe nachverfolgen zu können.
+5.5.2 Sicherheitsüberwachung
+- Google Security- und Überwachungsfunktionen werden zur Erkennung von
+Sicherheitsbedrohungen eingesetzt.
+- Automatische Benachrichtigungen bei sicherheitskritischen Ereignissen.
+- Regelmäßige Überprüfung der Protokolle durch den Datenschutzbeauftragten.
+
+6. 
Organisatorische Maßnahmen (Prozesse & Richtlinien)
+Die Memoro GmbH setzt neben technischen Schutzmaßnahmen auch organisatorische
+Maßnahmen um, um die Sicherheit und den Datenschutz der verarbeiteten personenbezogenen
+Daten zu gewährleisten.
+6.1 Datenschutzrichtlinien und Schulungen
+
+- Interne Datenschutzrichtlinien sind dokumentiert und für alle Mitarbeitenden verbindlich.
+- Alle Mitarbeitenden müssen eine Verpflichtung zur Vertraulichkeit und zum
+Datenschutz gemäß Art. 5 und Art. 32 DSGVO unterzeichnen.
+- Regelmäßige Datenschutz- und IT-Sicherheitsschulungen für alle Mitarbeitenden.
+- Datenschutz-Themen sind Bestandteil des Onboardings für neue Mitarbeitende.
+6.2 Verwaltung von Benutzerrechten
+- Zugriff auf personenbezogene Daten erfolgt nach dem Prinzip der minimalen
+Berechtigungen (Need-to-Know-Prinzip).
+- Rechte werden dokumentiert und regelmäßig überprüft.
+- Kritische Änderungen an Zugriffsrechten erfolgen nur mit dokumentierter Genehmigung.
+- Alle Zugriffe werden protokolliert und regelmäßig auf unbefugte Aktivitäten überprüft.
+6.3 Umgang mit externen Dienstleistern
+- Die Memoro GmbH nutzt externe Dienstleister und Cloud-Anbieter, um bestimmte
+Verarbeitungstätigkeiten durchzuführen.
+- Eine Liste der relevanten externen Dienstleister wird geführt und regelmäßig aktualisiert.
+- Externe Dienstleister mit Zugriff auf personenbezogene Daten müssen
+DSGVO-konform sein und die gesetzlichen Anforderungen erfüllen.
+- Die datenschutzrechtliche Absicherung erfolgt je nach Anbieter durch:
+- Einen Auftragsverarbeitungsvertrag (AVV) gemäß Art. 28 DSGVO, sofern der
+Dienstleister als Auftragsverarbeiter tätig ist.
+- Datenverarbeitungsbedingungen (DPA), falls der Anbieter eine standardisierte
+Regelung zur DSGVO-Compliance bereitstellt.
+- Standardvertragsklauseln (SCC) gemäß Art. 
46 DSGVO, falls personenbezogene + Daten in ein Drittland ohne Angemessenheitsbeschluss übermittelt werden. +- Transfer Impact Assessments (TIA) zur Risikobewertung bei Datenübermittlungen + in Drittländer. + +- Vor der Beauftragung neuer Dienstleister wird geprüft, ob eine + Datenschutz-Folgenabschätzung (DSFA) gemäß Art. 35 DSGVO erforderlich ist. +- Datenübermittlungen an Dritte erfolgen nur auf einer rechtlichen Grundlage, z. B. + Einwilligung, vertragliche Notwendigkeit oder gesetzliche Verpflichtung. + 6.3.2 Nutzung von Payment-/Abo-Dienstleistern +- Für die Abwicklung von In-App-Käufen und Abo-Verwaltung wird ein externer Dienstleister + genutzt. +- Ein Auftragsverarbeitungsvertrag (AVV) gemäß Art. 28 DSGVO ist vorhanden. +- Bei Übermittlungen in Drittländer (USA) werden Standardvertragsklauseln (SCC) + eingesetzt. +- Die Datenübertragung erfolgt TLS-verschlüsselt, und es findet nur eine + pseudonymisierte bzw. minimierte Übermittlung relevanter Abodaten statt. +- Das Einhalten von Sicherheits- und Compliance-Standards (z. B. SOC 2, ISO 27001) + wird regelmäßig überprüft. + 6.4 Datenschutz-Folgenabschätzung (DSFA) +- Für Verarbeitungen mit hohem Risiko für die Rechte und Freiheiten betroffener Personen + wird eine Datenschutz-Folgenabschätzung gemäß Art. 35 DSGVO durchgeführt. +- Die DSFA erfolgt nach einer standardisierten internen Bewertung und wird in kritischen + Fällen mit der Aufsichtsbehörde abgestimmt. + + 6.5 Grundsätze zur Datenaufbewahrung und Löschkonzept + Die Memoro GmbH folgt strikt den Grundsätzen der Datenminimierung und Speicherbegrenzung + gemäß Art. 5 Abs. 1 lit. e DSGVO. Personenbezogene Daten werden nur so lange in einer Form + gespeichert, die die Identifizierung der betroffenen Personen ermöglicht, wie es für die Zwecke, für + die sie verarbeitet werden, erforderlich ist. + Dieses Kapitel beschreibt das verbindliche Löschkonzept der Memoro GmbH und ersetzt eine + separate Richtlinie zur Datenaufbewahrung. 
Es legt die konkreten Fristen für die Löschung der
+verschiedenen Datenkategorien fest.
+6.5.1 Reguläre Speicher- und Löschfristen
+Die folgenden Fristen gelten für die in den aktiven Systemen der Memoro GmbH verarbeiteten
+Daten:
+
+| Datenkategorie | Zweck der Verarbeitung | Speicherdauer & Löschfrist |
+| --- | --- | --- |
+| Inhaltsdaten (Audioaufnahmen, Transkripte & Analysen/"Memories") | Bereitstellung der Kernfunktionalität der Memoro App; dauerhafter Zugriff für den Nutzer auf seine Inhalte. | Gespeichert, solange das Nutzerkonto besteht. Die Daten werden unverzüglich gelöscht, wenn der Nutzer die jeweilige Memo oder seinen gesamten Account löscht. |
+| Account-Daten (z.B. E-Mail-Adresse, Name, Passwort-Hash) | Authentifizierung, Verwaltung des Nutzerkontos und Kommunikation mit dem Nutzer. | Gespeichert, solange das Nutzerkonto besteht. Bei Löschanfrage des Accounts werden diese Daten innerhalb von 30 Tagen aus den aktiven Systemen entfernt. |
+| Technische Protokolldaten (z.B. IP-Adressen, Server-Logs) | Gewährleistung der Sicherheit, Stabilität und Funktionsfähigkeit der Infrastruktur; Erkennung und Abwehr von Angriffen. | Die Speicherung erfolgt für maximal 90 Tage. Danach werden die Daten automatisch gelöscht oder vollständig anonymisiert. |
+| Produktanalysedaten (via PostHog) | Verbesserung der Nutzererfahrung und Optimierung der App-Funktionen. | Die Daten werden pseudonymisiert erfasst und nach spätestens 12 Monaten automatisch gelöscht. |
+| Zahlungs- & Vertragsdaten (bei Abonnements) | Abwicklung von Abonnements; Erfüllung vertraglicher Pflichten. | Daten zum Abonnementstatus werden bis zur Beendigung des Vertragsverhältnisses gespeichert. |
+
+6.5.1.1 Sonderregelungen für Organisationskunden
+Für Organisationskunden können im Rahmen des Auftragsverarbeitungsvertrags (AVV)
+abweichende Löschfristen vereinbart werden. 
Diese Sonderregelungen haben Vorrang vor den
+Standardfristen und können umfassen:
+
+Automatische Löschung nach definierten Zeiträumen:
+
+- Organisationen können festlegen, dass alle Inhaltsdaten (Audioaufnahmen, Transkripte und
+Analysen) ihrer Mitarbeiter automatisch nach einem vereinbarten Zeitraum gelöscht werden
+- Die Löschung erfolgt unwiderruflich und umfasst alle zugehörigen Daten des jeweiligen
+Memos
+- Mitarbeiter werden vorab über die organisationsspezifischen Löschfristen informiert
+Die konkreten Löschfristen und -modalitäten werden im jeweiligen AVV dokumentiert und technisch
+implementiert.
+
+6.5.2 Gesetzliche Aufbewahrungspflichten
+Ungeachtet der oben genannten Löschfristen können gesetzliche Aufbewahrungspflichten eine
+längere Speicherung bestimmter Daten erfordern. Insbesondere kaufmännische oder
+steuerrechtliche Vorgaben (z.B. aus dem Handelsgesetzbuch (HGB) oder der Abgabenordnung
+(AO)) können eine Aufbewahrung von rechnungsrelevanten Unterlagen von bis zu 10 Jahren
+vorschreiben. Sollten Daten der Memoro GmbH solchen Pflichten unterliegen, werden sie für die
+Dauer der gesetzlichen Frist aufbewahrt. Nach Ablauf der Frist erfolgt die Löschung. Diese Daten
+werden für die Dauer der Aufbewahrung für andere Zwecke gesperrt.
+6.5.3 Umgang mit Daten in Backups
+Zur Gewährleistung der Datensicherheit und für den Notfall (Disaster Recovery) werden
+regelmäßig verschlüsselte Backups unserer Systeme erstellt. Für Daten in Backups gilt folgendes
+spezielles Löschverfahren:
+
+1. Keine selektive Löschung: Aus technischen Gründen ist es nicht möglich, einzelne
+Datensätze aus bestehenden Backup-Archiven zu entfernen.
+2. Sperrung im Live-System: Sobald ein Nutzer seine Daten im aktiven System löscht, sind
+diese nicht mehr zugänglich und werden nicht mehr verarbeitet.
+3. 
Endgültige Löschung durch Überschreibung: Die in Backups enthaltenen, zur Löschung
+markierten Daten werden im Zuge des regulären Backup-Zyklus endgültig und unwiderruflich
+überschrieben. Die maximale Vorhaltezeit für Backups beträgt 30 Tage.
+4. Zweckbindung: Während ihrer Speicherfrist werden Backups ausschließlich für den Zweck
+der Datensicherheit und Wiederherstellung vorgehalten und nicht für operative
+Geschäftsprozesse genutzt.
+6.5.4 Ausübung des Rechts auf Löschung
+Jeder Nutzer kann sein Recht auf Löschung ("Recht auf Vergessenwerden" gemäß Art. 17
+DSGVO) jederzeit ausüben. Die Löschung einzelner Memos oder des gesamten Accounts kann
+direkt in den Einstellungen der Memoro App vorgenommen werden. Für darüber hinausgehende
+Löschanfragen steht unser Datenschutzbeauftragter zur Verfügung. Die Löschung erfolgt
+fristgerecht gemäß den oben beschriebenen Prozessen.
+6.6 Umgang mit Datenschutzverletzungen
+
+- Datenschutzverletzungen werden intern dokumentiert und gemäß Art. 33 DSGVO
+bewertet.
+- Falls erforderlich, erfolgt eine Meldung an die zuständige Aufsichtsbehörde innerhalb
+von 72 Stunden.
+- Betroffene Personen werden gemäß Art. 34 DSGVO informiert, wenn ein hohes Risiko
+besteht.
+- Vorfallsmanagement-Prozesse sind definiert und beinhalten Eskalationsstufen für
+kritische Sicherheitsereignisse.
+6.7 Nachweise und Protokolle
+- Die Liste der eingesetzten IT-Systeme und Verarbeitungsprozesse ist dokumentiert.
+- Schulungsnachweise und Verpflichtungserklärungen werden revisionssicher archiviert.
+- Zugriffsprotokolle und Audit-Logs werden regelmäßig überprüft.
+- Interne und externe Datenschutzprüfungen sind dokumentiert und erfolgen regelmäßig.
+
+7. Nachweise und Protokolle
+Die Memoro GmbH dokumentiert alle relevanten Datenschutzmaßnahmen und führt regelmäßige
+Prüfungen durch, um die Einhaltung der DSGVO nachweisen zu können. 
+ 7.1 Verzeichnis von Verarbeitungstätigkeiten
+
+- Ein Verzeichnis aller Verarbeitungstätigkeiten gemäß Art. 30 DSGVO wird geführt und
+regelmäßig aktualisiert.
+- Dieses enthält insbesondere:
+- Zweck und Rechtsgrundlagen der Verarbeitung
+- Kategorien betroffener Personen und personenbezogener Daten
+- Empfänger von Daten, einschließlich externer Dienstleister
+- Technische und organisatorische Maßnahmen (TOMs)
+
+7.2 Schulungen und Verpflichtungserklärungen
+
+- Alle Mitarbeitenden werden regelmäßig zu Datenschutz und IT-Sicherheit geschult.
+- Die Schulungsinhalte und Teilnehmerlisten werden dokumentiert.
+- Alle Mitarbeitenden müssen eine Verpflichtung zur Vertraulichkeit gemäß Art. 5 und 32
+DSGVO unterzeichnen.
+7.3 Zugriffskontrollen und Berechtigungsmanagement
+- Vergabe und Änderungen von Zugriffsrechten werden dokumentiert.
+- Regelmäßige Überprüfung der Zugriffsrechte, um nicht mehr benötigte Berechtigungen
+zu entfernen.
+- Protokollierung von administrativen Änderungen und sicherheitskritischen Zugriffen.
+7.4 Protokollierung von Zugriffen und Änderungen
+- Zugriffsprotokolle auf Cloud-Dienste (Google Workspace Security Logs, Audit-Logs von
+Cloud-Anbietern) werden revisionssicher gespeichert.
+- Alle sicherheitskritischen Änderungen an Daten, Zugriffsrechten und Einstellungen werden
+automatisch protokolliert.
+- Protokolldaten werden regelmäßig ausgewertet, um verdächtige Aktivitäten frühzeitig zu
+erkennen.
+
+7.5 Datenschutz-Audits und Prüfberichte
+
+- Interne Datenschutzprüfungen werden regelmäßig durchgeführt und dokumentiert.
+- Falls erforderlich, werden externe Datenschutzprüfungen oder Zertifizierungen in
+Betracht gezogen.
+- Ergebnisse von Audits werden mit Maßnahmen zur kontinuierlichen Verbesserung
+verbunden.
+7.6 Umgang mit Datenschutzvorfällen
+- Dokumentation aller Datenschutzvorfälle mit Risikobewertung und ergriffenen
+Gegenmaßnahmen. 
+- Falls eine Meldung an die Aufsichtsbehörde gemäß Art. 33 DSGVO erforderlich ist,
+  erfolgt dies innerhalb der vorgeschriebenen 72-Stunden-Frist.
+- Betroffene Personen werden gemäß Art. 34 DSGVO informiert, falls ein hohes Risiko für
+  ihre Rechte und Freiheiten besteht.
+
+7.7 Speicherfristen für Nachweise und Protokolle
+
+- Schulungsnachweise und Verpflichtungserklärungen werden mindestens 3 Jahre
+  aufbewahrt.
+- Zugriffsprotokolle und sicherheitskritische Logs werden für mindestens 12 Monate
+  gespeichert, sofern keine längeren Speicherfristen erforderlich sind.
+- Datenschutzprüfungen und Audit-Berichte werden mindestens 5 Jahre archiviert.
+
+8. Regelmäßige Aktualisierung und Kontrolle
+
+Die Memoro GmbH stellt sicher, dass alle technischen und organisatorischen Maßnahmen (TOMs)
+regelmäßig überprüft, aktualisiert und an neue rechtliche und technische Anforderungen angepasst
+werden.
+
+8.1 Verantwortlichkeit für die Wartung des Dokuments
+
+- Die Verantwortung für die Aktualisierung der TOM-Dokumentation liegt bei der
+  Geschäftsführung und dem Datenschutzbeauftragten.
+- Anpassungen erfolgen in Abstimmung mit IT-Sicherheit, Compliance und relevanten
+  Fachabteilungen.
+
+8.2 Regelmäßige Überprüfung der TOMs
+
+- Jährliche Kontrolle und Aktualisierung der TOMs, um neue gesetzliche, technische oder
+  organisatorische Änderungen zu berücksichtigen.
+- Zusätzliche Überprüfung erfolgt bei:
+  - Änderungen in der IT-Infrastruktur oder den eingesetzten Cloud-Diensten.
+  - Änderungen in den Verarbeitungstätigkeiten oder den verarbeiteten Daten.
+  - Relevanten Gesetzesänderungen oder neuen regulatorischen Anforderungen.
+  - Ergebnissen interner oder externer Datenschutzprüfungen.
+
+8.3 Audit- und Kontrollmechanismen
+
+- Interne Datenschutz- und Sicherheitsaudits werden mindestens einmal jährlich
+  durchgeführt.
+- Externe Datenschutzprüfungen oder Zertifizierungen werden nach Bedarf in Betracht + gezogen. +- Ergebnisse aus Audits und Prüfungen werden dokumentiert und für zukünftige + Optimierungen genutzt. + 8.4 Änderungsmanagement und Dokumentation +- Jede Änderung an den TOMs wird dokumentiert und in einer Änderungshistorie + festgehalten. +- Änderungen werden mit Versionsnummer, Datum und verantwortlicher Person + gekennzeichnet. +- Mitarbeitende werden über wesentliche Änderungen informiert, insbesondere wenn + diese Auswirkungen auf Sicherheits- oder Datenschutzmaßnahmen haben. + 8.5 Sensibilisierung und Schulung +- Mitarbeitende werden regelmäßig über aktualisierte Sicherheits- und + Datenschutzmaßnahmen informiert. +- Jährliche Datenschutz- und IT-Sicherheitsschulungen werden aktualisiert, um neue + Maßnahmen oder gesetzliche Änderungen abzubilden. + 8.6 Notfallkontrolle und Reaktionsstrategie +- Unvorhergesehene Sicherheitsvorfälle oder Datenschutzverletzungen lösen eine + sofortige Überprüfung der TOMs aus. +- Falls erforderlich, werden Sofortmaßnahmen zur Risikominimierung implementiert. +- Lessons Learned aus Vorfällen fließen in die Weiterentwicklung der + Sicherheitsmaßnahmen ein. + +9. Risikoanalyse + 9.1 Ziel der Risikoanalyse + Die Risikoanalyse dient dazu, potenzielle Datenschutz- und Sicherheitsrisiken im Zusammenhang + mit der Verarbeitung personenbezogener Daten innerhalb der Memoro GmbH zu identifizieren + und geeignete Maßnahmen zur Risikominimierung zu definieren. Sie erfüllt die Anforderungen aus + Art. 32 DSGVO (Sicherheit der Verarbeitung) sowie Art. 35 DSGVO + (Datenschutz-Folgenabschätzung – DSFA) für risikobehaftete Verarbeitungstätigkeiten. + Memoro verfolgt einen risikobasierten Ansatz, bei dem Bedrohungen anhand ihrer + Eintrittswahrscheinlichkeit und möglichen Auswirkungen bewertet werden. Ziel ist es, + Sicherheitslücken frühzeitig zu erkennen und durch technische und organisatorische Maßnahmen + zu minimieren. 
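Der in Abschnitt 9.1 beschriebene risikobasierte Ansatz (Bewertung von Bedrohungen anhand von Eintrittswahrscheinlichkeit und Auswirkung) lässt sich als einfache Risikomatrix skizzieren. Das folgende Beispiel ist hypothetisch; die Skala (1-5) und die Schwellenwerte für die Einstufung sind frei gewählte Annahmen und nicht Teil der offiziellen Methodik.

```typescript
// Hypothetische Skizze einer Risikomatrix:
// Risikowert = Eintrittswahrscheinlichkeit x Auswirkung (jeweils Skala 1-5).
// Die Schwellenwerte für die Einstufung sind Annahmen zur Veranschaulichung.
type Risikostufe = "niedrig" | "mittel" | "hoch";

function bewerteRisiko(wahrscheinlichkeit: number, auswirkung: number): Risikostufe {
  const wert = wahrscheinlichkeit * auswirkung; // Wertebereich 1..25
  if (wert >= 15) return "hoch";   // sofortige Gegenmaßnahmen erforderlich
  if (wert >= 6) return "mittel";  // Maßnahmen planen und überwachen
  return "niedrig";                // akzeptiertes Restrisiko, dokumentieren
}
```

Eine solche Einstufung kann als Grundlage dienen, um die in Abschnitt 9.2 gelisteten Risiken zu priorisieren.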
+
+9.2 Identifizierte Risiken und Gegenmaßnahmen
+
+9.2.1 Technische Risiken
+
+| Risiko | Beschreibung | Gegenmaßnahmen |
+| --- | --- | --- |
+| Datenverlust durch Systemausfall | Verlust gespeicherter Daten durch Hard- oder Softwarefehler | Tägliche Backups, 3-2-1-Backup-Strategie, Notfall-Wiederherstellungspläne |
+| Unbefugter Zugriff auf Sprachaufnahmen & Transkripte | Kompromittierung persönlicher Daten durch Angreifer | AES-256-Verschlüsselung, Zugriff nur für autorisierte Nutzer, Zero-Trust-Sicherheitsmodell |
+| Hackerangriffe (DDoS, Brute-Force) | Versuch, Memoro-Dienste durch Überlastung oder Hacking zu stören | DDoS-Schutz, Firewalls mit Intrusion Detection System (IDS), Rate Limiting |
+| Missbrauch von API-Schnittstellen | Exploits durch unbefugte API-Nutzung | OAuth-2.0-Authentifizierung, Rate Limits für API-Zugriffe, regelmäßige Sicherheitsüberprüfungen |
+| Externe Cloud-Sicherheitsrisiken | Datenleck oder Ausfall durch Anbieter wie Google Cloud oder Azure | Cloud-Sicherheitsüberprüfung (SOC 2, ISO 27001), regelmäßige Penetrationstests |
+
+9.2.2 Organisatorische Risiken
+
+| Risiko | Beschreibung | Gegenmaßnahmen |
+| --- | --- | --- |
+| Fehlende Datenschutzschulungen | Mitarbeitende könnten unbeabsichtigt Datenschutzverstöße begehen | Regelmäßige Datenschutz- und IT-Sicherheitsschulungen, verpflichtende Zertifizierungen |
+| Fehlende Kontrolle über externe Dienstleister | Risiken durch Cloud-Anbieter, Auftragsverarbeiter oder Dritte | Auftragsverarbeitungsverträge (AVV) mit Dienstleistern, regelmäßige Compliance-Prüfungen |
+
+9.2.3 Datenschutzrechtliche Risiken
+
+| Risiko | Beschreibung | Gegenmaßnahmen |
+| --- | --- | --- |
+| Fehlende oder unklare Einwilligung | Nutzer könnten nicht ausreichend über Datenverarbeitung informiert sein | Transparente Datenschutzerklärung, aktive Einwilligung vor Aufnahme, DSGVO-konformes Opt-in |
+| Fehlende Datenschutz-Folgenabschätzung (DSFA) | KI-gestützte Verarbeitung könnte ein hohes Risiko für Betroffene darstellen | DSFA regelmäßig aktualisieren, unabhängige Datenschutzprüfung einholen |
+| Nichteinhaltung organisationsspezifischer Löschfristen | Versäumnis der automatischen Löschung nach vereinbarten Zeiträumen | Automatisierte Löschprozesse mit täglicher Überprüfung; Monitoring und Alerting bei Löschfehlern; redundante Löschmechanismen; monatliche Compliance-Reports |
+
+9.2.4 Betriebsrisiken
+
+| Risiko | Beschreibung | Gegenmaßnahmen |
+| --- | --- | --- |
+| Skalierungsprobleme bei hohem Nutzeraufkommen | Überlastung der Infrastruktur könnte zu Performance-Einbußen führen | Dynamische Skalierung und Lasttests zur Optimierung |
+| Datenverfügbarkeit & Wiederherstellung | Unzureichende Notfallplanung könnte zu Datenverlust führen | Notfall-Wiederherstellungspläne (Disaster Recovery Plans), jährliche Tests der Backups |
+
+9.3 Kontinuierliche Überprüfung und Verbesserung
+
+Die identifizierten Risiken werden regelmäßig im Rahmen von internen Audits und
+Datenschutzprüfungen evaluiert. Falls erforderlich, werden Maßnahmen zur Risikominimierung
+aktualisiert und dokumentiert.
+Die Risikoanalyse wird:
+
+- Mindestens einmal jährlich aktualisiert oder wenn sich wesentliche Änderungen in den
+  Verarbeitungstätigkeiten ergeben.
+- In Zusammenarbeit mit IT-Sicherheit und Datenschutzbeauftragten überprüft.
+- Als Grundlage für Datenschutz-Folgenabschätzungen (DSFA) nach Art. 35 DSGVO
+  verwendet.
+
+10. Zertifizierungen
+
+10.1 Microsoft Azure Compliance
+
+Die Memoro GmbH nutzt Microsoft Azure als Teil ihrer Infrastruktur. Microsoft Azure stellt
+umfassende technische und organisatorische Sicherheitsmaßnahmen (TOMs) bereit, die eine
+sichere und DSGVO-konforme Verarbeitung personenbezogener Daten gewährleisten.
+ Microsoft verpflichtet sich vertraglich zur Einhaltung der Datenschutz-Grundverordnung + (DSGVO) durch die Microsoft Online Services Terms, die Standardvertragsklauseln (SCC) + und die EU Data Boundary. Weitere Details sind unter Microsoft Azure Compliance abrufbar. + Globale und EU-spezifische Compliance-Zertifizierungen + Microsoft Azure erfüllt eine Vielzahl an internationalen und europäischen Sicherheitsstandards: + +- ISO/IEC 27001 – Informationssicherheits-Managementsystem (ISMS) +- ISO/IEC 27017 – Sicherheitskontrollen für Cloud-Dienste +- ISO/IEC 27018 – Schutz personenbezogener Daten in Public Clouds +- ISO/IEC 27701 – Privacy Information Management System (PIMS) +- ISO 22301 – Business Continuity Management (BCMS) +- ISO 9001 – Qualitätsmanagementsystem +- SOC 1, SOC 2, SOC 3 – Prüfberichte zur Sicherheitsvalidierung +- PCI DSS – Schutz von Zahlungsverkehrsdaten +- CSA STAR – Ergänzende Cloud-Sicherheitsbewertungen +- EU GDPR/DSGVO-konform – Standardvertragsklauseln (SCC), EU Cloud CoC (Scope + Europe zertifiziert) + Datenschutzmaßnahmen in Azure +- Volle Kontrolle über Kundendaten – Microsoft verarbeitet Daten nur gemäß vertraglicher + Vereinbarung. +- Keine Nutzung für Werbezwecke – Kundendaten werden nicht für Marketingzwecke + verwendet. +- Regionale Speicherung – Microsoft bietet die Möglichkeit der Speicherung von Daten + innerhalb der EU gemäß EU Data Boundary. +- Revisionssichere Löschung – Microsoft stellt Tools zur Verfügung, die eine + revisionssichere Löschung gemäß Kundenanforderungen ermöglichen. 
+Verschlüsselung und Datensicherheit
+
+- Datenverschlüsselung im Ruhezustand (at rest):
+  - AES-256-Verschlüsselung
+  - FIPS 140-2-konforme Verschlüsselung
+  - Unterstützung von kundengemanagten Schlüsseln (Azure Key Vault)
+  - Azure Confidential Computing – hardwarebasierte Verschlüsselung zur Isolierung
+    sensibler Daten
+- Datenverschlüsselung während der Übertragung (in transit):
+  - TLS 1.2/1.3 für sichere Datenkommunikation
+  - IEEE 802.1AE MAC Security zur Netzwerksicherheit
+
+Zugriffskontrollen und Berechtigungen
+
+- Azure Role-Based Access Control (RBAC) für granulare Berechtigungen
+- Azure Information Protection (AIP) für den Schutz sensibler Daten
+- Multi-Faktor-Authentifizierung (MFA) für privilegierte Konten
+- Zero Trust Security Model – Zugriffskontrolle durch kontinuierliche Authentifizierung und
+  Geräteintegritätsprüfungen
+
+Microsoft Azure KI-Compliance
+
+Microsoft aktualisiert regelmäßig seine Compliance-Maßnahmen, um neue regulatorische
+Anforderungen wie den EU AI Act zu erfüllen.
+
+Externe regulatorische Vorgaben
+
+- EU AI Act (ab 2025):
+  - Transparenzpflichten (z. B. Kennzeichnung KI-generierter Inhalte)
+  - Strenge Dokumentationsanforderungen für Hochrisiko-KI
+  - Verbot der automatisierten Emotionserkennung in Bildung/Arbeitsplatz sowie
+    Echtzeit-Fernbiometrie durch Strafverfolgung (Art. 5 EU AI Act)
+- ISO 42001:2023 (KI-Managementsysteme):
+  - Risikobewertung, Ethikrichtlinien und Nachvollziehbarkeit von KI-Entscheidungen
+
+Microsoft-interne KI-Governance
+
+- Responsible AI Standard (Version 2)
+- Bias-Erkennung und Fairness-Prüfung in Azure Machine Learning
+- Explainability-APIs für transparente Entscheidungsfindung
+- Content Safety zur Verhinderung von Missbrauch (z. B. Schutz vor Prompt Injection)
+- Azure OpenAI EU-Region – der Azure OpenAI Service wird in EU-Rechenzentren
+  gemäß EU Data Boundary betrieben.
+Notfallmanagement und Incident Response
+
+Microsoft implementiert ein strukturiertes Incident Response Model:
+
+1. Erkennung einer Sicherheitsverletzung
+2. Analyse der Bedrohung und Bewertung der Auswirkungen
+3. Reaktionsmaßnahmen, um das Risiko zu minimieren
+4. Stabilisierung der betroffenen Systeme
+5. Schließung des Vorfalls mit Lessons Learned
+
+Microsoft verpflichtet sich, Kunden innerhalb von 72 Stunden über Datenschutzverletzungen zu
+informieren. Microsoft bietet forensische Unterstützung, aber die finale DSGVO-Meldepflicht
+gemäß Art. 33 DSGVO bleibt beim Kunden (Memoro GmbH).
+
+Gemeinsame Verantwortung für Datenschutz und Sicherheit
+
+Das Shared Responsibility Model definiert klare Abgrenzungen zwischen Microsoft und Kunden:
+
+| Microsoft-Verantwortung | Kundenverantwortung |
+| --- | --- |
+| Sichere Cloud-Infrastruktur | Konfiguration und Absicherung der Cloud-Umgebung |
+| Netzwerksicherheit & Compliance | Zugangskontrollen und Datenverschlüsselung |
+| Zertifizierungen & Audit-Berichte | Eigenständige Prüfung von Compliance-Anforderungen |
+
+10.2 Google Cloud Compliance
+
+Die Memoro GmbH nutzt Google Cloud als Teil ihrer Infrastruktur. Google Cloud stellt
+umfangreiche technische und organisatorische Sicherheitsmaßnahmen (TOMs) bereit, um
+Datenschutz, Compliance und regulatorische Anforderungen in der EU zu gewährleisten.
+Google verpflichtet sich zur Einhaltung der Datenschutz-Grundverordnung (DSGVO) durch
+Standardvertragsklauseln (SCCs) sowie technische Maßnahmen wie EU Data Boundaries für
+bestimmte Dienste. Weitere Details sind unter Google Cloud Compliance abrufbar.
+Globale und EU-spezifische Compliance-Zertifizierungen
+
+Google Cloud erfüllt folgende internationale und europäische Sicherheitsstandards:
+
+- ISO/IEC 27001 – Informationssicherheits-Managementsystem (ISMS)
+- ISO/IEC 27017 – Cloud-spezifische Sicherheitskontrollen
+- ISO/IEC 27018 – Schutz personenbezogener Daten in Public Clouds
+- ISO/IEC 27701 – Datenschutzmanagementsystem für DSGVO-Anforderungen
+- ISO 9001 – Qualitätsmanagementsystem
+- SOC 1, SOC 2, SOC 3 – Prüfberichte zur Sicherheitsvalidierung
+- PCI DSS – Schutz von Zahlungsverkehrsdaten
+- CSA STAR – Ergänzende Cloud-Sicherheitsbewertungen
+- EU GDPR/DSGVO-konform – Standardvertragsklauseln (SCC), EU Cloud Code of
+  Conduct
+
+Regionale Zertifizierungen und Compliance
+
+- DSGVO (GDPR) – Google Cloud erfüllt DSGVO-Anforderungen durch
+  Standardvertragsklauseln (SCCs) und optionale Datenlokalisierung über EU Data
+  Boundaries für bestimmte Dienste.
+- EU Cloud Code of Conduct (Scope Europe zertifiziert) – Konformität mit
+  DSGVO-Standards für die Google Cloud Platform.
+- C5:2020 (BSI, Deutschland) – Zertifizierung des Bundesamts für Sicherheit in der
+  Informationstechnik.
+
+EU Data Boundaries und Datenlokalisierung
+
+Google Cloud bietet EU Data Boundaries zur Speicherung und Verarbeitung von Daten innerhalb
+der EU an. Dabei gelten folgende Einschränkungen:
+
+- Unterstützte Dienste: Compute Engine und Vertex AI, die Memoro nutzt, werden
+  vollständig in der EU (Belgien) betrieben.
+- Technische Maßnahmen zur Absicherung:
+  - VPC Service Controls begrenzen Datenbewegungen außerhalb der EU.
+  - IAM-Richtlinien steuern Zugriffskontrollen für Datenverarbeitung in der EU.
+  - Client-seitige Verschlüsselung stellt sicher, dass Daten nur mit
+    kundenspezifischen Schlüsseln verarbeitet werden.
+ Technische Sicherheitsmaßnahmen und Infrastruktur +- Datenverschlüsselung – Alle Daten werden im Ruhezustand (AES-256) und während + der Übertragung (TLS 1.2/1.3) verschlüsselt. +- Zugriffskontrolle – IAM (Identity and Access Management), VPC Service Controls und + Firebase Security Rules gewährleisten granulare Berechtigungsstrukturen. +- Audit-Logs und Monitoring – Google Cloud Logging/Stackdriver und das Security + Command Center ermöglichen eine kontinuierliche Sicherheitsüberwachung. + 10.4 Supabase Compliance und Datenschutzmaßnahmen + Die Memoro GmbH nutzt Supabase als Backend-Plattform, die Datenbankdienste, + Authentifizierung, Speicherlösungen und Serverless-Funktionen bereitstellt. Supabase verpflichtet + sich zur Einhaltung der Datenschutz-Grundverordnung (DSGVO) und anderer relevanter + Datenschutzgesetze. Die Verarbeitung personenbezogener Daten durch Supabase ist durch ein + Data Processing Addendum (DPA), Version vom 02. Juni 2025, geregelt, das + Standardvertragsklauseln (SCCs) für internationale Datentransfers beinhaltet. + Datenübertragung und Speicherorte: Der primäre Speicherort für alle Kundendaten der Memoro + GmbH (insbesondere Audioaufnahmen, Transkripte, Analysen und Nutzer-Accountdaten) befindet + sich ausschließlich in einem Rechenzentrum in Frankfurt, Deutschland. + Mögliche Datenübermittlung in Drittländer (USA): Obwohl unsere Kundendaten die EU + physisch nicht verlassen, ist Supabase ein Unternehmen mit Sitz in den USA. Daher kann in + folgenden, eng begrenzten Fällen ein Zugriff auf Daten aus den USA oder eine Übermittlung von + Metadaten stattfinden: + ● Wartungs- und Supportarbeiten: Wenn technische Unterstützung durch + Supabase-Mitarbeiter erforderlich ist, die ihren Sitz außerhalb der EU (z.B. in den USA) + haben, kann ein Zugriff auf die in der EU gespeicherten Daten notwendig sein. Dieser + Zugriff erfolgt nach dem "Need-to-know"-Prinzip und ist streng reglementiert. 
+● Nutzung von Subdienstleistern durch Supabase: Supabase selbst nutzt
+  Unterauftragsverarbeiter (z. B. für Monitoring oder Support-Tickets), die ihren Sitz in den
+  USA haben könnten. Hierbei werden in der Regel nur Metadaten oder
+  Support-Kommunikation verarbeitet, nicht jedoch die Kern-Nutzerdaten (Aufnahmen,
+  Transkripte).
+● Anforderungen durch US-Behörden (z. B. via CLOUD Act): Ein Restrisiko, dass
+  US-Behörden Zugriff auf Daten von US-Unternehmen verlangen, kann rechtlich nicht
+  vollständig ausgeschlossen werden.
+
+Rechtsgrundlage und Schutzmaßnahmen: Für all diese potenziellen Übermittlungsszenarien
+dient der mit Supabase geschlossene Auftragsverarbeitungsvertrag (AVV) inklusive der
+EU-Standardvertragsklauseln (SCCs) als rechtliche Grundlage. Diese Maßnahmen, kombiniert
+mit den technischen Vorkehrungen von Supabase (z. B. Verschlüsselung), stellen sicher, dass das
+Datenschutzniveau dem der EU entspricht.
+
+Wichtige Compliance-Aspekte und Zertifizierungen:
+
+- SOC 2 Type 2: Supabase ist SOC 2 Typ 2 konform, was die Sicherheit, Verfügbarkeit,
+  Verarbeitungsintegrität, Vertraulichkeit und den Datenschutz der Kundendaten durch
+  unabhängige Prüfung bestätigt.
+- HIPAA: Supabase ist HIPAA-konform. Die Speicherung von geschützten
+  Gesundheitsinformationen (Protected Health Information, PHI) ist auf der gehosteten
+  Plattform möglich, sofern ein Business Associate Agreement (BAA) mit Supabase
+  abgeschlossen wird und die Memoro GmbH ihre HIPAA-Verpflichtungen im Rahmen des
+  Shared-Responsibility-Modells erfüllt.
+- Data Processing Addendum (DPA): Ein DPA (Version 02. Juni 2025) ist vorhanden und
+  regelt die Auftragsverarbeitung. Supabase agiert hierbei als Auftragsverarbeiter für die
+  Memoro GmbH.
Das DPA enthält detaillierte Angaben zu den technischen und
+  organisatorischen Maßnahmen (Schedule 1), den eingesetzten Unterauftragsverarbeitern
+  (Schedule 3) und den Standardvertragsklauseln (Schedule 2).
+
+Technische und Organisatorische Maßnahmen (gemäß Supabase DPA Schedule 1 und
+Compliance-Informationen):
+
+Datenverschlüsselung:
+
+- Alle Kundendaten werden im Ruhezustand (at rest) mit AES-256 verschlüsselt.
+- Daten während der Übertragung (in transit) werden via TLS (mindestens TLS 1.2)
+  verschlüsselt.
+- Sensible Informationen wie Zugriffstoken und Schlüssel werden auf Anwendungsebene
+  verschlüsselt, bevor sie in der Datenbank gespeichert werden.
+
+Zugangskontrolle und Authentifizierung:
+
+- Multi-Faktor-Authentifizierung (MFA) kann für Supabase-Nutzerkonten aktiviert werden.
+- Interne Zugriffskontrollen basieren auf dem Prinzip der geringsten Rechte (Least Privilege).
+- Starke Passwörter und obligatorische Zwei-Faktor-Authentifizierung (nicht SMS-basiert) für
+  interne Ressourcen bei Supabase.
+- Rollenbasierte Zugriffskontrolle (Role-Based Access Control, RBAC) ermöglicht die
+  granulare Rechtevergabe für Organisationsmitglieder auf spezifische Ressourcen.
+
+Backup- und Notfallmanagement:
+
+- Tägliche Backups für alle kostenpflichtigen Kundendatenbanken.
+- Point-in-Time Recovery (PITR) ist als Add-on für Pro-Plan-Kunden verfügbar, um
+  Datenbanken zu jedem beliebigen Zeitpunkt wiederherzustellen.
+- Backups werden verschlüsselt und auf unabhängigen Systemen gespeichert.
+
+Netzwerksicherheit und DDoS-Schutz:
+
+- Schutz vor Distributed-Denial-of-Service-(DDoS-)Angriffen auf CDN-Ebene durch
+  Cloudflare.
+- Einsatz von fail2ban zur Verhinderung von Brute-Force-Logins.
+- Möglichkeit zur Konfiguration von Ratenbegrenzungen (Rate Limits) für kritische
+  API-Routen und Ausgabenlimits (Spend Caps).
+- Segmentierung von Kundenprojekten und internen Supabase-Diensten durch Firewalls. + Protokollierung und Monitoring (Audit Trails): +- Protokollierung von Nutzeraktionen und Interaktionen. +- Traffic-Flow-Logs. + Vulnerability Management und Tests: +- Regelmäßige Penetrationstests durch externe Sicherheitsexperten. +- Einsatz von Tools wie GitHub, Vanta und Snyk zur Code-Überprüfung auf Schwachstellen. +- Internes Monitoring und ein Breach Response Plan, der regelmäßig getestet wird. + Umgang mit Sicherheitsvorfällen: +- Supabase benachrichtigt die Memoro GmbH ohne unangemessene Verzögerung + (innerhalb von 48 Stunden nach Kenntnisnahme) über Sicherheitsvorfälle (Security + Incidents), die Kundendaten betreffen. + Datenlöschung: +- Nach Vertragsende können Daten innerhalb einer Frist von 30 Tagen durch die Memoro + GmbH exportiert werden. Anschließend werden die Daten durch Supabase gelöscht. + Unterauftragsverarbeiter (Sub-processors): + Supabase setzt verschiedene Unterauftragsverarbeiter für Hosting, Support und andere Dienste + ein. Eine aktuelle Liste ist im DPA (Schedule 3) enthalten. Zu den wichtigsten gehören: +- Supabase Pte. Ltd (Support) +- Amazon Web Services, Inc. (Hosting) +- Google, LLC (Hosting) +- Cloudflare, Inc. (Hosting, CDN) + Die Memoro GmbH wird mindestens 30 Tage im Voraus über geplante Änderungen bei den + Unterauftragsverarbeitern informiert und hat ein Einspruchsrecht. Ein Restrisiko durch den US + CLOUD Act bei Datenverarbeitung durch US-amerikanische Subprozessoren kann trotz SCCs und + weiterer Maßnahmen nicht vollständig ausgeschlossen werden. 
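Die oben genannten Ratenbegrenzungen für kritische API-Routen lassen sich konzeptionell als Zählung von Anfragen pro Zeitfenster skizzieren. Das folgende Beispiel (Fixed-Window-Limiter in TypeScript) ist eine vereinfachte, hypothetische Veranschaulichung des Prinzips und beschreibt nicht die konkrete Supabase- oder Memoro-Implementierung.

```typescript
// Hypothetische Skizze: Fixed-Window-Rate-Limiter pro Client-Schlüssel.
// Pro Zeitfenster (fensterMs) sind maximal maxAnfragen erlaubt.
class RateLimiter {
  private zaehler = new Map<string, { fensterStart: number; anzahl: number }>();

  constructor(private maxAnfragen: number, private fensterMs: number) {}

  erlaube(schluessel: string, jetztMs: number): boolean {
    const eintrag = this.zaehler.get(schluessel);
    // Neues Zeitfenster beginnen, wenn keins existiert oder das alte abgelaufen ist
    if (!eintrag || jetztMs - eintrag.fensterStart >= this.fensterMs) {
      this.zaehler.set(schluessel, { fensterStart: jetztMs, anzahl: 1 });
      return true;
    }
    if (eintrag.anzahl < this.maxAnfragen) {
      eintrag.anzahl++;
      return true;
    }
    return false; // Limit erreicht: Anfrage ablehnen (z. B. mit HTTP 429)
  }
}
```

In der Praxis würde ein solcher Mechanismus serverseitig (z. B. am API-Gateway oder CDN) ansetzen; die Skizze zeigt nur das Zählprinzip.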
+Verantwortungsbewusster Einsatz von Supabase durch die Memoro GmbH:
+
+Die Memoro GmbH setzt Supabase unter Beachtung der Datenschutzgrundsätze ein und ergreift
+folgende Maßnahmen zur Gewährleistung einer sicheren und DSGVO-konformen Nutzung:
+
+- Abschluss und Einhaltung des Data Processing Addendums (DPA) mit Supabase, inklusive
+  der Anwendung der aktuellen Standardvertragsklauseln (SCCs) als Rechtsgrundlage für
+  Datenübertragungen in die USA.
+- Transparente Information der Nutzer über die Datenverarbeitung durch Supabase im
+  Rahmen der Datenschutzerklärung der Memoro App gemäß Art. 13 DSGVO, insbesondere
+  über mögliche Datentransfers in die USA.
+- Konfiguration der Supabase-Dienste nach dem Prinzip der Datenminimierung, sodass nur
+  für die jeweiligen Zwecke notwendige Daten verarbeitet werden.
+- Implementierung starker Authentifizierungsmechanismen und sorgfältige Verwaltung von
+  Zugriffsrechten innerhalb der Supabase-Umgebung.
+- Regelmäßige Überprüfung der rechtlichen und technischen Entwicklungen bezüglich
+  Supabase und Anpassung der eigenen Datenschutzmaßnahmen bei Bedarf.
+- Nutzung der von Supabase bereitgestellten Sicherheitsfeatures wie MFA und rollenbasierte
+  Zugriffskontrollen.
+
+Durch die Kombination aus den von Supabase implementierten Sicherheitsmaßnahmen, den
+vertraglichen Vereinbarungen (DPA und SCCs) und den eigenen organisatorischen und
+technischen Maßnahmen gewährleistet die Memoro GmbH ein hohes Datenschutzniveau beim
+Einsatz von Supabase.
+
+10.5 PostHog Compliance und Datenschutzmaßnahmen
+
+Die Memoro GmbH nutzt PostHog als Plattform für Produktanalysen, um das Nutzerverhalten
+innerhalb der Memoro App zu verstehen, die Nutzererfahrung zu verbessern und die Anwendung
+zu optimieren.
PostHog wird als Alternative zu traditionellen Analysediensten wie Google Analytics
+eingesetzt, da es als Open Source lizenziert ist und komplett in der EU gehostet werden kann. Bei
+der Nutzung von PostHog Cloud agiert PostHog Inc. als Auftragsverarbeiter (Data Processor) und
+die Memoro GmbH als Verantwortlicher (Data Controller) im Sinne der DSGVO.
+
+Datenhosting und internationale Datentransfers:
+
+- Hosting: Die Memoro GmbH nutzt das EU-Hosting in Deutschland (Frankfurt) für PostHog,
+  um Datentransfers in Drittländer zu vermeiden.
+- Datentransfers: Sollten Datenübertragungen in die USA oder andere Drittländer ohne
+  Angemessenheitsbeschluss erforderlich sein (z. B. für bestimmte Support-Prozesse durch
+  PostHog-Mitarbeiter außerhalb der EU), stützt sich PostHog auf Standardvertragsklauseln
+  (SCCs) der EU-Kommission und die Zertifizierung unter dem Data Privacy Framework.
+
+Wichtige Compliance-Aspekte und Zertifizierungen:
+
+- SOC 2 Type II: PostHog ist SOC 2 Typ II zertifiziert (Bericht vom 31. Mai 2024). Dies
+  bestätigt, dass PostHog über robuste Kontrollen in Bezug auf Sicherheit, Verfügbarkeit,
+  Verarbeitungsintegrität, Vertraulichkeit und Datenschutz verfügt. Ein Brückenbrief ist bis
+  zum nächsten Bericht verfügbar.
+- GDPR (DSGVO): PostHog hat seine Architektur, Datenflüsse und Vereinbarungen
+  überprüft, um die DSGVO-Konformität seiner Plattform sicherzustellen. PostHog stellt
+  Kunden umfangreiche Kontrollen zur Minimierung der Erfassung personenbezogener
+  Daten zur Verfügung.
+- CCPA: Für Kunden in Kalifornien agiert PostHog als "Service Provider". Ein CCPA-Zusatz
+  ist Bestandteil der Datenschutzerklärung von PostHog.
+- Data Processing Agreement (DPA): Die Memoro GmbH schließt mit PostHog ein Data
+  Processing Agreement (DPA) ab. PostHog stellt hierfür einen DPA-Generator zur
+  Verfügung.
Dieses DPA regelt die Auftragsverarbeitung und beinhaltet die +Standardvertragsklauseln (SCCs) für internationale Datentransfers. + +- EU-U.S. Data Privacy Framework: PostHog Inc. ist nach dem EU-U.S. Data Privacy + Framework (EU-U.S. DPF), der UK-Erweiterung zum EU-U.S. DPF und dem Swiss-U.S. + Data Privacy Framework zertifiziert. + +Technische und Organisatorische Maßnahmen (TOMs) von PostHog (Auswahl): +PostHog implementiert eine Vielzahl von Sicherheitsmaßnahmen und unterhält detaillierte interne +Richtlinien zur Gewährleistung der Datensicherheit und des Datenschutzes, die auch im Rahmen +der SOC 2-Zertifizierung geprüft werden. Dazu gehören unter anderem: + +- Zugriffskontrolle: Strenge Zugriffskontrollen und Multi-Faktor-Authentifizierung (MFA) für + interne Systeme. +- Datenverschlüsselung: Verschlüsselung von Daten während der Übertragung (in transit) + und im Ruhezustand (at rest). +- Sicherheitsrichtlinien: Umfassende interne Sicherheitsrichtlinien (z.B. Acceptable Use + Policy, Data Protection Policy, Encryption Policy, Incident Response Plan), die auf Anfrage + eingesehen werden können. +- Schwachstellenmanagement: Regelmäßige Penetrationstests (zuletzt April 2024) und + interne Sicherheitsüberprüfungen. +- Incident Response: Ein definierter Incident Response Plan zur Reaktion auf + Sicherheitsvorfälle. +- Datenminimierung: PostHog erfordert nicht zwingend personenbezogene Daten für + Produktanalysen und stellt der Memoro GmbH Werkzeuge zur Verfügung, um die + Erfassung personenbezogener Daten zu minimieren oder zu vermeiden. +- Datenlöschung: PostHog stellt Werkzeuge zur Verfügung, mit denen die Memoro GmbH + Anfragen zur Löschung von Endnutzerdaten nachkommen kann. + Unterauftragsverarbeiter (Sub-processors): + PostHog setzt für die Erbringung seiner Dienste Unterauftragsverarbeiter ein, insbesondere + Amazon Web Services (AWS) für das Cloud-Hosting. 
Eine aktuelle Liste der
+Unterauftragsverarbeiter wird im Rahmen des DPA von PostHog geführt und zur Verfügung
+gestellt.
+
+Verantwortungsbewusster Einsatz von PostHog durch die Memoro GmbH:
+
+Die Memoro GmbH stellt einen datenschutzkonformen Einsatz von PostHog durch folgende
+Maßnahmen sicher:
+
+- Abschluss eines Data Processing Agreements (DPA): Ein DPA inklusive
+  Standardvertragsklauseln (SCCs) wurde am 02.06.2025 mit PostHog abgeschlossen.
+- EU-Hosting: Primäre Auswahl des Datenhostings auf Servern innerhalb der EU
+  (Deutschland), um Datentransfers in Drittländer zu vermeiden.
+- Datenminimierung und Pseudonymisierung: Konfiguration von PostHog mit dem Ziel, keine
+  oder möglichst wenige personenbezogene Daten zu erfassen. Wenn möglich, werden
+  Daten pseudonymisiert. Es werden keine sensiblen Daten im Sinne von Art. 9 DSGVO an
+  PostHog übermittelt.
+- Transparenz: Information der Nutzer der Memoro App über den Einsatz von PostHog zu
+  Analysezwecken in der Datenschutzerklärung gemäß Art. 13 und 14 DSGVO, inklusive der
+  Erläuterung der Zwecke und der Rechte der Betroffenen.
+- Einwilligungsmanagement: Sofern für die konkrete Datenerfassung und -verarbeitung durch
+  PostHog eine Einwilligung erforderlich ist, wird diese vorab von den Nutzern eingeholt.
+- Regelmäßige Überprüfung: Die Konfiguration und Nutzung von PostHog wird regelmäßig
+  auf Einhaltung der Datenschutzvorgaben überprüft.
+- Datenschutzbeauftragter von PostHog: Für datenschutzrelevante Anfragen an PostHog
+  steht deren Datenschutzbeauftragter, Charles Cook (VP Operations), unter
+  privacy@posthog.com zur Verfügung.
+
+Durch diese Maßnahmen gewährleistet die Memoro GmbH, dass der Einsatz von PostHog im
+Einklang mit der DSGVO und den Erwartungen der Nutzer erfolgt und die Sicherheit der
+verarbeiteten Daten sichergestellt ist.
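Die oben beschriebene Datenminimierung lässt sich clientseitig über die Initialisierungsoptionen von posthog-js umsetzen. Die folgende Konfiguration ist eine hypothetische Skizze: Die Optionsnamen entsprechen nach unserem Kenntnisstand posthog-js, der konkrete EU-API-Host und die Auswahl der Optionen sind Annahmen und im Einzelfall gegen die PostHog-Dokumentation zu prüfen.

```typescript
// Hypothetische Skizze einer datensparsamen posthog-js-Konfiguration
// (Optionsnamen gemäß posthog-js, ohne Gewähr; Host ist eine Annahme):
const posthogOptions = {
  api_host: "https://eu.posthog.com",  // EU-Hosting (Frankfurt)
  autocapture: false,                  // keine automatische Ereigniserfassung
  capture_pageview: false,             // Seitenaufrufe nur gezielt erfassen
  disable_session_recording: true,     // keine Session-Aufzeichnungen
  persistence: "memory" as const,      // keine Cookie-/LocalStorage-Persistenz
};

// Die eigentliche Initialisierung würde erst nach erteilter Einwilligung erfolgen:
// posthog.init("<PROJECT_API_KEY>", posthogOptions);
```

So werden nur explizit definierte Events erfasst, was dem Prinzip der Datenminimierung und dem Einwilligungsmanagement entspricht.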
+
+10.5.1 Sonderregelung für Enterprise-/Organisationsvarianten
+
+Für spezielle Organisationsvarianten der Memoro App kann die Produktanalyse via PostHog
+vollständig deaktiviert werden. In diesen Fällen:
+
+● werden keine Daten an PostHog übermittelt,
+● findet keine Nutzungsanalyse statt,
+● gilt diese Regelung für alle Nutzer der jeweiligen Organisationsvariante.
+
+Die konkrete Konfiguration wird im jeweiligen Auftragsverarbeitungsvertrag mit der Organisation
+festgelegt.
+
+11. Verantwortungsbewusstes Handeln der Memoro GmbH
+
+- Regulatorische Entwicklungen werden aktiv verfolgt – Memoro stellt sicher, dass alle
+  eingesetzten Cloud-Dienste mit den neuesten DSGVO-Anforderungen konform sind.
+- Transparente Kommunikation – Nutzer werden umfassend über die Nutzung und
+  Verarbeitung ihrer Daten informiert.
+- Technische Schutzmaßnahmen werden konsequent umgesetzt – zusätzliche
+  Verschlüsselung und Anonymisierung werden in sicherheitsrelevanten Bereichen
+  angewendet.
+
+Diese Maßnahmen gewährleisten eine rechtskonforme und sichere Nutzung von
+Cloud-Diensten innerhalb der Memoro GmbH und stehen im Einklang mit den aktuellen
+Datenschutzanforderungen in Deutschland und der EU.
+
+12. Auftragsverarbeitungsvertrag (AVV)
+
+Die Memoro GmbH stellt für Organisationskunden einen standardisierten
+Auftragsverarbeitungsvertrag gemäß Art. 28 DSGVO zur Verfügung. Dieser regelt:
+
+● die spezifischen Verarbeitungstätigkeiten für die jeweilige Organisation,
+● besondere Konfigurationen (z. B. Deaktivierung von Analysediensten),
+● zusätzliche technische und organisatorische Maßnahmen,
+● Unterauftragsverarbeiter-Regelungen.
+
+Der aktuelle Muster-AVV kann über kontakt@memoro.ai angefordert werden.
+Individuelle Anpassungen werden nach Absprache vorgenommen.
diff --git a/apps/memoro/apps/landing/context/personas.md b/apps/memoro/apps/landing/context/personas.md
new file mode 100644
index 000000000..9621a8079
--- /dev/null
+++ b/apps/memoro/apps/landing/context/personas.md
@@ -0,0 +1,316 @@
+# Persona Creation Guide
+
+## Overview
+
+Personas are detailed, data-driven profiles of our target customers. They help us make product and marketing decisions from the customer's perspective.
+
+## File Structure
+
+### Location
+```
+src/content/_personas/
+├── de/                # German personas
+│   └── [slug].mdx     # e.g. handwerksmeister-thomas.mdx
+└── en/                # English personas
+    └── [slug].mdx
+```
+
+### Naming Convention
+- Lowercase with hyphens
+- Format: `[role]-[first-name].mdx`
+- Examples:
+  - `handwerksmeister-thomas.mdx`
+  - `startup-founder-alex.mdx`
+  - `projektmanagerin-sabine.mdx`
+
+## Persona Structure
+
+### 1. Frontmatter (YAML)
+
+The persona file begins with structured metadata between `---` markers:
+
+```yaml
+---
+# REQUIRED FIELDS
+name: "Thomas Bauer"                     # Full name
+title: "The Digital Master Craftsman"    # Descriptive title
+lang: "de"                               # Language (de/en)
+
+# DEMOGRAPHICS
+demographics:
+  age: 45                                # Age or age range (e.g. "35-45")
+  gender: "male"                         # male/female/diverse/unspecified
+  location: "Augsburg, Germany"          # City, country
+  education: "Master craftsman's certificate"  # Educational qualification
+  income: "€75,000-95,000/year"          # Optional: income range
+  familyStatus: "Married, 3 children"    # Optional: family status
+
+# PROFESSIONAL PROFILE
+professional:
+  jobTitle: "Managing master electrician"
+  company: "Bauer Elektrotechnik GmbH"   # Optional: company name/type
+  companySize: "12 employees"
+  industry: "Skilled trades - electrical engineering"
+  experience: "22 years"
+  responsibilities:                      # Array of main duties
+    - "Business management"
+    - "Customer acquisition"
+  teamSize: "11 employees"               # Optional
+
+# PSYCHOGRAPHICS
+psychographics:
+  personality:                           # Personality traits
+    - "practical"
+    - "quality-conscious"
+  values:                                # Values and convictions
+    - "Craftsmanship quality"
+    - "Reliability"
+  motivations:                           # What drives this person?
+    - "Modernize the business"
+  frustrations:                          # Pain points
+    - "Too much bureaucracy"
+  goals:                                 # Personal/professional goals
+    - "Push digitalization forward"
+
+# BEHAVIOR
+behavior:
+  techSavviness: "intermediate"          # beginner/intermediate/advanced/expert
+  workStyle:                             # Way of working
+    - "On the construction site early"
+  tools:                                 # Tools in use
+    - "WhatsApp Business"
+  communicationPreference:               # Communication channels
+    - "In person"
+    - "WhatsApp"
+  buyingBehavior: "Needs recommendations"
+  informationSources:                    # Where does this person get information?
+    - "Chamber of Crafts"
+
+# MEMORO CONTEXT
+memoroContext:
+  useCase:                               # How would Memoro be used?
+    - "Construction site reports"
+  benefits:                              # Which benefits are relevant?
+    - "Legal certainty"
+  concerns:                              # Concerns/obstacles
+    - "Data protection"
+  features:                              # Most important features
+    - "Offline mode"
+  priceSensitivity: "medium"             # low/medium/high
+  adoptionLikelihood: "medium"           # low/medium/high/very-high
+  influencers:                           # Optional: who influences the decision?
+    - "Other craftspeople"
+
+# USER STORY
+userStory: |
+  Detailed narrative of a typical workday...
+
+# SCENARIOS (optional)
+scenarios:
+  - title: "Construction site walkthrough"
+    description: "..."
+    outcome: "..."
+
+# QUOTES (optional)
+quotes:
+  - "I spend more time on paperwork than on the construction site"
+
+# MARKETING
+marketing:
+  segment: "secondary"                   # primary/secondary/tertiary
+  channels:                              # Marketing channels
+    - "Chamber of Crafts"
+  messaging:                             # Core messages
+    - "Legal certainty"
+  contentPreferences:                    # Content formats
+    - "Field reports"
+
+# META INFORMATION
+status: "active"                         # draft/active/archived
+visibility: "internal"                   # internal/team/stakeholders
+tags:
+  - "handwerk"
+  - "b2b"
+relatedPersonas:                         # Optional: related personas
+  - "architekt-claudia"
+
+# TIMESTAMPS
+createdAt: 2024-12-01T10:00:00Z
+lastUpdated: 2024-12-01T10:00:00Z
+validatedAt: 2024-11-28T10:00:00Z       # Optional
+
+# OWNERSHIP
+owner: "Marketing Team"
+contributors:                            # Optional
+  - "Sales"
+---
+```
+
+### 2. Content Section (Markdown)
+
+The narrative part in Markdown follows the frontmatter:
+
+```markdown
+## Detailed Persona Description
+
+[Introductory paragraph about the persona and their significance]
+
+### Core Challenges
+
+1. **Challenge 1**: Description
+2. **Challenge 2**: Description
+
+### Memoro Value Proposition
+
+[How does Memoro solve this persona's problems?]
+
+### Communication Approach
+
+[How should we address this persona?]
+```
+
+## Step-by-Step Guide
+
+### 1. Research & Data Collection
+
+Before you create a persona:
+- Analyze existing customer data
+- Conduct interviews with real customers
+- Collect feedback from sales and support
+- Research industry trends
+
+### 2. Create the File
+
+```bash
+# Create a new persona file
+touch src/content/_personas/de/[role]-[name].mdx
+```
+
+### 3. Fill in the Frontmatter
+
+Copy the structure from an existing persona and adapt it:
+- All required fields must be filled in
+- Use realistic, specific details
+- Avoid stereotypes
+
+### 4. Write the User Story
+
+The user story should:
+- Describe a typical day/situation
+- Show concrete pain points
+- Be emotionally relatable
+- Be 3-5 paragraphs long
+
+### 5. Write the Content Section
+
+Add:
+- A summary of the persona
+- 3-5 core challenges
+- Memoro's value proposition
+- Communication recommendations
+
+### 6. Review & Validation
+
+- [ ] Have the persona reviewed by the team
+- [ ] Validate against real customer data
+- [ ] Update the `validatedAt` date
+- [ ] Set the status to `active`
+
+## Best Practices
+
+### DO's ✅
+
+- **Be specific**: "45 years old, master electrician" instead of "middle-aged, tradesperson"
+- **Use real quotes**: From customer interviews
+- **Prioritize pain points**: Most important first
+- **Stay consistent**: Persona details should fit together
+- **Update regularly**: Quarterly review
+
+### DON'Ts ❌
+
+- **No fantasy personas**: Base them on real data
+- **No stereotypes**: Avoid clichés
+- **Not too vague**: "likes technology" → "uses WhatsApp Business daily"
+- **Not too many**: 3-5 core personas are usually enough
+- **Not static**: Personas evolve over time
+
+## Persona Types for Memoro
+
+### Primary Target Group
+- Project managers
+- Team leads
+- Consultants
+- Sales managers
+
+### Secondary Target Group
+- Startup founders
+- Craftspeople
+- Coaches & trainers
+- Freelancers
+
+### Tertiary Target Group
+- HR managers
+- Researchers
+- Journalists
+- Lawyers
+
+## Using the Personas
+
+### In the Product
+- Feature prioritization
+- UX decisions
+- Onboarding flows
+
+### In Marketing
+- Content strategy
+- Campaign planning
+- Messaging & positioning
+
+### In Sales
+- Pitch tailoring
+- Objection handling
+- Use-case demos
+
+## Maintenance & Care
+
+### Monthly
+- Incorporate new insights from customer conversations
+
+### Quarterly
+- Full review of all active personas
+- Validation against current customer data
+- Archiving of outdated personas
+
+### Yearly
+- Strategic review of the persona landscape
+- New personas when entering new markets
+
+## Tools & Resources
+
+### Templates
+- Base template: `src/content/_personas/template.mdx.example`
+- Interview guide: `docs/persona-interview-guide.md`
+
+### Analysis Tools
+- Customer analytics dashboard
+- Support ticket analysis
+- Sales call recordings
+
+### Validation
+- A/B tests with persona-based content
+- Conversion tracking by persona segment
+- Regular customer surveys
+
+## Contact & Support
+
+**Questions about personas?**
+- Marketing team: marketing@memoro.ai
+- Product team: product@memoro.ai
+
+**Want to propose a new persona?**
+- Create an issue in the repository
+- Or contact the marketing team directly
+
+---
+
+*Last updated: December 2024*
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/press/podcast.md b/apps/memoro/apps/landing/context/press/podcast.md
new file mode 100644
index 000000000..c95a87714
--- /dev/null
+++ b/apps/memoro/apps/landing/context/press/podcast.md
@@ -0,0 +1,313 @@
+programmier.bar
+
+Deep Dive 168 –
+Low Code with Till Schneider & Tobias Müller
+13.12.2024
+// Podcast
+// Deep Dive 168
+Shownotes
+Most developers like writing code. But does it really always have to be that way? And is it really sensible in every scenario?
+
+In this episode, Garrelt and Jan talk with Till and Tobias, the minds behind memoro.ai, about their experience with low-code tools for app development. Till and Tobias came up with the idea of minimizing code at a hackathon and then implemented it without conventional development work.
+Proof that modern apps can be shipped to end customers with low-code tools.
+
+We talk about the experiences they gathered along the way: from choosing the right tools, to getting started and the learning curve, to collaboration between developers and designers, we explore every facet of the project.
+
+But we also look at when and how such an approach can hit its limits. We ask whether that is inevitable and how best to deal with it. And we pose the all-important question: can Till and Tobias really recommend this approach, and would they do it the same way again?
+
+
+Transcript:
+
+Jan
+Hello and welcome to a new Deep Dive on programmier.bar. Today with a particularly exciting topic, one that always divides opinion: we're talking about no-code and low-code solutions. And with me in the virtual studio, in the lower right corner of my screen, sits Garrelt. Hi, Garrelt. Hi. Garrelt, how often have you worked with low-code solutions?
+Garrelt
+Oh, we had that in school. We had something you could click together called Scratch. I made a really great game with it and got an A, and my classmates also got As for trashy games, but, well, it's not my thing.
+Jan
+The listeners out there who can't see how young you still look are of course wondering: when were you in school programming Scratch? How long ago was that?
+Garrelt
+That was 2014, so exactly ten years.
+Jan
+2014. In 2014 I already had my university degree, so my computer science classes are correspondingly further back. But there was no low code at my school. Things like that didn't exist back then. We programmed with Turbo Pascal and C somehow. And because I accordingly have very little knowledge of low code and no code, just a lot of opinions about it, we invited people who can support us with actual knowledge. We have Till and Tobias from Memoro here. Hello, Till.
+Till Schneider
+Hi, Jan.
+Jan
+Greetings. Hello, Tobias. Hi. Tobi is fine too, by the way, no problem. Together with a few other people, you built Memoro. Maybe you want to briefly explain what Memoro is and how you came up with the idea, and then we'll get down to business.
+Till Schneider
+Sure, gladly. We started Memoro about a year and a half ago at a hackathon here in Konstanz on Lake Constance. I came in with the idea that I'd like to record my thoughts and conversations. My main problem was that quite a few projects were running in parallel and I couldn't keep up with organizing and structuring everything. So I got into the habit of taking a walk in the evening and just talking everything off my chest. Then I tested it: simply recorded a voice memo on my iPhone, uploaded it, had it transcribed, pasted that into ChatGPT and asked questions about the text. It immediately felt like a little superpower, because you get yourself reflected back really well: you can talk in a fairly unstructured way, get it all back structured, and gain an overview. It was almost like a little therapist, someone who listens, so you could unload a bit. It felt so good that I presented it at the hackathon as just a rough idea. A team of five or six people immediately formed around the idea, and we brainstormed for two days.
+We went on a lot of walks and talked a lot, all of which was of course recorded right away, so we had a great head start into the new startup, because we had captured all those conversations and knew immediately where we wanted to go. Tobi was on board from the very beginning too, and the two of us really got started right away. We're the core team, we began developing, and the others support a bit. After two weeks we had an alpha, handed it to people, and asked how they would like to use the tool. Then a lot of requests came in from various industries and sectors that see great value in it. Today we're at 1,300 users, we already have paying subscribers, and we won a scholarship that finances us for a year and gives us plenty of room to engage with new AI technology in general.
+Jan
+What you only half implied just now is that you of course built Memoro entirely with a low-code/no-code toolchain, right? Yes, exactly. That's what we want to talk about. So my first question, maybe for Tobi: what does a developer do in a no-code/low-code shop? Are you even needed there, question mark?
+Tobias Müller
+Right? It's actually not that far off, because even as a coder you think about how to reach the goal quickly. I also come a bit from the startup world and from hackathons, where everything always has to move fast. You naturally need an understanding of code and of how programs are structured, but it helps enormously to just quickly click together a prototype with a no-code/low-code builder. The two complement each other really well, I think.
+It's not a contradiction at all.
+Garrelt
+Would you say you're really faster with low code, with clicking things together, if you could develop it equally well in code? Purely in terms of speed, if you're good at both.
+Tobias Müller
+You have to differentiate a bit between back then and today. Today we have AI. So today the question might actually be whether you can spin up a project faster with an AI and quickly write some code underneath, especially if it's something a bit more specialized. But in general, if you just want to display something, as they say, you can quickly get a feel for it and implement simple logic with it, I don't know, calculations or generating things and so on. And maybe you do end up writing some custom code too. But it's all really about speed. At a hackathon you don't have much time.
+Till Schneider
+What I could add, and this was really the case: I couldn't have started helping Tobi at all if we hadn't started with low code. I come from a design background; sure, I've fiddled around a bit with Tailwind, HTML, and CSS, but this gave me the ability to iterate on the frontend super quickly. In the end, within about a year, we pushed through over four hundred version numbers, and something changed in every single one. So there was real pressure in the boiler. Tobi focused more on the backend and made sure everything worked cleanly; he wrote the backend in Python, so the backend is not low code/no code. We did run tests with other tools and built small backends the low-code/no-code way.
+But otherwise our collaboration, we couldn't have started at all if we hadn't chosen that stack.
+Jan
+Right. Maybe let's take two and a half steps back. You were already at the four hundred versions you shipped. But before you started: you had talked about the topic a bit, you had a rough concept. In that respect, code development and no-code development are quite alike: first you have to talk about what you're building before you start, that helps. And then at some point you wanted to begin. How did you decide on a specific tool? Every developer knows this discussion: you want to build something, and the most important discussion beforehand is, what's the stack, what's the language, what's the framework? Were there similar discussions in the no-code space, where you think, well, there are countless tools out there? If you just google "no code" or "low code", you get buried in platforms that want to offer you something, from different vendors, based on different technologies. How did you navigate all that to pick one of these tens of thousands, and what were the deciding factors?
+Tobias Müller
+What I can say is that there were definitely points of friction right at the start. As a developer I naturally have my tools that I've already gotten to know, and out of comfort I like to stick with what I know. Till was noticeably more open from the beginning. He likes looking at current reports about what new IDEs there are, which new frameworks, and so on.
+He's noticeably more open there, a bit less biased, I'd say. And we started debating right at the beginning. And somehow we ended up with FlutterFlow. I don't even remember exactly how that came about. Do you remember, Till?
+Till Schneider
+I think we had simply read various reviews. I'm a big YouTube video watcher and pulled in all sorts of comparisons. And among the low-code tools, FlutterFlow sits closest to actual code: you can build in a lot of custom code and extend the tool more and more, whereas the other low-code/no-code tools are, let's say, even more low code/no code. So we went for the most complex option out there. We also looked at where there's a good community: where can you ask questions, where are features being shipped, a bit of what's hot on the market, but also what has existed for a while. FlutterFlow was already a few years old and on the market at the time, so we also paid attention to longevity. It crystallized out fairly quickly.
+Garrelt
+But Tobias, you don't come from the low-code/no-code corner. Your setup before was pure code.
+Tobias Müller
+Right, I myself had worked less with low code/no code so far, sure. At most thinking about which SDKs and frameworks you can use to get to a fast result. But I had barely really used it before. That's why there was this discussion about where we get on board. And the big point was, as Till said, how can he develop alongside? Because if we really start with code, it gets difficult.
+And we then also tested and examined how far we'd be constrained, how much freedom we'd have. And with FlutterFlow there was precisely this option to still insert your own code, which was necessary for us.
+Jan
+That's an interesting point. So if you recognized fairly early that custom logic or UI elements or the like would be necessary: how long did things go well with just the onboard tools in your prototyping and bootstrapping process? And when was the first time you said, okay, now we have to roll up our sleeves and build something of our own on top?
+Tobias Müller
+That actually happened pretty quickly, because a core part of our app is audio recording. FlutterFlow does offer a simple recorder, but a very limited one. A big criterion for us is that you can really put the phone on the table and let it record without thinking about it. The app must not go into sleep mode; it has to be possible to lock the screen. And you have to be able to meet the requirements of each platform. That simple recorder wasn't enough for that, so we started implementing the recorder ourselves very early on.
+Jan
+And what was that experience like? How much do you work with or against the platform when you want to build your own components in there?
+Tobias Müller
+That was at times quite exhausting. You have some painful restrictions, I'd say. You really only have, how should I put it: you can create custom actions, custom functions, custom widgets, but you always move within just one area of the code.
+When you then need to integrate things into the app, maybe things that have to be initialized right when the app starts, it gets complex, and sometimes you have to look for workarounds: how do you get that integrated without those freedoms? Although, one should add, you could certainly modify the code itself, but then it gets even more complicated.
+Garrelt
+Mhm.
+Jan
+Functionality is one thing, but design is another, right? I imagine that when you work in a graphical editor like that and click your elements together, Till, you probably reach the point fairly quickly where it's not quite as pixel-perfect as you'd like, or as it looked in your design draft. Were there also points where you hit the edge cases of the editor, so to speak, and had to come up with something else?
+Till Schneider
+I wouldn't say so. At the beginning I still designed everything in Figma and planned the UI and UX there. Then it was a gradual process: FlutterFlow basically works so fast that I sketched things out in Figma less and less and started trying things out directly in FlutterFlow again. It's a fine line. Sometimes I regretted not having planned things better in Figma; sometimes it was a good decision, so a bit of a middle path. But I never, well, actually at one or two points there were certain areas that just didn't work. For example, I wanted a list at the bottom of the screen, and when the list was tapped it should slide up and fill the whole screen.
+So essentially like a page navigation, but contained within a single page; that didn't work. Things like blur, a blur view, which you know from iOS, where the header gets this blur effect, couldn't be implemented nicely, or FlutterFlow started acting up in preview mode and became slow and sluggish. There were a few points where, if you want to build certain things in the frontend, FlutterFlow can no longer handle it really well and you have to make compromises. Which was okay in itself, I think, because the patterns users are accustomed to are represented quite well in it. And when you hit those limits, you should perhaps rather ask yourself whether it's an unfamiliar pattern that you might build differently anyway.
+Jan
+What is it like in day-to-day work? In principle you're running on two tracks, or on two layers: the lowest layer is the operating system, iOS, Android, whatever you deploy to. That changes too; new versions, new releases come out. And between you and the operating system sits this platform, which has presumably also changed during the time you've used it, with new features, new functions, maybe new widgets provided out of the box. What does that do to the experience?
+Tobias Müller
+It definitely meant that we repeatedly had sudden errors we didn't have before. And after debugging it turned out, okay, dependencies have changed. That was one such example.
+Either FlutterFlow had pulled in updates, or I had added a dependency that was no longer compatible with some other version. That was a problem we stumbled over repeatedly, which was of course a bit frustrating: first just finding out what had actually happened. There was also a change on the Android side at some point where you had to add additional permissions, specifically for this background recording. That too took longer research to figure out: okay, additional declarations have to be made here. So yes, I definitely had to spend a lot of time on debugging and research.
+Jan
+Mhm. And what does such a low-code solution actually bring along in the end? We've talked a lot about graphical editors for the interface and about the component level, where you can build your own. But a real "app", in quotes, often involves much more, especially if it's something commercial: you need user management, you need payments or billing perhaps, you need some kind of storage attached. You've already talked about your recordings; they have to be retrievable from somewhere. You do transcription; that happens somewhere. How much of all that can I do inside my sandbox, what do I have to provide externally myself, and how do I get it all connected in the end?
+Tobias Müller
+Generally on the low-code/no-code topic: that's exactly the big advantage and a reason you might choose it, because most low-code/no-code tools already have such integrations built in. You don't have to think about how to do the setup, what to watch out for, and so on.
+And that was particularly pleasant with FlutterFlow specifically, because Firebase, for example, is already fully integrated, with authentication and Firestore as the database. And it also supports other options, such as Supabase. Sure, it's a certain restriction; not everything is in there, but great for getting off the ground: no bigger setup, you immediately have what you need.
+Till Schneider
+And going further: how do I actually get things into the app stores? How do I get it onto TestFlight or into the Play Store beta track? That's all already built in quite well. Normally it's quite a lot of manual setup until you've properly configured your Apple developer profile and your Android developer profile and filled out all the forms before you can push anything at all. But that's essentially all built in, and then there's the RevenueCat integration, for example, which handles the payments and manages the App Store and Play Store payments in one central place. Tobi then had to follow up a bit on the database side, I think, so that we keep the data clean and can bill correctly. But it really worked out: we got the app, including in-app payments and a subscription model, built entirely in FlutterFlow with a lot of custom code, to the point where it could ship. And that's still the version that's live in the App Store today.
+Garrelt
+That already sounds like quite a lot of functionality. And when I looked at it on the website, FlutterFlow also seemed very powerful to me, almost overloaded. What's your experience? How quickly did you get into it? Was a lot of it intuitive for you?
+Did you first have to learn the software before you could do anything at all, or do you actually find everything you need quite quickly?
+Tobias Müller
+If we're talking about these integrations, how to use them, also UI elements, how to place them, simple click handlers and so on, I think it's very easy. But as soon as it's about a bit of logic, the typical ifs and loops, I'm talking more about the algorithmic level here, it's maybe not that intuitive anymore. There it definitely helps to watch a few tutorials and spend some time with it, especially if you don't have that much coding experience.
+Till Schneider
+On the frontend side, too, I find there's a certain learning curve. It's really a tool that leans toward "you need to be an expert". It helps if you understand an atomic design system and know how to set up components properly, simply to keep things consistent; otherwise it quickly gets out of hand. I also had to keep refactoring the frontend and think about, okay, how do I get this a bit cleaner, so that I don't have a separate header component on every page but one central one driven by variables. It becomes quite complex quite quickly, so you need a certain basic understanding. And among the no-code tools, I'd say it has a relatively steep learning curve.
+Jan
+And how did you help yourselves there? How did you master that learning curve?
+Till Schneider
+In my case it's really watching YouTube tutorials. FlutterFlow has a great channel of its own with good, even long tutorials where they build an entire application.
+Or they have these webinars where they get together and talk a bit about various topics. But it really was an extreme search: there are these two- or three-hour webinars, and I have one error, and the error is discussed at minute fifty-three, where they explain that it's actually a bug in their software right now and you have to restructure things this way. So it was often a bit like searching for the needle in the haystack.
+Tobias Müller
+You can also tell there is a community, probably bigger than with other tools, but it is still building up. Especially when it comes to more specialized things, it's a bit of a search, and also a bit of luck whether someone has actually ever asked that question before.
+Till Schneider
+And what I want to add is that this additional abstraction layer that FlutterFlow puts on top of Flutter also leads to problems. You never know exactly, and even the FlutterFlow community doesn't know: is this a Flutter error, or does it arise from the FlutterFlow abstraction on top? Is the bug there? And that causes even more friction.
+Garrelt
+But the community you were in contact with, was that also the Flutter community, or are the FlutterFlow and Flutter communities strictly separate?
+Till Schneider
+Well, FlutterFlow has its own community, at community.flutterflow.com I think, or whatever the URL is; there's a dedicated community that's quite strong. There are a few really great members who respond immediately, who respond quickly. And I mostly moved in there, for the frontend at least. Tobi, I think, had to switch down to the Flutter level more often to understand or debug certain things.
+Garrelt
+I've been wondering the whole time: Till, you say you come more from a design background, and Tobias, you're more from a dev background. How did you divide up the tasks? Did that work well? Another question in my head is how version control worked, because with tools like this that can sometimes be a bit more complicated, I think. And in my head, a tool like this should be really strong for design and development collaboration. So what are your experiences — what worked well, what maybe not so well?
+Tobias Müller
+FlutterFlow does offer git-like branching and version control, but it didn't really help us. Usually the process was: Till implemented something on the UI side, some elements, and after him I went in and implemented the logic on top. So it was fairly separate work, but since we had several construction sites open, we did split up. And that was also a big problem we had with FlutterFlow — a bug, you could say. As soon as you worked on the same page in parallel, which in theory you could, it often led to changes simply disappearing. Till did something, I added something, then I go into test mode and suddenly it's not there anymore. That was really frustrating. So the consequence was simply: okay, we have to do it in stages.
+Garrelt
+So almost more coordination needed than usual.
+Tobias Müller
+Exactly, definitely.
+Garrelt
+Yeah. And Till, you could work without much development — no, actually I don't even know: how much development experience did you have before?
Or did you really come in from a pure design background and were still able to develop things?
+Till Schneider
+Well, I had once written a design library with Tailwind, HTML, CSS. So the basic understanding was there — about the box model, or how things generally get translated into code. But it actually was quite a steep learning curve again, watching lots and lots of tutorials so you know how FlutterFlow behaves. I would probably have been faster if I could have just written it in code, but that wasn't possible there, because other things wouldn't have worked then. And the boundaries were actually quite fluid. Through this system where you insert action blocks that trigger things — haptic feedback, for example — that was two clicks for me: create a new node, say "haptic feedback here when the button is pressed", and I could pull that off super easily. On the code level that would have been much more complicated, or rather I would probably have made more mistakes. So I could work my way more towards Tobi, so to speak, in terms of functionality. And step by step I also felt more confident about things like how to do a Firebase database query and so on, because FlutterFlow just makes that relatively understandable for the user.
+Jan
+That would have been my next question. Even once you feel comfortable in the editor after various tutorials, sooner or later you still have to integrate other services — whether your own Firebase, or maybe you had other services connected too, I don't know.
And how does a tool like that manage the translation at that point? You make a database query and some JSON object comes back at the end of the day. But how do you then say: okay, from this JSON memo object, show the title here, the content there, and maybe this over here? What does that connection look like?
+Tobias Müller
+I think that's where the limits quickly showed — but also what's still possible despite them. Simple updates, database queries and so on, Till handled just fine; I don't think you really need coding knowledge for that. But when it comes to actually extracting data, modifying multiple records, or doing calculations in the UI and so on, I had to step in. That's possible in FlutterFlow too, but it really does take some understanding.
+Jan
+And how did you build your backend? Because I assume you don't do that in FlutterFlow.
+Garrelt
+Right.
+Tobias Müller
+So far we've really only talked about the Firebase integration, for example. Now comes the point of our processing at Memoro — our processes, how we prepare and output everything — and that all lives in a backend. And there, too, FlutterFlow actually offers a good central place where you add your backend routes, define your REST calls, and can then fire them as actions. But there are limitations there as well. We were lucky that our backend routes aren't extremely complex, but we were still restricted to a single backend. We couldn't do things like on-prem with it.
+Till Schneider
+What I'd perhaps add at this point, regarding the database queries: it was also a bit of a black box. We ended up running into a memory leak and had to debug forever to find out what was causing it. We half figured it out, but then again not really. It was somehow the timer — we had to rebuild it as a custom one so things worked better. You press "start recording" and a timer counts up, and the built-in FlutterFlow timer apparently caused this memory leak. But that wasn't the main problem; there was a deeper underlying problem, and we absolutely could not get to the bottom of it. And that drove me pretty crazy. I'm basically the heaviest Memoro user — I dump everything into it, I record my whole life with it. I think that's really essential metadata; at some point you can personalize a nice AI with it. And I had the problem that the interface became incredibly slow. Over the course of developing Memoro I've switched my account four or five times, I think, because the thing was so full it no longer ran performantly. And we knew there was actually no solution for it. We went around in circles in the community forever trying to figure it out, and then came to the realization: hey, if we look at how many memos the average user has, they won't even notice the problem. So do we drive ourselves crazy over this, or do we push the problem ahead of us? And at a certain point it was clear to us: okay, we should switch the stack, we need control over the whole stack. And then we simply postponed this issue.
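The custom timer workaround Till describes — replacing a leaky built-in timer with one you fully control — might look roughly like this in the TypeScript/React Native stack they later moved to (a hypothetical sketch, not Memoro's actual code; the names `RecordingTimer` and `formatElapsed` are made up here). The point is that a single interval handle with explicit cleanup never lets stale callbacks pile up:

```typescript
// Minimal hand-rolled recording timer (illustrative only, not Memoro's code).
// One interval handle, always cleared on stop, so nothing leaks.
class RecordingTimer {
  private startedAt: number | null = null;
  private handle: ReturnType<typeof setInterval> | null = null;

  start(onTick: (elapsedMs: number) => void, intervalMs = 250): void {
    this.stop(); // never allow two intervals at once
    this.startedAt = Date.now();
    this.handle = setInterval(() => {
      if (this.startedAt !== null) onTick(Date.now() - this.startedAt);
    }, intervalMs);
  }

  // Returns elapsed milliseconds and releases the interval handle —
  // releasing the handle is what prevents the leak.
  stop(): number {
    if (this.handle !== null) {
      clearInterval(this.handle);
      this.handle = null;
    }
    const elapsed = this.startedAt !== null ? Date.now() - this.startedAt : 0;
    this.startedAt = null;
    return elapsed;
  }
}

// Format elapsed milliseconds as mm:ss for the recording UI.
function formatElapsed(ms: number): string {
  const totalSeconds = Math.floor(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${String(minutes).padStart(2, "0")}:${String(seconds).padStart(2, "0")}`;
}
```

In a React component you would call `stop()` in the effect cleanup on unmount, so the interval handle never outlives the recording screen.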
But it gave us a lot of stomach aches.
+Jan
+You're touching on an important point there: this question of when you outgrow a tool like that and when you have to fundamentally rethink the stack. Was it clear to you from the start — okay, we're starting with this no-code thing simply to be fast, but at some point we'll have to build it ourselves if all goes well? Or was the assumption: well, we'll build this with FlutterFlow or some other tool, and it can carry us to the first ten million users?
+Tobias Müller
+I think we did start with the attitude that we'd be able to use it without limits — though of course with the uncertainty of which requirements we'd still run into. But we also quickly noticed we really had a problem with the time factor. At the beginning, the speed at which we could move boosted us enormously. We could iterate quickly; as Till said, with our four hundred versions we could respond directly to feedback and test things. That gave us a lot of speed. But we then noticed that the more complex the project got, the more problems we had with time — build times that sometimes ran up to ten minutes just for a UI change. And then there's hot reload; even there the times are around, I don't know, half a minute. And then something's off and you have to reload again. You want to test it on the device, and then you have to wait again. FlutterFlow unfortunately just costs a lot of time there. And at some point you have to ask yourself: at what point are you spending too much development time on these waiting times?
And when would it make more sense to switch to your own stack?
+Till Schneider
+We had another problem, too. We built the app to be very multilingual right away — twenty-four languages supported from the start. The idea is that foreign skilled workers can document their work in their native language, by voice. We created a kind of speech-to-form system in the app, since we were getting requests from various industries. And there was simply a bug in FlutterFlow. I had a Google Translate button, which at first was great: I could enter the English string at the top and it translated it into the other twenty-three languages. The first problem was that Google Translate was given no context — it didn't know this was a UI translation. So I often had to hunt for different English words so it would better understand that this was about an app. That was already a pain. And then there was simply a bug where it kept deleting translations. We even had shipped versions where a page we hadn't touched at all had all its translations wiped out again. We filed tickets for that in the FlutterFlow community and so on and so forth — but nothing came of it. I think there's no fix for it to this day. And we increasingly had exactly this feeling of helplessness: if we had the stack under our control, we could fix this, but right now we can't. Those were the two or three construction sites where you get more and more stomach aches and just lose time.
Because suddenly I have to check every page again before each shipment and look: hey, are all the translations in there? So that's exactly the point Tobi means about time: at the beginning it went incredibly fast, and then it went incredibly fast incredibly slowly.
+Tobias Müller
+Yes, and we realized fairly early, or at a certain point, that we'd have to plan the exit. We aimed for a stable version that we could leave alone, and from there on say: okay, now we make a cut and switch to our own stack. We started planning that early.
+Garrelt
+Yeah. But—
+Jan
+That presumably means a complete rewrite, right? Because a tool like FlutterFlow probably doesn't lend itself to some hybrid approach where you replace it bit by bit, view by view or action by action or something. Instead, as you said, you get to a stable version, leave it alone, and meanwhile basically build the whole app again from scratch. Did FlutterFlow support you in any way? Could you export the project you had and then keep working on it in code, or did you start completely on a green field? How does that work? Or have you even started yet, or are you just planning it right now? Maybe start at the very beginning there.
+Tobias Müller
+Well, we're already rebuilding.
+Till Schneider
+Mhm.
+Tobias Müller
+Mhm. And I won't say it's technically impossible. It's entirely technically possible to take the code base. As soon as you have a subscription, you have the right to take the code base and modify the code yourself. But then, I'd say, you really get into hot water.
I'd advise against it, because if you then still change things through the FlutterFlow interface on top, the merge conflicts start. That just doesn't— no.
+Jan
+Then I think I misunderstood. I didn't mean working on the code base in parallel, but taking it — exporting the code base once, switching FlutterFlow off, so to speak, and then simply continuing to work on the exported code base. So that you can get around the platform lock-in a bit.
+Tobias Müller
+Ah, okay. Well, basically the same answer: you have the code, you can use it, you could also export parts of it. But because FlutterFlow generates the code itself and uses its own SDK inside it, I don't think it's practical. I'd really advise against it — better to take the logic behind it and—
+Garrelt
+rewrite your own app from that logic. That works—
+Tobias Müller
+really well with AI today, too. You write your logic down and say: generate the code from this for me. Much more effective than really trying to extract the code out of there.
+Jan
+But that's actually a super important learning, right? Because for a lot of people the question is: do I have platform lock-in here, or can I export my app in the end and take it with me?
+Garrelt
+Yeah.
+Jan
+It gives you a felt sense of security when you start with it. And what you basically just said, reading between the lines, is that this feature is essentially worth nothing. It gives you that felt security, but beyond salvaging what you have, you don't really get much use out of it.
+Tobias Müller
+I can't imagine a scenario where the benefit would really be greater than the problems you'd have to fight through.
+Till Schneider
+I'd file that under marketing, honestly, because if a Flutter dev ever looks at the code you can export from FlutterFlow, they'll say: no, no, we have to rewrite this completely anyway. That comes from this low-code/no-code approach — it all gets somehow cobbled together. I find a parallel with Supabase quite interesting here, because we've been looking into that in the context of also moving out of the Firebase lock-in. Supabase advertises in a very similar way: you can just get started easily, and they're open source and so on — you can host it on-prem, self-host it, whatever. But that's not quite the case after all, because all the tooling that makes Supabase so easy when you use it through their services is then no longer available. And with FlutterFlow, I'd say, it's even more extreme. For the frontend it was incredibly important to us, because we could iterate a lot. Rebuilding a frontend is comparatively easy; finding it is the hard part — where does everything go? That's why we have really nice screenshots of those four hundred version numbers and can see exactly how it changed over time. We found a good line, and rebuilding that line now goes comparatively quickly. But taking the code base with us — no. And we then did some research and actually decided against Flutter at that point and bet on React Native.
+Jan
+That's another interesting point, right? I mean, you've basically already built the whole app in Flutter — or at least the part of it you built custom, which was presumably built in Flutter inside FlutterFlow. Yeah.
So why did you move away from that, when that way you can recycle even less, so to speak?
+Till Schneider
+Yeah. Well, I researched that for a long time too, really dug into it, and I think it's relatively even. In Europe there's even the bigger Flutter community; I know a lot of Flutter devs who have built production apps in Flutter. And I think in terms of community size, number of packages and support, there's little between them. I just found React Native clearly more interesting, because only about half a year ago, I think, they also managed to get a performant web app out of one central code base. Flutter, since it comes from the canvas model and not from the web development side, will probably never manage to make the web side truly performant — really having one code base and shipping to all platforms with it. React Native is, in my view, a good step ahead there. Plus, Tobi and I have more experience on the web development side, so that language, let's say, comes easier to us; it feels more familiar. Plus, of course, Microsoft — Microsoft uses it more and more. And just two or three weeks ago a huge update came out: they shipped a new architecture, because Meta uses React in the Quest, in their VR headsets, for the interface there. That means they had to crank up performance enormously, since dropped frames in VR naturally lead to motion sickness, which is a complete no-go. So there's simply massive backing behind it. And above all — that was actually the decisive point — you can serve all platforms performantly.
+Jan
+And how realistic did the plan turn out to be — saying you have a stable version in FlutterFlow that you no longer need to touch, and in parallel you work on the new one? I mean, that's a situation I think all of us out there have been in: hey, we're not touching this old product anymore, we're just building the new one on the side. And then someone comes around the corner anyway and says: could you change one more small thing? And there's still a bug that needs fixing, because the new thing will take so long to arrive. And then the goal you're working towards turns out to be a much more moving target than you thought, let's say.
+Tobias Müller
+Yes, I think you just have to swallow those things a bit and focus on the fact that you're working on it right now. Because then you hear: hey, this and that doesn't work. To the people who are in the know, you say: yes, we know, the new version is coming soon. To externals you say: oh, thank you very much, we'll take that to heart, we're working on it right away. And yes, we are working on it right away — just on a completely new app.
+Till Schneider
+It really was something that we managed to get the app to the state it's in now. We really haven't touched it for a few months. And the last step for us was monetization — getting it monetized so people can take out a subscription. That last version number was really touch and go, because it felt like more fell apart with every new version than started working again. It was really, I'd say, luck that we got that shipped and that it's simply in a good state.
Because, as I said, FlutterFlow would suddenly decide: here, we're deleting your translations, we're breaking some backgrounds or whatever. In the end it really was a fight against the framework, so to speak.
+Tobias Müller
+Yes, it felt like a bit of luck that we planned the exit at exactly the right moment, because it was getting really bad now.
+Till Schneider
+Yeah, yeah. Close call. Okay.
+Jan
+That naturally pushes the interesting question a bit: if you were building Memoro again now, or in a few years doing another startup or whatever, another product in the same space, would you do it exactly the same way again?
+Garrelt
+You talked about that at the beginning.
+Tobias Müller
+Yes, we have somewhat different opinions there. In general I still think it's valid. For someone who has no coding experience and wants to make a simple app of their own — I don't know, you make jewelry and absolutely want to offer it in an app — hey, I think you'll be happy with FlutterFlow then. You'll manage that just fine. But something like Memoro, today, I wouldn't start with FlutterFlow anymore. And with that, I think I can hand over to Till for his opinion.
+Till Schneider
+I find that an incredibly interesting question, because I think about it a lot — what we've actually been doing for the last year and a half and how we've been working. And above all because we're such a small team and don't really have the budget to grow big in that sense. I also believe a lot in organic growth; I think relatively little of super-fast VC money, raising lots of capital and all that. I actually find all of that a bit counterproductive. We've focused extremely on: how do we get better? How can we build even faster?
So, Tobi and I — how do we work together better and how do we enhance ourselves even more with AI? My personal opinion is that I wouldn't start with FlutterFlow anymore, because the learning curve is so steep that today, in my view, if you take a Cursor or a Windsurf — these IDEs that are really souped up with AI — you actually get off the ground faster. Over the last three or four months we've created, I think, five, six, seven new applications that are all in a good alpha phase, including internal ones. And our website, for example, now with Next.js and so on, I rewrote back then completely with Cursor, completely with AI. It reads markdown files, builds itself up from markdown files and so on and so forth; you can fill it quite easily. We've experimented a lot with different things, and I think it no longer makes sense to take on this dependency and the problems these dependencies bring with them. You should concentrate more and more on having the stack under your own control, and generally use fewer dependencies, fewer small packages for little helper functions and so on — instead, let the AI write them. And slowly the tools — I mean, time is working for us here, and the tooling keeps getting better. I have the feeling AI hasn't really arrived in the end-user application yet; a lot is happening in the tooling, in building these tools. But it's a completely new medium, and as with every change of medium — the first television shows that aired were filmed radio shows, because nobody knew yet what the new medium was supposed to be. So you carry the old things along, from UI design too, and so on. We're actually lagging behind there; there's enormous potential for innovation.
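The markdown-driven website Till mentions — pages that build themselves up from markdown files — can be sketched like this (hypothetical TypeScript, assuming a minimal `title:` front-matter convention; this is not their actual Next.js code, just the general pattern):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical shape of a content page; field names are illustrative.
interface Page {
  slug: string;
  title: string;
  body: string;
}

// Parse a markdown file with a minimal `key: value` front-matter block
// delimited by `---` lines (a simplified stand-in for a library like gray-matter).
function parseMarkdown(slug: string, raw: string): Page {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  let title = slug; // fall back to the file name if no title is declared
  let body = raw;
  if (match) {
    body = match[2];
    for (const line of match[1].split("\n")) {
      const [key, ...rest] = line.split(":");
      if (key.trim() === "title") title = rest.join(":").trim();
    }
  }
  return { slug, title, body: body.trim() };
}

// Load every .md file in a content directory into page objects,
// e.g. for feeding into a static-site build step.
function loadPages(dir: string): Page[] {
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) =>
      parseMarkdown(path.basename(f, ".md"), fs.readFileSync(path.join(dir, f), "utf8"))
    );
}
```

In a Next.js setup, something like `loadPages("content")` would typically feed static generation; in practice a library such as gray-matter replaces the hand-rolled front-matter parsing.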
In my view, AI actually has a big UX/UI problem — it's the empty URL bar again, like the beginnings of the World Wide Web. And we simply have an incredible urge to build. Tobi gets some idea for a little helper tool he wants to build, and he just builds it. Back then we still put that together in Buildship, and we actually built the first prototypes in FlutterFlow — and now we just do it in code. And in my view it's faster, but we also know it's more long-term. We have the stack under control, and we can connect this tool with other tools. And now enterprise customers come along and want it on-prem, or our Swiss customers want it hosted in Switzerland, the Germans in Germany — and then that's possible. So we're much, much more flexible, and we're building on a good foundation right away. And I don't think it's slower anymore.
+Jan
+But the point you made about time playing into your hands — that also applies to tools like FlutterFlow, and probably to comparable tools as well. I just briefly opened it on the side earlier, and there are AI features now too, where you quickly sketch something and it generates your widget from it, or—
+Till Schneider
+Not good. Unfortunately it's really just not good.
+Jan
+But as you said, this is the worst it will ever be, right? So there, too, the question is — since you brought up this UI topic as well — I completely understand your trade-off right now. I just wonder whether it will still be exactly like that in five years.
Because what I always see when people generate code or UIs with AI: it's super fast as long as human language works as the interface. If you write: hey, I need a login screen, and I want a username field here and a password field and a login button and a register button and, I don't know, some other button — then it gets created and everything's cool. And then come those small details, right? I need my logo in here, and ah, no, I need it differently, it's not sitting quite right yet — where in a graphical editor, once the foundation has been generated, you'd actually much rather go in with the mouse and move or drag things a bit, adjust them once more. And as long as I always have to write a new prompt — ah, do it a bit differently, push it more over there, try it this way and that — iterating is still pretty tedious right now, I think. What you actually want, I think, is a toolchain where the initial state is AI-generated, but I still have a graphical interface to do the last 0.1 percent of fine-tuning. I don't want to have to go into the code just to adjust a margin.
+Garrelt
+You know—
+Jan
+—what I mean?
+Garrelt
+I know what you mean, but—
+Till Schneider
+The problem with the current no-code tools is precisely that they introduce this additional abstraction layer and don't let you at the whole stack. You don't have complete control over your tools. So I think it simply needs a new generation of tools that — yes, maybe — are basically built fully on AI and just leave out this abstraction layer.
They could then show it to me visually and I could move my button around — although that's also a bit of a pipe dream, the idea that you just adjust the margin, because how does that then behave responsiveness-wise? So it's all not that simple; a visual editor brings its own problems on the other side. But I think you need control over the stack, and this FlutterFlow layer brings performance issues with it, brings problems with it, and so on and so forth. So at this point — it would be nice if we had that, but then VS Code would have to add some way to visually push things around a bit. I rather see the code editors moving in that direction than these low-code/no-code tools becoming more like code editors. Mhm. Mhm.
+Tobias Müller
+I think that's also a bit of our differing view, because I find there are still valid arguments for the low-code/no-code editors. Yes, I think quite a bit will still change. But as I said: if you have really simple requirements, or no idea about design at all, or just want to quickly click together a prototype, I find it still has its right to exist — personally.
+Jan
+I have to say, in my previous job we also used no-code tools to simply put together internal things. And I'd say, for something like the internal dashboard for event planning — where design is completely secondary and it's really more of a glorified front end for simple update and delete operations and a bit of reporting or so — for that, I don't think it can be beaten even in the very long run.
Yeah, because it's still far more powerful than an Excel spreadsheet, and much cheaper to build than some custom software solution. I think that may be the niche they can occupy in the medium and long term. But you're probably right, Tobias — there are also plenty of other use cases where it then maybe isn't worth it.
+Till Schneider
+I see it exactly the same way. But as soon as it gets customer-facing, I think, and you want to go into a certain scale and simply offer the best user experience — which you don't have to for internal tools; that's not what they're about, everyone there somehow knows what they're supposed to do to plan the summer party, and then it's great. But yes, maybe that's where the wheat separates from the chaff. When Tobi and I at some point — quite late in the process — looked at what the really great FlutterFlow apps are, the ones that are live, we were pretty sobered, because what they showcase as the successful FlutterFlow apps — they're not actually that good, and not that successful, and so on and so forth. So I also think it's a question of who it's meant for in the end.
+Garrelt
+I think you've already answered this, but that means: at the beginning you got in with "it's cool for prototyping and when it needs to go fast". But if I understand correctly, if you were at a hackathon now, you'd rather go with Cursor and build it that way. Did I understand that right?
+Tobias Müller
+I think that's actually the point where we start to debate — where, ironically, Till, who doesn't—
+Jan
+And then the podcast is over after two days of debate.
+Garrelt
+That would be—
+Jan
+fatal, really.
+Till Schneider
+Exactly.
+Tobias Müller
+I think that's a bit of the irony.
Till, of all people, who doesn't write code, would then maybe rather lean toward Cursor or Windsurf, and I, who can actually code, maybe rather toward FlutterFlow. No idea, but yes.
+Till Schneider
+I think for people who... well, I personally would do it so that I'd start with Cursor or Windsurf, but to other people I might still recommend a low-code/no-code tool. At that point maybe not FlutterFlow, because it's simply more complex. There are other low-code/no-code tools that come along much simpler and are better suited for a hackathon. But yes, for us personally it no longer makes sense to go into that abstraction, because we know our way around and are faster when we code it ourselves. But for someone without experience it has its... Tobi, I
+Jan
+will put a thesis out there, because I think I'd side with you too and also lean toward such a graphical interface editor, rather than a code editor, for that kind of setting. Because with myself I always see the risk that if I do it in some HTML, and regardless of whether it's auto-generated, or, I don't know, you take a finished template off the web or something, you just get lost really quickly in these little things: oh, here, I could abstract this away, and here I could somehow make this a bit more elegant than this template or this generated code. And that is of course a trap you just don't fall into in a no-code environment. There it's: okay, I click my interface together now, and maybe the button isn't perfect yet, but for me, and I mean me, Jan, who has basically zero ambition in graphic design because I have two left hands for that kind of thing, it's simply good enough.
Then I'm just much, much faster clicking it together, so to speak, instead of opening an IDE and noticing after two hours that this one CSS class still won't behave the way I want, and I've just wasted a lot of time again. Ah... yes, I, I, I think
+Tobias Müller
+that's a valid argument, yes. That's where the urge for perfectionism kicks in a bit. You just don't see that in such an editor.
+Till Schneider
+That's why this is such an interesting discussion, because I also think that this is only my opinion because I simply have no experience in coding. Well, yes, as far as understanding goes. Yes, and that is a really interesting way of approaching it. I'm, I'm firmly convinced I can develop things differently. I think about them differently, because I don't care what the code looks like. And the switch from Cursor to Windsurf just now is maybe already foreshadowing my pick, my pick of the day. I find it quite interesting, because I don't even read the code anymore. Really, I'm not interested in what's in the code. I don't read it. I press a button, and then I press the button ten more times. And if an error comes out,
+Jan
+then... The listeners out there can't see right now how high Garrelt has raised his eyebrows. They're basically at the back of his head.
+Till Schneider
+I know, I know. But it just depends on what you build. And that's exactly why we build a lot of new tools, so that we can keep them very stripped down in terms of feature scope. And I don't care about code quality. I care about having a finished product that I can ship to people. And whatever garbage is underneath, well, code is never perfect anyway, especially in a bigger codebase, otherwise it eventually gets out of hand. So I embrace the chaos from the start.
I don't know yet how... Oh, okay. Where that's supposed to end up? Difficult. I
+Jan
+I can tell a quick anecdote about that from... yes, from a consulting project, from people who once commissioned an interface, a frontend, somewhere offshore. Mhm. And we got it back in some JavaScript framework and so on. And then a few more changes had to be made, because it didn't quite match the design in all its details. And then a few weeks passed and there was a new delivery of the interface. And when you looked into the code, you noticed that not one stone had been left standing on another. Yes. And when they were asked, why did you change... I mean, why has so much changed? Well, because we just rebuilt it completely. Before going to the trouble of explaining to these people, this is the current state, this is what has to change, you just give the same specs, yes, to fifty other offshore workers again. And that's just iterating completely from scratch once more.
+Till Schneider
+No tech debt, no legacy code, like...
+Jan
+Yes, exactly, simply starting completely from zero.
+Garrelt
+That's
+Jan
+just much easier for them when labor basically costs nothing. And in principle it's exactly the same with AI, right? When labor costs nothing, every iteration is free, so to speak. And then you can afford that too. But my closing question to Garrelt would actually have been, after this discussion here now: Garrelt, if you had to start at a hackathon, what would you choose?
+Garrelt
+Yes, well, I think my takeaway is: it depends on the team, because I do have the feeling, for example, that it's very much a question of target audience, this topic of whether you use this tool or not.
And my feeling was actually, Till, and I also wanted to ask you this, that for you it actually felt like a cool tool, because you can step from design really well into development. And that route is, in my head, a nicer one and a good one, because you already see so much and can get in at certain points, technically, if you want to, but can also leave that aside at first or have others do it. And so my decision would probably be: do I only have developers in my hackathon team? Then probably not. But do I also have designers who feel like not just designing but building it directly, which then also takes work off the developers' plates and enables better collaboration? Then I definitely find it totally sensible. And yes, probably, and my second learning would then be: did we win, is it a cool app, do we want to turn it into a product? Then move away from no-code immediately. That would, I think, be my, my learning.
+Jan
+That is, I think, a very nice conclusion. You could let that stand as the final sentence. But it's not quite the final sentence yet, because we still have our picks of the day. And Garrelt, since you just drew such a nice conclusion, you may also start with yours. Very gladly.
+Garrelt
+I've brought an article that a colleague recently shared with us, by Evan Smith. And it's basically a very extensive article, there's also a talk on it, so you can also watch that if you like, about how he lives kindness as a developer, so to speak. So how can you, well, kindness, friendliness... no, how do you translate that into German? Yes, yes, right. Friendliness. How can you live that out as a developer at your workplace, foster it, and get other people on board with it too? He looks at it from different perspectives, and I think it's very nicely done. It's a very nice piece.
+Jan
+Nice.
Wonderful. A thousand thanks. Then, Tobi, what have you got in your luggage?
+Tobias Müller
+I've got a small open-source tool called LocalSend. I stumbled upon it, and it helps me every now and then, because through the cross-platform development I had to switch to a Mac, but I actually come more from the Android and Linux side and so on. So I still have an Android phone, and with that I just can't use AirDrop. And LocalSend is an open-source tool that lets you quickly send files or even text, encrypted, over the local network on really all platforms. So if I'm, say, just building an APK: open LocalSend, send it to the phone. I also have an iPhone lying here, I can quickly send something over that way too. So super practical, and above all the interface is super minimalist and intuitive.
+Jan
+If that's open source, how do I get it onto my iPhone? Is it also in the store, or do I have to build and compile it myself, or how, how do I get at it?
+Tobias Müller
+I assume it's in the store. I can't tell you right now. Yes, actually, I just checked.
+Garrelt
+Either in the store, or via a package manager; there are different links for the various platforms.
+Till Schneider
+Okay. Cool.
+Jan
+Wonderful. Till, what have you got in your luggage?
+Till Schneider
+Right, I've brought Windsurf. As a non-coder, I switched from Cursor to Windsurf a while ago; well, I still use both tools in parallel, but now primarily Windsurf. It's a new IDE that does a lot more on its own than Cursor: it runs its own thinking loops, suggests the next steps itself, and it's dangerous. So I would really test it in a new project, because it's like... it's like a broad brush.
It's like throwing paint onto the canvas, whereas with Cursor you can still go at problems in a more targeted way; it always starts by immediately taking the whole codebase into view. For one thing, you no longer have to reference any files; it picks them out itself. In terms of my developer experience, if you can call it that, it's strange, because you press the button and then it runs for three minutes, and you don't know what to do during those three minutes. With Cursor I still got nicely into a tunnel and could think about, while it's making this one change, okay, where could we continue next? So it requires a bit of a rethink. But I find it extremely exciting, because it's, I think, simply the next step again in the direction of: AI can also move well in larger codebases, can find problems for you that are hidden away somewhere, and it has now allowed me to build things even faster, actually, and also to reference other things even better. So I found it a great tool, and it hints a bit, I think, at the future, at where things are headed.
+Jan
+With all these AI tools I always have two questions. A: what's the pricing? And is there a way to have it running, at least partially, locally on my machine?
+Till Schneider
+I actually don't know with Cursor whether you can plug in your own API keys. I know that it costs twenty euros a month if you have the Pro tier. But I've constantly had to buy more on top of that, so I'm paying more like eighty euros a month for it. And with... yes, but exactly, so I have no pain with that at all, because otherwise I couldn't work at all. So for me it's basically essentially necessary...
+Jan
+No, no, no, all good.
+Garrelt
+I, I
+Jan
+only stumbled over it because we had, I recently had a conversation with Fabi when we had a guest appearance on Engineering Kiosk, small shoutout, because there we also talked about Cursor. Well, Fabi pitched Cursor. And that was exactly my question, this pricing model, the way I see it, right: zero, twenty, and a team plan without a price, so to speak, which is what we have on the website there. And then it said something like, you get, I don't know, two hundred autocompletions a month, or whatever it is you get for twenty euros. And for me as a developer, as an end customer, that felt completely opaque, because if you had to ask me, how many autocompletions do you even need? Well... no idea. No idea. Maybe fifty, maybe five thousand? No idea.
+Till Schneider
+Yes. Well, accordingly I need about eight hundred a month or so, roughly. And then I pay my eighty for that.
+Jan
+Yes, you're already smarter about it than I am.
+Till Schneider
+And, well, Windsurf costs less. Windsurf costs a tenner a month, but again, I'm on, what is it, a two-week free trial. Now for Thanksgiving they've gifted me another month, I think, or two months of free trial, the way it goes. They, they hook you, but they hooked me long ago, so at this point they could actually already have asked me to pay. But well, I'm glad about it. Right, so it's a bit cheaper. I'm not sure whether you can plug in your own API keys. Right, but I found it a very, very nice tool. Same story otherwise: a fork of VS Code. Exactly.
+Jan
+But even with an API key it would still have been running at someone else's place. I'm like, you know, I'm still looking for an editor where I also have local models on board somehow. Yes.
And it feels like that
+Till Schneider
+doesn't exist yet. Not yet.
+Jan
+Really.
+Garrelt
+Yes. Cool.
+Tobias Müller
+But a tip there: if you use Cursor, you can, for example, host your own GPT instance. That is then also GDPR-compliant. That at least goes a bit in the direction of data protection and security. And then you can use the API key for that.
+Jan
+Actually, as silly as it sounds now, data protection isn't my main concern. My main concern is more: I'm sitting on a train or on a plane or whatever, and you just have no network, you know? Yes. Which is the perfect segue to my pick, because my pick is AirFly. I don't know who of you knows AirFly; it's a small hardware thing. Anyone who sits on planes more often, the way I've done a lot lately, is perhaps sometimes annoyed that when they want to watch some onboard movie, they have to use those weird headsets from whatever airline. And the cool headphones you brought along yourself
+Till Schneider
+usually don't work, because those are generally all Bluetooth things that we have.
+Jan
+Mhm.
+Tobias Müller
+Mhm.
+Jan
+And AirFly is, I've got one right here, a small dongle that you can basically plug into an audio jack. And it basically puts a small Bluetooth connection on there, and you can then use your favorite headphones, AirPods, my Bose headphones, whatever, without Bluetooth having to be built in. And why AirFly in particular is somehow so cool: because it has two special features that keep helping me. For one, I can pair several headphones to it.
That means, if my wife, my daughter, whoever, want to watch along on the plane, they can basically plug in their headphones too, and I basically have something like a splitter in there as well. And you can take the same device and use it not only as a transmitter for Bluetooth but also as a receiver. That means, when you've arrived on vacation and you've got a rental car that maybe has no Bluetooth either, and you stick the thing into the headphone jack or the AUX input of the car, you can connect your phone to it the other way around, and suddenly you've got a hands-free setup and can listen to your music and what have you. So it's simply a super small, practical device. It's always in my travel bag, and it's actually in use on basically every vacation too. Unfortunately, it's a bit expensive; I think it costs around sixty or seventy euros. There's also a smaller variant of it that's a bit cheaper, if you don't need all the features, but either way. I can only recommend it; it has saved me a lot of vacation time, because it suddenly lets you sit on a plane with noise cancelling and still watch a cool movie.
+Garrelt
+Yes. Their lineup is quite interesting. They have a Pro for fifty-five euros and a Duo, which does everything the Pro can plus more battery life, for forty-eight euros.
+Jan
+And there's also an SE that lacks some feature, which is again a bit cheaper. But yes, I bought the Pro, I don't know, five years ago or so, or whenever it came out. I've been very, very happy with it up to now. Back then I was quite happy because it was the first one that also had USB-C charging, and I've been at it for quite a while, converting my whole travel equipment to USB-C so I don't have to travel around with weird adapters anymore and only carry one kind of cable with me. That's very liberating too.
Yes.
+Tobias Müller
+I feel that about the connectors. That's almost the biggest argument of all.
+Jan
+Yes, back when I bought my iPhone 15 it was like: camera, action button, completely irrelevant. Just give me a USB-C port.
+Till Schneider
+Finally standardized, and I can throw Lightning away, yes.
+Jan
+Yes, exactly. Cool. Then, Till, Tobias, thanks for your time. Thanks for being here. Thanks for telling us so much about low-code and no-code and their pros and cons. I think it was super interesting for everyone out there who listened in too. Thanks, Garrelt, for your time. Thanks to everyone out there for listening. If you have questions, comments, criticism, whatever, feel free to send an email to podcast@programmier.bar, use the contact form on the website, or leave comments on YouTube, Spotify, or any other platform. We always read along diligently, and we're always happy about reviews of the podcast, on iTunes for example, if you listen to us there, found us there, however it may be. Otherwise there's not much left to say, except that we'll hear each other again here soon. Bye-bye. Bye.
diff --git a/apps/memoro/apps/landing/context/press/suedkurier-artikel-2024.md b/apps/memoro/apps/landing/context/press/suedkurier-artikel-2024.md
new file mode 100644
index 000000000..06e96e235
--- /dev/null
+++ b/apps/memoro/apps/landing/context/press/suedkurier-artikel-2024.md
@@ -0,0 +1,27 @@
+An app that saves time
+
+Konstanz founders get Memoro off the ground
+With it, they win a well-endowed scholarship
+
+BY JÖRG-PETER RAU
+joerg-peter.rau@suedkurier.de
+
+Konstanz – The meeting lasts a good half hour, and there is no room for small talk. Where does the project stand? What are the next steps? And what did we agree on last time?
It is a situation that probably occurs hundreds of times every day in Konstanz alone. A visit from the nursing service also has to stay focused. Every act of assistance, every procedure comes with a budget of minutes. And then it all has to be meticulously documented. But during the time spent filling out forms, the caregiver cannot do what is actually their most important task: caring.
+
+Two of many examples in which artificial intelligence (AI) is not a threat but practically a promise. In the first case, it produces meeting minutes: brief enough that they still get read, yet detailed enough that nothing important is missing. In the second case, the caregiver dictates the services provided into the phone, and an app automatically enters the information into the right form fields. If necessary, it also translates into German right away, in case the app user is more comfortable with another language.
+
+A distant dream? No, say Tobias Müller and Till Schneider. Because the app they are just now bringing to market can do exactly that. It is called Memoro and is advertised like this: "Never forget a conversation again. Memoro remembers for you."
+
+The founders have already planned a third use case: tradespeople, who are likewise required to document certain work steps, from electrical installation to chimney inspection.
+
+What the two young Konstanz founders are launching has convinced others as well: they have just won the Mindelsee scholarship of the initiative "Unternehmer für Gründer" (UfG), endowed with up to 100,000 euros. Together with other funding programs, this creates the basis for making Memoro really big.
+
+Müller and Schneider had already presented their plans for discussion at the founding and ideas competition "Hack and Harvest" in 2023.
Siegfried Wagner of UfG, the two founders say, then helped enormously as a mentor in turning an idea into a working app.
+
+Memoro primarily uses existing AI technologies and software solutions, but makes them accessible through a very simple user interface. "And of course it is also compatible with the General Data Protection Regulation," Schneider emphasizes; after all, depending on the use case, patient or customer data is involved.
+
+In the coming months, the founders want to develop their app further and, above all, win users. Because alongside the idea, the business model has to work too. Tobias Müller says they want to offer subscriptions or prepaid time quotas. Memoro could also be integrated into other environments, so that all kinds of business customers can be addressed in addition to end users.
+
+Whether Memoro will soon be running on the phones of many tradespeople, caregivers, or managers is nevertheless not certain. Every founding also carries the risk that the market will not accept it. Müller and Schneider know what they are talking about, because Memoro is not their first start-up. But winning the Mindelsee scholarship has strengthened their confidence, they emphasize.
+
+The two founders, Schneider a media designer by trade, Müller a software developer, do a simple calculation: a caregiver has to spend an average of eight hours per week on documentation. Memoro could cut that in half.
If a good part of this saved time ends up with the patients and another part with Memoro in the form of usage fees, "then this could be a great success."
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/prices.md b/apps/memoro/apps/landing/context/prices.md
new file mode 100644
index 000000000..85e47d6fd
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prices.md
@@ -0,0 +1,426 @@
+{
+  "subscriptions": [
+    {
+      "id": "free",
+      "name": "Mana Tropfen",
+      "nameEn": "Mana Drop",
+      "nameIt": "Mana Goccia",
+      "price": 0,
+      "priceUnit": "",
+      "initialMana": 150,
+      "dailyMana": 5,
+      "maxMana": 150,
+      "canGiftMana": false,
+      "popular": false,
+      "billingCycle": "monthly",
+      "available": true
+    },
+    {
+      "id": "Mana_Stream_Small_v1",
+      "name": "Mana Fluss",
+      "nameEn": "Mana Stream",
+      "nameIt": "Mana Corrente",
+      "price": 5.99,
+      "priceUnit": "/ Monat",
+      "initialMana": 600,
+      "dailyMana": 20,
+      "maxMana": 1200,
+      "canGiftMana": true,
+      "popular": false,
+      "billingCycle": "monthly",
+      "available": true
+    },
+    {
+      "id": "Mana_Stream_Small_Yearly_v1",
+      "name": "Mana Fluss",
+      "nameEn": "Mana Stream",
+      "nameIt": "Mana Corrente",
+      "price": 47.99,
+      "priceUnit": "/ Jahr",
+      "priceBreakdown": "(entspricht 3,99€ / Monat, 33% Rabatt)",
+      "initialMana": 600,
+      "dailyMana": 20,
+      "maxMana": 1200,
+      "canGiftMana": true,
+      "popular": false,
+      "billingCycle": "yearly",
+      "monthlyEquivalent": 3.99,
+      "available": true
+    },
+    {
+      "id": "Mana_Stream_Medium_v1",
+      "name": "Mana Strom",
+      "nameEn": "Mana River",
+      "nameIt": "Mana Fiume",
+      "price": 14.99,
+      "priceUnit": "/ Monat",
+      "initialMana": 1500,
+      "dailyMana": 50,
+      "maxMana": 3000,
+      "canGiftMana": true,
+      "popular": false,
+      "billingCycle": "monthly",
+      "available": true
+    },
+    {
+      "id": "Mana_Stream_Medium_Yearly_v1",
+      "name": "Mana Strom",
+      "nameEn": "Mana River",
+      "nameIt": "Mana Fiume",
+      "price": 119.99,
+      "priceUnit": "/ Jahr",
+      "priceBreakdown": "(entspricht 9,99€ / 
Monat, 33% Rabatt)", + "initialMana": 1500, + "dailyMana": 50, + "maxMana": 3000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 9.99, + "available": true + }, + { + "id": "Mana_Stream_Large_v1", + "name": "Mana See", + "nameEn": "Mana Lake", + "nameIt": "Mana Lago", + "price": 29.99, + "priceUnit": "/ Monat", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": true + }, + { + "id": "Mana_Stream_Large_Yearly_v1", + "name": "Mana See", + "nameEn": "Mana Lake", + "nameIt": "Mana Lago", + "price": 239.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 19,99€ / Monat, 33% Rabatt)", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 19.99, + "available": true + }, + { + "id": "Mana_Stream_Giant_v1", + "name": "Mana Meer", + "nameEn": "Mana Ocean", + "nameIt": "Mana Mare", + "price": 49.99, + "priceUnit": "/ Monat", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": true + }, + { + "id": "Mana_Stream_Giant_Yearly_v1", + "name": "Mana Meer", + "nameEn": "Mana Ocean", + "nameIt": "Mana Mare", + "price": 399.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 33,33€ / Monat, 33% Rabatt)", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 33.33, + "available": true + } + ], + "legacySubscriptions": [ + { + "id": "Mini_1m_v1", + "name": "Mini", + "nameEn": "Mini", + "nameIt": "Mini", + "price": 2.99, + "priceUnit": "/ Monat", + "initialMana": 600, + "dailyMana": 20, + "maxMana": 1200, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + 
"id": "Plus_7E_1m_v1", + "name": "Plus", + "nameEn": "Plus", + "nameIt": "Plus", + "price": 7.99, + "priceUnit": "/ Monat", + "initialMana": 1500, + "dailyMana": 50, + "maxMana": 3000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "Pro_23E_1m_v1", + "name": "Pro", + "nameEn": "Pro", + "nameIt": "Pro", + "price": 23.99, + "priceUnit": "/ Monat", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "Ultra_47E_1m_v1", + "name": "Ultra", + "nameEn": "Ultra", + "nameIt": "Ultra", + "price": 47.99, + "priceUnit": "/ Monat", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "Mini_1y_v1", + "name": "Mini", + "nameEn": "Mini", + "nameIt": "Mini", + "price": 29.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 2,50€ / Monat)", + "initialMana": 600, + "dailyMana": 20, + "maxMana": 1200, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 2.50, + "available": false, + "isLegacy": true + }, + { + "id": "Plus_70E_1y_v1", + "name": "Plus", + "nameEn": "Plus", + "nameIt": "Plus", + "price": 79.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 6,67€ / Monat)", + "initialMana": 1500, + "dailyMana": 50, + "maxMana": 3000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 6.67, + "available": false, + "isLegacy": true + }, + { + "id": "Pro_230E_1y_v1", + "name": "Pro", + "nameEn": "Pro", + "nameIt": "Pro", + "price": 239.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 20,00€ / Monat)", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + 
"billingCycle": "yearly", + "monthlyEquivalent": 20.00, + "available": false, + "isLegacy": true + }, + { + "id": "Ultra_470E_1y_v1", + "name": "Ultra", + "nameEn": "Ultra", + "nameIt": "Ultra", + "price": 479.99, + "priceUnit": "/ Jahr", + "priceBreakdown": "(entspricht 40,00€ / Monat)", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 40.00, + "available": false, + "isLegacy": true + }, + { + "id": "rc_plus_monthly_8e_play:plus-monthly-8e-autorenewing", + "name": "Plus (Android)", + "nameEn": "Plus (Android)", + "nameIt": "Plus (Android)", + "price": 7.99, + "priceUnit": "/ month", + "initialMana": 1500, + "dailyMana": 50, + "maxMana": 3000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "rc_pro_monthly_23e_play:rc-pro-monthly-8e-play-renewel", + "name": "Pro (Android)", + "nameEn": "Pro (Android)", + "nameIt": "Pro (Android)", + "price": 23.99, + "priceUnit": "/ month", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "rc_ultra_monthly_play:rc-ultra-monthly-play", + "name": "Ultra (Android)", + "nameEn": "Ultra (Android)", + "nameIt": "Ultra (Android)", + "price": 47.99, + "priceUnit": "/ month", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "monthly", + "available": false, + "isLegacy": true + }, + { + "id": "rc_plus_yearly_80e_play:rc-plus-yearly-80e-play-renewel", + "name": "Plus (Android)", + "nameEn": "Plus (Android)", + "nameIt": "Plus (Android)", + "price": 79.99, + "priceUnit": "/ year", + "priceBreakdown": "(equals 6.67€ / month)", + "initialMana": 1500, + "dailyMana": 50, + "maxMana": 3000, + "canGiftMana": true, + "popular": false, + 
"billingCycle": "yearly", + "monthlyEquivalent": 6.67, + "available": false, + "isLegacy": true + }, + { + "id": "rc_pro_yearly_229e_play:rc-pro-yearly-229e-play-renewel", + "name": "Pro (Android)", + "nameEn": "Pro (Android)", + "nameIt": "Pro (Android)", + "price": 239.99, + "priceUnit": "/ year", + "priceBreakdown": "(equals 20.00€ / month)", + "initialMana": 3000, + "dailyMana": 100, + "maxMana": 6000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 20.00, + "available": false, + "isLegacy": true + }, + { + "id": "rc_ultra_yearly_play:rc-ultra-yearly-play", + "name": "Ultra (Android)", + "nameEn": "Ultra (Android)", + "nameIt": "Ultra (Android)", + "price": 479.99, + "priceUnit": "/ year", + "priceBreakdown": "(equals 40.00€ / month)", + "initialMana": 5000, + "dailyMana": 200, + "maxMana": 10000, + "canGiftMana": true, + "popular": false, + "billingCycle": "yearly", + "monthlyEquivalent": 40.00, + "available": false, + "isLegacy": true + } + ], + "packages": [ + { + "id": "Mana_Potion_Small_v1", + "name": "Kleiner Mana Trank", + "nameEn": "Small Mana Potion", + "nameIt": "Piccola Pozione di Mana", + "manaAmount": 350, + "price": 4.99, + "popular": false + }, + { + "id": "Mana_Potion_Medium_v1", + "name": "Mittlerer Mana Trank", + "nameEn": "Medium Mana Potion", + "nameIt": "Media Pozione di Mana", + "manaAmount": 700, + "price": 9.99, + "popular": false + }, + { + "id": "Mana_Potion_Large_v1", + "name": "Großer Mana Trank", + "nameEn": "Large Mana Potion", + "nameIt": "Grande Pozione di Mana", + "manaAmount": 1400, + "price": 19.99, + "popular": false + }, + { + "id": "Mana_Potion_Giant_v2", + "name": "Riesiger Mana Trank", + "nameEn": "Giant Mana Potion", + "nameIt": "Pozione di Mana Gigante", + "manaAmount": 2800, + "price": 39.99, + "popular": false + } + ] +} + diff --git a/apps/memoro/apps/landing/context/prompts/SORT-ORDER-IMPLEMENTATION.md 
b/apps/memoro/apps/landing/context/prompts/SORT-ORDER-IMPLEMENTATION.md
new file mode 100644
index 000000000..a20021a63
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prompts/SORT-ORDER-IMPLEMENTATION.md
@@ -0,0 +1,109 @@
+# Sort Order Implementation for Blueprint Prompts
+
+## Overview
+This document describes the implementation of the `sort_order` functionality that controls the order of memories generated by blueprints.
+
+## Problem
+Memories were displayed in a random or unwanted order. The short summary appeared last, even though it should come first as an overview.
+
+## Solution
+Add a `sort_order` column to the `prompt_blueprints` table that explicitly controls the display order of the generated memories.
+
+## Technical Details
+
+### New Column
+```sql
+ALTER TABLE prompt_blueprints
+ADD COLUMN sort_order INTEGER DEFAULT 999;
+```
+
+- **Type**: INTEGER
+- **Default**: 999 (for entries without an explicit order)
+- **Logic**: lower numbers are displayed first
+
+### Default Blueprint Ordering
+
+| Position | Memory type | sort_order | Rationale |
+|----------|-------------|------------|-----------|
+| 1 | Short summary | 1 | Quick overview for the user |
+| 2 | Tasks | 2 | Concrete action items |
+| 3 | Detailed summary | 3 | Detailed information on demand |
+
+## Backend Implementation
+
+### Query for Memory Generation
+```sql
+SELECT pb.prompt_id, p.prompt_text, p.memory_title
+FROM prompt_blueprints pb
+JOIN prompts p ON pb.prompt_id = p.id
+WHERE pb.blueprint_id = $1
+ORDER BY pb.sort_order ASC;
+```
+
+### Edge Function Changes
+The Edge Functions must return the memories in the order defined by `sort_order`:
+
+```javascript
+// In the blueprint Edge Function
+const promptsQuery = await supabaseClient
+  .from('prompt_blueprints')
+  .select(`
+    prompt_id,
+    sort_order,
+    prompts!inner(
+      id,
+      prompt_text,
+      memory_title
+    )
+  `)
+  .eq('blueprint_id', blueprintId)
+  .order('sort_order', { ascending: true });
+```
+
+## Frontend Implementation
+
+### Memory Display Component
+```typescript
+// The memories should already arrive from the backend in the correct order.
+// If they do not, they can be re-sorted in the frontend:
+
+const sortedMemories = [...memories].sort((a, b) => {
+  // Use sort_order when both are set; compare against null/undefined so a
+  // legitimate sort_order of 0 is not skipped
+  if (a.sort_order != null && b.sort_order != null) {
+    return a.sort_order - b.sort_order;
+  }
+  // Fall back to created_at; getTime() yields a number (Date - Date is a type error in TypeScript)
+  return new Date(a.created_at).getTime() - new Date(b.created_at).getTime();
+});
+```
+
+## Migration for Existing Data
+
+All existing blueprint-prompt links automatically receive `sort_order = 999`. The most important blueprints were set explicitly:
+
+- **Standard analysis**: order 1-2-3
+- **Meeting analysis**: order 1-2-3
+- **Feedback**: order 1-2-3
+
+## Benefits of This Solution
+
+1. **Explicit control**: full control over the display order
+2. **Flexibility**: each blueprint can have its own ordering
+3. **Extensibility**: new prompts can easily be slotted in
+4. **Performance**: an index on (blueprint_id, sort_order) enables fast queries
+5. **Backward compatibility**: default value 999 for entries without an explicit order
+
+## Testing
+
+After the migration, verify:
+
+1. ✅ Create a new memo recording with the standard blueprint
+2. ✅ Check that the short summary appears at the top
+3. ✅ Check that the order stays consistent
+4.
✅ Test other blueprints (Meeting, Feedback)
+
+## Future Enhancements
+
+- The blueprint editor UI could offer drag & drop for ordering
+- Per-user ordering per blueprint is possible
+- Conditional sorting based on memo type or length
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/prompts/aufgaben-prompt.md b/apps/memoro/apps/landing/context/prompts/aufgaben-prompt.md
new file mode 100644
index 000000000..2ccd2a893
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prompts/aufgaben-prompt.md
@@ -0,0 +1,20 @@
+# Aufgaben & Termine Prompt
+
+## Prompt ID
+`7a6cac9a-5a34-4fe5-a8f6-23f8165b0e48`
+
+## Prompt Text (copy directly into Supabase)
+```json
+{"de": "Bitte lies das folgende Transkript durch und extrahiere: 1) AUFGABEN: Alle erwähnten Aufgaben, Aktionspunkte und nächsten Schritte. 2) TERMINE: Alle genannten Termine, Meetings, Deadlines und zeitlichen Vereinbarungen. Ordne beide Kategorien nach Dringlichkeit (hoch/mittel/niedrig). Erfasse für Aufgaben: Beschreibung, verantwortliche Person (falls genannt), Deadline, Kontext und erwartete Ergebnisse. Erfasse für Termine: Datum/Uhrzeit, Art des Termins, Teilnehmer (falls genannt), Ort/Format (falls erwähnt) und Zweck. Markiere unklare Punkte mit [UNKLAR]. Verwende eine strukturierte Listenform ohne zusätzliche Kommentare. Hier das Transkript:", "en": "Please read through the following transcript and extract: 1) TASKS: All mentioned tasks, action items, and next steps. 2) APPOINTMENTS: All mentioned appointments, meetings, deadlines and time agreements. Order both categories by urgency (high/medium/low). For tasks capture: description, responsible person (if mentioned), deadline, context and expected results. For appointments capture: date/time, type of appointment, participants (if mentioned), location/format (if mentioned) and purpose. Mark unclear points with [UNCLEAR]. Use a structured list format without additional comments.
Here is the transcript:", "it": "Per favore leggi la seguente trascrizione ed estrai: 1) COMPITI: Tutti i compiti menzionati, punti d'azione e prossimi passi. 2) APPUNTAMENTI: Tutti gli appuntamenti menzionati, riunioni, scadenze e accordi temporali. Ordina entrambe le categorie per urgenza (alta/media/bassa). Per i compiti cattura: descrizione, persona responsabile (se menzionata), scadenza, contesto e risultati attesi. Per gli appuntamenti cattura: data/ora, tipo di appuntamento, partecipanti (se menzionati), luogo/formato (se menzionato) e scopo. Contrassegna i punti poco chiari con [NON CHIARO]. Usa un formato di lista strutturato senza commenti aggiuntivi. Ecco la trascrizione:", "fr": "Veuillez lire la transcription suivante et extraire : 1) TÂCHES : Toutes les tâches mentionnées, points d'action et prochaines étapes. 2) RENDEZ-VOUS : Tous les rendez-vous mentionnés, réunions, échéances et accords temporels. Classez les deux catégories par urgence (élevée/moyenne/faible). Pour les tâches, capturez : description, personne responsable (si mentionnée), échéance, contexte et résultats attendus. Pour les rendez-vous, capturez : date/heure, type de rendez-vous, participants (si mentionnés), lieu/format (si mentionné) et objectif. Marquez les points peu clairs avec [PEU CLAIR]. Utilisez un format de liste structuré sans commentaires supplémentaires. Voici la transcription :", "es": "Por favor, lee la siguiente transcripción y extrae: 1) TAREAS: Todas las tareas mencionadas, puntos de acción y próximos pasos. 2) CITAS: Todas las citas mencionadas, reuniones, plazos y acuerdos temporales. Ordena ambas categorías por urgencia (alta/media/baja). Para las tareas captura: descripción, persona responsable (si se menciona), fecha límite, contexto y resultados esperados. Para las citas captura: fecha/hora, tipo de cita, participantes (si se mencionan), lugar/formato (si se menciona) y propósito. Marca los puntos poco claros con [POCO CLARO]. 
Usa un formato de lista estructurado sin comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Aufgaben & Termine", "en": "Tasks & Appointments", "it": "Compiti e Appuntamenti", "fr": "Tâches et Rendez-vous", "es": "Tareas y Citas"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Extrahiert alle im Gespräch erwähnten Aufgaben, Aktionspunkte, nächsten Schritte sowie Termine, Meetings und zeitliche Vereinbarungen. Erfasst Details wie Kontext, Verantwortlichkeiten, Deadlines und Teilnehmer.", "en": "Extracts all tasks, action items, next steps as well as appointments, meetings and time agreements mentioned in the conversation. Captures details such as context, responsibilities, deadlines and participants.", "it": "Estrae tutti i compiti, punti d'azione, prossimi passi così come appuntamenti, riunioni e accordi temporali menzionati nella conversazione. Cattura dettagli come contesto, responsabilità, scadenze e partecipanti.", "fr": "Extrait toutes les tâches, points d'action, prochaines étapes ainsi que les rendez-vous, réunions et accords temporels mentionnés dans la conversation. Capture des détails tels que le contexte, les responsabilités, les échéances et les participants.", "es": "Extrae todas las tareas, puntos de acción, próximos pasos, así como citas, reuniones y acuerdos temporales mencionados en la conversación. 
Captura detalles como contexto, responsabilidades, plazos y participantes."} +``` + diff --git a/apps/memoro/apps/landing/context/prompts/ausfuehrliche-zusammenfassung-prompt.md b/apps/memoro/apps/landing/context/prompts/ausfuehrliche-zusammenfassung-prompt.md new file mode 100644 index 000000000..b9ad22f54 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/ausfuehrliche-zusammenfassung-prompt.md @@ -0,0 +1,19 @@ +# Ausführliche Zusammenfassung Prompt + +## Prompt ID +`4370cb68-d676-4b93-8afd-2fb7c4ad78c4` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Erstelle eine ausführliche Zusammenfassung des folgenden Transkripts (ca. 30-50% der Originallänge). Strukturiere die Zusammenfassung wie folgt: 1. Überblick (Kernthema in 2-3 Sätzen), 2. Hauptthemen (die wichtigsten diskutierten Punkte), 3. Details (vorgebrachte Argumente und relevante Beispiele), 4. Fazit (zentrale Erkenntnisse). Verzichte auf zusätzliche Kommentare und fokussiere dich auf die reine Inhaltswiedergabe. Hier das Transkript:", "en": "Create a detailed summary of the following transcript (approx. 30-50% of the original length). Structure the summary as follows: 1. Overview (core topic in 2-3 sentences), 2. Main topics (the most important points discussed), 3. Details (arguments presented and relevant examples), 4. Conclusion (key insights). Avoid additional comments and focus on pure content reproduction. Here is the transcript:", "it": "Crea un riassunto dettagliato della seguente trascrizione (circa 30-50% della lunghezza originale). Struttura il riassunto come segue: 1. Panoramica (tema centrale in 2-3 frasi), 2. Temi principali (i punti più importanti discussi), 3. Dettagli (argomenti presentati ed esempi rilevanti), 4. Conclusione (intuizioni chiave). Evita commenti aggiuntivi e concentrati sulla pura riproduzione del contenuto. Ecco la trascrizione:", "fr": "Créez un résumé détaillé de la transcription suivante (environ 30-50% de la longueur originale). 
Structurez le résumé comme suit : 1. Aperçu (thème central en 2-3 phrases), 2. Thèmes principaux (les points les plus importants discutés), 3. Détails (arguments présentés et exemples pertinents), 4. Conclusion (idées clés). Évitez les commentaires supplémentaires et concentrez-vous sur la pure reproduction du contenu. Voici la transcription :", "es": "Crea un resumen detallado de la siguiente transcripción (aprox. 30-50% de la longitud original). Estructura el resumen de la siguiente manera: 1. Visión general (tema central en 2-3 oraciones), 2. Temas principales (los puntos más importantes discutidos), 3. Detalles (argumentos presentados y ejemplos relevantes), 4. Conclusión (ideas clave). Evita comentarios adicionales y concéntrate en la pura reproducción del contenido. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Ausführliche Zusammenfassung", "en": "Detailed Summary", "it": "Riassunto Dettagliato", "fr": "Résumé Détaillé", "es": "Resumen Detallado"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Gibt eine detailliertere Wiedergabe der Inhalte, inklusive wichtiger Argumente, Beispiele und Kontextinformationen. Geeignet für ein tiefergehendes Verständnis ohne das gesamte Transkript lesen zu müssen.", "en": "Provides a more detailed reproduction of the content, including important arguments, examples and contextual information. Suitable for a deeper understanding without having to read the entire transcript.", "it": "Fornisce una riproduzione più dettagliata del contenuto, inclusi argomenti importanti, esempi e informazioni contestuali. Adatto per una comprensione più profonda senza dover leggere l'intera trascrizione.", "fr": "Fournit une reproduction plus détaillée du contenu, incluant des arguments importants, des exemples et des informations contextuelles. 
Convient pour une compréhension approfondie sans avoir à lire l'intégralité de la transcription.", "es": "Proporciona una reproducción más detallada del contenido, incluyendo argumentos importantes, ejemplos e información contextual. Adecuado para una comprensión más profunda sin tener que leer toda la transcripción."} +``` \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/prompts/beantwortete-fragen-prompt.md b/apps/memoro/apps/landing/context/prompts/beantwortete-fragen-prompt.md new file mode 100644 index 000000000..f52f90f60 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/beantwortete-fragen-prompt.md @@ -0,0 +1,19 @@ +# Beantwortete Fragen & Antworten Prompt + +## Prompt ID +`47ce3340-e8c6-437c-928d-854c55589491` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Identifiziere alle Fragen aus dem folgenden Transkript, die gestellt und beantwortet wurden. Verwende folgendes nummeriertes Format: 1. FRAGE: [Originalfrage mit relevantem Kontext] ANTWORT: [Vollständige Antwort] STATUS: [Direkt beantwortet/Indirekt beantwortet/Teilweise beantwortet]. Gib bei jeder Frage-Antwort-Kombination den relevanten Kontext mit an. Verzichte auf zusätzliche Kommentare. Hier das Transkript:", "en": "Identify all questions from the following transcript that were asked and answered. Use the following numbered format: 1. QUESTION: [Original question with relevant context] ANSWER: [Complete answer] STATUS: [Directly answered/Indirectly answered/Partially answered]. Include relevant context for each question-answer combination. Avoid additional comments. Here is the transcript:", "it": "Identifica tutte le domande dalla seguente trascrizione che sono state poste e hanno ricevuto risposta. Usa il seguente formato numerato: 1. DOMANDA: [Domanda originale con contesto rilevante] RISPOSTA: [Risposta completa] STATO: [Risposta diretta/Risposta indiretta/Risposta parziale]. 
Includi il contesto rilevante per ogni combinazione domanda-risposta. Evita commenti aggiuntivi. Ecco la trascrizione:", "fr": "Identifiez toutes les questions de la transcription suivante qui ont été posées et ont reçu une réponse. Utilisez le format numéroté suivant : 1. QUESTION : [Question originale avec contexte pertinent] RÉPONSE : [Réponse complète] STATUT : [Réponse directe/Réponse indirecte/Réponse partielle]. Incluez le contexte pertinent pour chaque combinaison question-réponse. Évitez les commentaires supplémentaires. Voici la transcription :", "es": "Identifica todas las preguntas de la siguiente transcripción que fueron formuladas y respondidas. Usa el siguiente formato numerado: 1. PREGUNTA: [Pregunta original con contexto relevante] RESPUESTA: [Respuesta completa] ESTADO: [Respuesta directa/Respuesta indirecta/Respuesta parcial]. Incluye el contexto relevante para cada combinación pregunta-respuesta. Evita comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Beantwortete Fragen & Antworten", "en": "Answered Questions & Answers", "it": "Domande e Risposte", "fr": "Questions et Réponses", "es": "Preguntas y Respuestas"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Extrahiert Fragen, die im Transkript gestellt und beantwortet wurden, und listet sie zusammen mit ihren jeweiligen Antworten auf.", "en": "Extracts questions that were asked and answered in the transcript, and lists them together with their respective answers.", "it": "Estrae le domande che sono state poste e hanno ricevuto risposta nella trascrizione, e le elenca insieme alle rispettive risposte.", "fr": "Extrait les questions qui ont été posées et ont reçu une réponse dans la transcription, et les liste avec leurs réponses respectives.", "es": "Extrae las preguntas que fueron formuladas y respondidas en la transcripción, y las enumera junto con sus respectivas respuestas."} +``` \ No 
newline at end of file diff --git a/apps/memoro/apps/landing/context/prompts/blogbeitrag-prompt.md b/apps/memoro/apps/landing/context/prompts/blogbeitrag-prompt.md new file mode 100644 index 000000000..a90a5b3f4 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/blogbeitrag-prompt.md @@ -0,0 +1,19 @@ +# Blogbeitrag Prompt + +## Prompt ID +`2c6a6e47-1d0c-441f-9449-b5d908bffba2` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Verwandle das folgende Transkript in einen gut strukturierten Blogbeitrag (800-1200 Wörter) für eine professionelle Zielgruppe. Der Beitrag sollte die Hauptthemen und wichtigsten Erkenntnisse detailliert behandeln. Formuliere eine SEO-optimierte Hauptüberschrift, verwende aussagekräftige Zwischenüberschriften zur Gliederung, schreibe eine einleitende Zusammenfassung (2-3 Sätze) sowie einen Schlussteil mit konkretem Handlungsaufruf. Integriere relevante Keywords natürlich im Text. Der Stil sollte informativ, lesefreundlich und ohne zusätzliche Kommentare sein. Hier das Transkript:", "en": "Transform the following transcript into a well-structured blog post (800-1200 words) for a professional audience. The post should cover the main topics and key insights in detail. Formulate an SEO-optimized main heading, use meaningful subheadings for structure, write an introductory summary (2-3 sentences) as well as a conclusion with a concrete call to action. Integrate relevant keywords naturally in the text. The style should be informative, reader-friendly and without additional comments. Here is the transcript:", "it": "Trasforma la seguente trascrizione in un post di blog ben strutturato (800-1200 parole) per un pubblico professionale. Il post dovrebbe coprire in dettaglio i temi principali e le intuizioni chiave. Formula un titolo principale ottimizzato per SEO, usa sottotitoli significativi per la struttura, scrivi un riassunto introduttivo (2-3 frasi) e una conclusione con un chiaro invito all'azione. 
Integra naturalmente le parole chiave rilevanti nel testo. Lo stile dovrebbe essere informativo, di facile lettura e senza commenti aggiuntivi. Ecco la trascrizione:", "fr": "Transformez la transcription suivante en un article de blog bien structuré (800-1200 mots) pour un public professionnel. L'article devrait couvrir en détail les thèmes principaux et les idées clés. Formulez un titre principal optimisé pour le SEO, utilisez des sous-titres significatifs pour la structure, écrivez un résumé introductif (2-3 phrases) ainsi qu'une conclusion avec un appel à l'action concret. Intégrez naturellement les mots-clés pertinents dans le texte. Le style doit être informatif, facile à lire et sans commentaires supplémentaires. Voici la transcription :", "es": "Transforma la siguiente transcripción en una publicación de blog bien estructurada (800-1200 palabras) para una audiencia profesional. La publicación debe cubrir en detalle los temas principales y las ideas clave. Formula un título principal optimizado para SEO, usa subtítulos significativos para la estructura, escribe un resumen introductorio (2-3 oraciones) así como una conclusión con una llamada a la acción concreta. Integra naturalmente las palabras clave relevantes en el texto. El estilo debe ser informativo, fácil de leer y sin comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Blogbeitrag", "en": "Blog Post", "it": "Post del Blog", "fr": "Article de Blog", "es": "Publicación de Blog"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Erstellt einen ausführlichen Blogartikel basierend auf den Inhalten des Transkripts. Der Artikel soll gut strukturiert sein, eine einleitende Überschrift, Zwischenüberschriften und einen abschliessenden Absatz enthalten.", "en": "Creates a detailed blog article based on the contents of the transcript. 
The article should be well structured, with an introductory heading, subheadings and a concluding paragraph.", "it": "Crea un articolo di blog dettagliato basato sui contenuti della trascrizione. L'articolo dovrebbe essere ben strutturato, con un titolo introduttivo, sottotitoli e un paragrafo conclusivo.", "fr": "Crée un article de blog détaillé basé sur le contenu de la transcription. L'article doit être bien structuré, avec un titre introductif, des sous-titres et un paragraphe de conclusion.", "es": "Crea un artículo de blog detallado basado en el contenido de la transcripción. El artículo debe estar bien estructurado, con un título introductorio, subtítulos y un párrafo de conclusión."}
+```
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/prompts/edge-functions-sort-order-update.md b/apps/memoro/apps/landing/context/prompts/edge-functions-sort-order-update.md
new file mode 100644
index 000000000..c3661f279
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prompts/edge-functions-sort-order-update.md
@@ -0,0 +1,149 @@
+# Edge Functions Update for sort_order
+
+## Affected Edge Functions:
+
+The following Edge Functions must be updated so that the `sort_order` from the `prompt_blueprints` table is passed on to the newly created memories:
+
+### 1. `/supabase/functions/blueprint/index.ts`
+
+**Current implementation (around line 410):**
+```typescript
+const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+  memo_id: memo_id,
+  title: memoryTitle || 'Blueprint-Antwort',
+  content: answer,
+  metadata: { ...
}
+}).select().single();
+```
+
+**New implementation:**
+```typescript
+// First fetch the sort_order from prompt_blueprints
+const { data: promptBlueprint } = await memoro_sb
+  .from('prompt_blueprints')
+  .select('sort_order')
+  .eq('blueprint_id', blueprint_id)
+  .eq('prompt_id', prompt.id)
+  .single();
+
+const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+  memo_id: memo_id,
+  title: memoryTitle || 'Blueprint-Antwort',
+  content: answer,
+  sort_order: promptBlueprint?.sort_order || 999, // newly added line!
+  metadata: { ... }
+}).select().single();
+```
+
+### 2. `/supabase/functions/auto-blueprint/index.ts`
+
+**Similar change around line 479 (with the same `prompt_blueprints` lookup as above):**
+```typescript
+const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+  memo_id: memo_id,
+  title: memoryTitle || 'Auto-Blueprint-Antwort',
+  content: answer,
+  sort_order: promptBlueprint?.sort_order || 999, // newly added line!
+  metadata: { ... }
+}).select().single();
+```
+
+### 3. Other Edge Functions (without a blueprint)
+
+Edge Functions that create memories without a blueprint should set a sensible default value:
+
+#### `/supabase/functions/create-memory/index.ts`
+```typescript
+const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+  memo_id: memo_id,
+  title: memoryTitle || 'Memory',
+  content: answer,
+  sort_order: 100, // default for manually created memories
+  metadata: { ... }
+}).select().single();
+```
+
+#### `/supabase/functions/question-memo/index.ts`
+```typescript
+const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+  memo_id: memo_id,
+  title: `Frage: ${question.substring(0, 50)}...`,
+  content: answer,
+  sort_order: 200, // questions appear after blueprint memories
+  metadata: { ...
}
+}).select().single();
+```
+
+#### `/supabase/functions/combine-memos/index.ts`
+```typescript
+const { error: memoryError } = await memoro_sb
+  .from('memories')
+  .insert({
+    ...memoryData,
+    sort_order: 300 // combined memos appear at the end
+  });
+```
+
+#### `/supabase/functions/translate/index.ts`
+```typescript
+const { error: memoryCreateError } = await memoro_sb.from('memories').insert({
+  memo_id: newMemoId,
+  title: translatedTitle,
+  content: translatedContent,
+  sort_order: memory.sort_order || 999, // keep the original memory's sort_order
+  metadata: { ... }
+});
+```
+
+## Optimized Blueprint Function Implementation
+
+For better performance, the prompts should be loaded together with their sort_order right at the start:
+
+```typescript
+// In blueprint/index.ts and auto-blueprint/index.ts
+
+// Load all prompts with their sort_order for this blueprint
+const { data: blueprintPrompts, error: blueprintPromptsError } = await memoro_sb
+  .from('prompt_blueprints')
+  .select(`
+    prompt_id,
+    sort_order,
+    prompts!inner(
+      id,
+      prompt_text,
+      memory_title
+    )
+  `)
+  .eq('blueprint_id', blueprint_id)
+  .order('sort_order', { ascending: true });
+
+// Then, when creating the memories:
+for (const blueprintPrompt of blueprintPrompts) {
+  // ... AI processing ...
+
+  const { data: newMemory, error: newMemoryError } = await memoro_sb.from('memories').insert({
+    memo_id: memo_id,
+    title: memoryTitle,
+    content: answer,
+    sort_order: blueprintPrompt.sort_order,
+    metadata: { ... }
+  }).select().single();
+}
+```
+
+## Sort Order Convention:
+
+| sort_order range | Category | Usage |
+|------------------|----------|-------|
+| 1-99 | Blueprint memories | Main analyses (summary, tasks, etc.)
|
+| 100-199 | Manual memories | Memories created by the user |
+| 200-299 | Question-answer | Memories from questions asked about memos |
+| 300-399 | Combined memos | Generated from multiple memos |
+| 999 | Default | Fallback for uncategorized memories |
+
+## Testing after Implementation:
+
+1. Create a new recording with the standard blueprint
+2. Check that the memories appear in the correct order
+3. Add a manual memory - it should appear after the blueprint memories
+4. Ask a question about a memo - it should appear after the manual memories
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/prompts/frontend-memory-sorting-patches.md b/apps/memoro/apps/landing/context/prompts/frontend-memory-sorting-patches.md
new file mode 100644
index 000000000..4fe075f30
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prompts/frontend-memory-sorting-patches.md
@@ -0,0 +1,133 @@
+# Frontend Patches for Memory Sorting
+
+## Files that need to be changed:
+
+### 1. `/features/memos/hooks/useMemoState.ts`
+
+**Current line 281:**
+```typescript
+.order('created_at', { ascending: false })
+```
+
+**Change to:**
+```typescript
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false })
+```
+
+### 2. `/features/memos/hooks/useOptimizedMemoData.ts`
+
+**Current line 95:**
+```typescript
+.order('created_at', { ascending: false })
+```
+
+**Change to:**
+```typescript
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false })
+```
+
+### 3. `/app/(protected)/(memo)/actions/memoActions.ts`
+
+**Current line 468:**
+```typescript
+.eq('memo_id', memoId)
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.eq('memo_id', memoId)
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+### 4.
`/app/(protected)/memories.tsx`
+
+**Current line 157:**
+```typescript
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+### 5. `/app/(protected)/(tabs)/index.tsx`
+
+**Current line 999:**
+```typescript
+.eq('memo_id', memo.id)
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.eq('memo_id', memo.id)
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+### 6. `/app/(protected)/(tabs)/memos.tsx`
+
+**Current line 603:**
+```typescript
+.eq('memo_id', memo.id)
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.eq('memo_id', memo.id)
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+### 7. `/components/molecules/MemoList.tsx`
+
+**Current line 550:**
+```typescript
+.eq('memo_id', memo.id)
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.eq('memo_id', memo.id)
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+### 8. `/components/molecules/MemoPreview.tsx`
+
+**Current line 306:**
+```typescript
+.eq('memo_id', memo.id)
+.order('created_at', { ascending: false });
+```
+
+**Change to:**
+```typescript
+.eq('memo_id', memo.id)
+.order('sort_order', { ascending: true })
+.order('created_at', { ascending: false });
+```
+
+## How the change works:
+
+The two-level ordering works as follows:
+1. **Primary by `sort_order` (ascending)**: memories with a lower sort_order appear first
+2. **Secondary by `created_at` (descending)**: if several memories share the same sort_order (e.g. 999 for old/manual memories), they are ordered by creation date
+
+## Testing:
+
+After the changes, verify:
+1. ✅ New memo recording with the standard blueprint
+2.
✅ Check that the order is correct:
+   - Short summary (sort_order: 1) - TOP
+   - Tasks (sort_order: 2) - MIDDLE
+   - Detailed summary (sort_order: 3) - BOTTOM
+3. ✅ Old memories without a sort_order should keep working
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/prompts/gesammelte-ideen-prompt.md b/apps/memoro/apps/landing/context/prompts/gesammelte-ideen-prompt.md
new file mode 100644
index 000000000..84a200ffa
--- /dev/null
+++ b/apps/memoro/apps/landing/context/prompts/gesammelte-ideen-prompt.md
@@ -0,0 +1,19 @@
+# Gesammelte Ideen & Vorschläge Prompt
+
+## Prompt ID
+`8cdc89a5-2f76-4d50-a93d-0c177c3e73ab`
+
+## Prompt Text (copy directly into Supabase)
+```json
+{"de": "Sammle alle im folgenden Transkript geäusserten neuen Ideen, kreativen Vorschläge oder Brainstorming-Punkte. Gruppiere sie nach Themenbereichen und markiere besonders innovative Ideen mit ⭐. Kategorisiere jede Idee nach Umsetzbarkeit: [KURZFRISTIG] (innerhalb 1 Monat), [MITTELFRISTIG] (1-6 Monate), [LANGFRISTIG] (über 6 Monate). Liste sie übersichtlich ohne zusätzliche Kommentare auf. Hier das Transkript:", "en": "Collect all new ideas, creative suggestions, or brainstorming points expressed in the following transcript. Group them by topic areas and mark particularly innovative ideas with ⭐. Categorize each idea by feasibility: [SHORT-TERM] (within 1 month), [MEDIUM-TERM] (1-6 months), [LONG-TERM] (over 6 months). List them clearly without additional comments. Here is the transcript:", "it": "Raccogli tutte le nuove idee, suggerimenti creativi o punti di brainstorming espressi nella seguente trascrizione. Raggruppali per aree tematiche e contrassegna le idee particolarmente innovative con ⭐. Categorizza ogni idea per fattibilità: [BREVE TERMINE] (entro 1 mese), [MEDIO TERMINE] (1-6 mesi), [LUNGO TERMINE] (oltre 6 mesi). Elencali chiaramente senza commenti aggiuntivi.
Ecco la trascrizione:", "fr": "Collectez toutes les nouvelles idées, suggestions créatives ou points de brainstorming exprimés dans la transcription suivante. Regroupez-les par domaines thématiques et marquez les idées particulièrement innovantes avec ⭐. Catégorisez chaque idée selon sa faisabilité : [COURT TERME] (dans 1 mois), [MOYEN TERME] (1-6 mois), [LONG TERME] (plus de 6 mois). Listez-les clairement sans commentaires supplémentaires. Voici la transcription :", "es": "Recopila todas las nuevas ideas, sugerencias creativas o puntos de lluvia de ideas expresados en la siguiente transcripción. Agrúpalas por áreas temáticas y marca las ideas particularmente innovadoras con ⭐. Categoriza cada idea por viabilidad: [CORTO PLAZO] (dentro de 1 mes), [MEDIO PLAZO] (1-6 meses), [LARGO PLAZO] (más de 6 meses). Listálas claramente sin comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Gesammelte Ideen & Vorschläge", "en": "Collected Ideas & Suggestions", "it": "Idee e Suggerimenti Raccolti", "fr": "Idées et Suggestions Collectées", "es": "Ideas y Sugerencias Recopiladas"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Listet alle im Gespräch geäusserten neuen Ideen, Brainstorming-Punkte oder konkreten Vorschläge auf.", "en": "Lists all new ideas, brainstorming points or concrete suggestions expressed in the conversation.", "it": "Elenca tutte le nuove idee, punti di brainstorming o suggerimenti concreti espressi nella conversazione.", "fr": "Liste toutes les nouvelles idées, points de brainstorming ou suggestions concrètes exprimés dans la conversation.", "es": "Enumera todas las nuevas ideas, puntos de lluvia de ideas o sugerencias concretas expresadas en la conversación."} +``` \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/prompts/kurzzusammenfassung-prompt.md b/apps/memoro/apps/landing/context/prompts/kurzzusammenfassung-prompt.md new file 
mode 100644 index 000000000..b81a14860 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/kurzzusammenfassung-prompt.md @@ -0,0 +1,19 @@ +# Kurzzusammenfassung Prompt + +## Prompt ID +`c4009bef-4504-4af7-86f5-f896a2412a0a` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Erstelle ein TLDR (Too Long; Didn't Read) des folgenden Transkripts. Fasse die maximal 3 wichtigsten handlungsrelevanten Kernaussagen in insgesamt 3-5 Sätzen zusammen. Fokussiere ausschließlich auf die absoluten Hauptpunkte, die für Entscheidungen oder nächste Schritte relevant sind. Verzichte auf zusätzliche Kommentare. Hier das Transkript:", "en": "Create a TLDR (Too Long; Didn't Read) of the following transcript. Summarize the most important action-relevant key messages (maximum 3) in a total of 3-5 sentences. Focus exclusively on the absolute main points that are relevant for decisions or next steps. Avoid additional comments. Here is the transcript:", "it": "Crea un TLDR (Troppo Lungo; Non L'ho Letto) della seguente trascrizione. Riassumi i messaggi chiave più importanti e rilevanti per l'azione (massimo 3) in un totale di 3-5 frasi. Concentrati esclusivamente sui punti principali assoluti che sono rilevanti per decisioni o passi successivi. Evita commenti aggiuntivi. Ecco la trascrizione:", "fr": "Créez un TLDR (Trop Long ; Pas Lu) de la transcription suivante. Résumez les messages clés les plus importants et pertinents pour l'action (3 au maximum) en 3-5 phrases au total. Concentrez-vous exclusivement sur les points principaux absolus qui sont pertinents pour les décisions ou les prochaines étapes. Évitez les commentaires supplémentaires. Voici la transcription :", "es": "Crea un TLDR (Demasiado Largo; No Lo Leí) de la siguiente transcripción. Resume los mensajes clave más importantes y relevantes para la acción (máximo 3) en un total de 3-5 oraciones. Concéntrate exclusivamente en los puntos principales absolutos que son relevantes para decisiones o próximos pasos.
Evita comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Kurzzusammenfassung", "en": "Executive Summary", "it": "Riassunto Esecutivo", "fr": "Résumé Exécutif", "es": "Resumen Ejecutivo"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Erstellt eine knappe Übersicht der wichtigsten Inhalte und Kernaussagen des Gesprächs oder Vortrags. Ideal für einen schnellen Überblick.", "en": "Creates a brief overview of the most important content and key statements of the conversation or presentation. Ideal for a quick overview.", "it": "Crea una panoramica concisa dei contenuti più importanti e delle dichiarazioni chiave della conversazione o presentazione. Ideale per una rapida panoramica.", "fr": "Crée un aperçu concis du contenu le plus important et des déclarations clés de la conversation ou de la présentation. Idéal pour un aperçu rapide.", "es": "Crea una visión general concisa del contenido más importante y las declaraciones clave de la conversación o presentación. Ideal para una visión rápida."} +``` diff --git a/apps/memoro/apps/landing/context/prompts/offene-fragen-prompt.md b/apps/memoro/apps/landing/context/prompts/offene-fragen-prompt.md new file mode 100644 index 000000000..572a81c33 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/offene-fragen-prompt.md @@ -0,0 +1,19 @@ +# Offene Fragen Prompt + +## Prompt ID +`c576e875-5a52-4f6a-abb7-0c62c945af78` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Liste alle wichtigen Fragen auf, die im folgenden Transkript gestellt wurden und unbeantwortet blieben. Sortiere sie nach Wichtigkeit/Dringlichkeit (HOCH/MITTEL/NIEDRIG). Gib für jede Frage kurzen Kontext, warum sie offen blieb und schlage konkrete nächste Schritte zur Klärung vor. Verwende folgendes Format: [PRIORITÄT] Frage: ... | Kontext: ... | Nächster Schritt: ... Verzichte auf zusätzliche Kommentare. 
Hier das Transkript:", "en": "List all important questions that were asked in the following transcript and remained unanswered. Sort them by importance/urgency (HIGH/MEDIUM/LOW). For each question, provide brief context on why it remained open and suggest concrete next steps for clarification. Use the following format: [PRIORITY] Question: ... | Context: ... | Next step: ... Avoid additional comments. Here is the transcript:", "it": "Elenca tutte le domande importanti che sono state poste nella seguente trascrizione e sono rimaste senza risposta. Ordinale per importanza/urgenza (ALTA/MEDIA/BASSA). Per ogni domanda, fornisci un breve contesto sul perché è rimasta aperta e suggerisci passi successivi concreti per il chiarimento. Usa il seguente formato: [PRIORITÀ] Domanda: ... | Contesto: ... | Prossimo passo: ... Evita commenti aggiuntivi. Ecco la trascrizione:", "fr": "Listez toutes les questions importantes qui ont été posées dans la transcription suivante et sont restées sans réponse. Triez-les par importance/urgence (ÉLEVÉE/MOYENNE/FAIBLE). Pour chaque question, fournissez un bref contexte expliquant pourquoi elle est restée ouverte et suggérez des prochaines étapes concrètes pour clarification. Utilisez le format suivant : [PRIORITÉ] Question : ... | Contexte : ... | Prochaine étape : ... Évitez les commentaires supplémentaires. Voici la transcription :", "es": "Enumera todas las preguntas importantes que se hicieron en la siguiente transcripción y quedaron sin respuesta. Ordénalas por importancia/urgencia (ALTA/MEDIA/BAJA). Para cada pregunta, proporciona un breve contexto sobre por qué quedó abierta y sugiere próximos pasos concretos para la aclaración. Usa el siguiente formato: [PRIORIDAD] Pregunta: ... | Contexto: ... | Próximo paso: ... Evita comentarios adicionales. 
Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Offene Fragen", "en": "Open Questions", "it": "Domande Aperte", "fr": "Questions Ouvertes", "es": "Preguntas Abiertas"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Identifiziert alle Fragen, die während des Gesprächs aufgeworfen, aber nicht oder nicht vollständig beantwortet wurden.", "en": "Identifies all questions that were raised during the conversation but were not answered or only partially answered.", "it": "Identifica tutte le domande che sono state sollevate durante la conversazione ma sono rimaste senza risposta o hanno ricevuto solo una risposta parziale.", "fr": "Identifie toutes les questions qui ont été soulevées pendant la conversation mais sont restées sans réponse ou n'ont reçu qu'une réponse partielle.", "es": "Identifica todas las preguntas que surgieron durante la conversación pero quedaron sin respuesta o solo fueron respondidas parcialmente."} +``` \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/prompts/social-media-posts-prompt.md b/apps/memoro/apps/landing/context/prompts/social-media-posts-prompt.md new file mode 100644 index 000000000..06dbfa633 --- /dev/null +++ b/apps/memoro/apps/landing/context/prompts/social-media-posts-prompt.md @@ -0,0 +1,19 @@ +# Social Media Posts Prompt + +## Prompt ID +`b2e39e0a-ec1f-4d0e-813d-f1a08493332b` + +## Prompt Text (direkt für Supabase kopierbar) +```json +{"de": "Erstelle basierend auf dem folgenden Transkript Social Media Posts für verschiedene Plattformen. Formatiere jeden Post plattformgerecht: 1) X/TWITTER: Max. 280 Zeichen, prägnant, 2-3 Hashtags. 2) LINKEDIN: Professionell, 150-300 Wörter, 5-8 Hashtags. 3) INSTAGRAM: Visuell ansprechend beschrieben, Emoji-Nutzung, 10-15 Hashtags. 4) FACEBOOK: Storytelling-Ansatz, 100-200 Wörter, 3-5 Hashtags. 5) TIKTOK: Video-Idee mit Hook, Ablauf und Trending-Hashtags. Verzichte auf zusätzliche Kommentare.
Hier das Transkript:", "en": "Create social media posts for different platforms based on the following transcript. Format each post to suit its platform: 1) X/TWITTER: Max. 280 characters, concise, 2-3 hashtags. 2) LINKEDIN: Professional, 150-300 words, 5-8 hashtags. 3) INSTAGRAM: Visually engaging description, emoji usage, 10-15 hashtags. 4) FACEBOOK: Storytelling approach, 100-200 words, 3-5 hashtags. 5) TIKTOK: Video idea with hook, sequence and trending hashtags. Avoid additional comments. Here is the transcript:", "it": "Crea post per social media per diverse piattaforme basati sulla seguente trascrizione. Formatta ogni post specificamente per la piattaforma: 1) X/TWITTER: Max. 280 caratteri, conciso, 2-3 hashtag. 2) LINKEDIN: Professionale, 150-300 parole, 5-8 hashtag. 3) INSTAGRAM: Descritto visivamente, uso di emoji, 10-15 hashtag. 4) FACEBOOK: Approccio narrativo, 100-200 parole, 3-5 hashtag. 5) TIKTOK: Idea video con hook, sequenza e hashtag di tendenza. Evita commenti aggiuntivi. Ecco la trascrizione:", "fr": "Créez des publications pour les réseaux sociaux pour différentes plateformes basées sur la transcription suivante. Formatez chaque publication spécifiquement pour la plateforme : 1) X/TWITTER : Max. 280 caractères, concis, 2-3 hashtags. 2) LINKEDIN : Professionnel, 150-300 mots, 5-8 hashtags. 3) INSTAGRAM : Description visuelle, utilisation d'emojis, 10-15 hashtags. 4) FACEBOOK : Approche narrative, 100-200 mots, 3-5 hashtags. 5) TIKTOK : Idée vidéo avec accroche, séquence et hashtags tendance. Évitez les commentaires supplémentaires. Voici la transcription :", "es": "Crea publicaciones para redes sociales para diferentes plataformas basadas en la siguiente transcripción. Formatea cada publicación específicamente para la plataforma: 1) X/TWITTER: Máx. 280 caracteres, conciso, 2-3 hashtags. 2) LINKEDIN: Profesional, 150-300 palabras, 5-8 hashtags. 3) INSTAGRAM: Descripción visual, uso de emojis, 10-15 hashtags.
4) FACEBOOK: Enfoque narrativo, 100-200 palabras, 3-5 hashtags. 5) TIKTOK: Idea de video con gancho, secuencia y hashtags de tendencia. Evita comentarios adicionales. Aquí está la transcripción:"} +``` + +## Memory Title (direkt für Supabase kopierbar) +```json +{"de": "Social Media Posts", "en": "Social Media Posts", "it": "Post per Social Media", "fr": "Publications Réseaux Sociaux", "es": "Publicaciones en Redes Sociales"} +``` + +## Description (direkt für Supabase kopierbar) +```json +{"de": "Generiert optimierte Social Media Posts für X/Twitter, LinkedIn, Instagram, Facebook und TikTok. Jeder Post ist plattformspezifisch formatiert mit passender Länge, Tonalität und Hashtag-Anzahl. Enthält konkrete Textvorschläge und bei TikTok Video-Konzepte.", "en": "Generates optimized social media posts for X/Twitter, LinkedIn, Instagram, Facebook and TikTok. Each post is platform-specifically formatted with appropriate length, tone and hashtag count. Includes concrete text suggestions and video concepts for TikTok.", "it": "Genera post ottimizzati per social media per X/Twitter, LinkedIn, Instagram, Facebook e TikTok. Ogni post è formattato specificamente per la piattaforma con lunghezza, tono e numero di hashtag appropriati. Include suggerimenti di testo concreti e concetti video per TikTok.", "fr": "Génère des publications optimisées pour les réseaux sociaux pour X/Twitter, LinkedIn, Instagram, Facebook et TikTok. Chaque publication est formatée spécifiquement pour la plateforme avec une longueur, un ton et un nombre de hashtags appropriés. Inclut des suggestions de texte concrètes et des concepts vidéo pour TikTok.", "es": "Genera publicaciones optimizadas para redes sociales para X/Twitter, LinkedIn, Instagram, Facebook y TikTok. Cada publicación está formateada específicamente para la plataforma con longitud, tono y cantidad de hashtags apropiados. 
Incluye sugerencias de texto concretas y conceptos de video para TikTok."} +``` \ No newline at end of file diff --git a/apps/memoro/apps/landing/context/team/Dennis-Bauer-LinkedIn.md b/apps/memoro/apps/landing/context/team/Dennis-Bauer-LinkedIn.md new file mode 100644 index 000000000..20325a200 --- /dev/null +++ b/apps/memoro/apps/landing/context/team/Dennis-Bauer-LinkedIn.md @@ -0,0 +1,206 @@
+Dennis Bauer
+Headline: "Wer immer tut was er schon kann bleibt immer das was er schon ist." (Henry Ford quote: "Whoever only ever does what he already can will always remain what he already is.")
+Vari-tech GmbH · ZHAW School of Management and Law
+Stockach, Baden-Württemberg, Germany · 500+ connections
+Mutual connections: Tobias Müller, Alex Vasileva and 19 others
+
+About
+"Machen ist wie wollen, nur viel krasser!" ("Doing is like wanting, only far more intense!")
+
+Services
+Project management • IT consulting • Filing • File management • Data entry • Technical support • Information management
+
+Activity (564 followers)
+
+Post (about 10 months ago): "Yesterday we closely followed the AV talk and our customers' way of approaching things. It is always interesting to look at a topic from the opposite side as well." Reposting Vari-tech GmbH (180 followers): "Day 3 of archivistica 2024. Once again there are interesting talks, for example 'Dealing with emotionally distressing holdings and discriminatory language', a topic that certainly concerns us as service providers, too. Support comes from our Recording Secretary."
+
+Post (about 10 months ago): "Day 2 at archivistica 2024 in Suhl. Memoro is also well received among archivists." (1 repost)
+
+Post (about 10 months ago), reposting Vari-tech GmbH: "91st Archivtag in Suhl. Meet us at stand 31."
+
+Experience
+
+Managing Partner, Vari-tech GmbH · Self-employed
+Jul 2021–present · 4 yrs 2 mos
+Digitization, consulting, concept development, scanning, data preparation, data conversion and archival indexing
+
+Workflow Developer, K&B GbR · Self-employed
+Oct 2019–present · 5 yrs 11 mos
+Radolfzell am Bodensee, Baden-Württemberg, Germany
+
+Manager COO, Zahnmedizin Zentrum Dr. Basset
+Jan 2019–Dec 2021 · 3 yrs
+Radolfzell am Bodensee
+
+GBL Gubler AG · 17 yrs 9 mos
+
+Head of Production and Member of the Executive Board
+Apr 2011–Sept 2018 · 7 yrs 6 mos
+Frauenfeld, Canton of Thurgau, Switzerland
+Weekly process and efficiency analyses
+Integration of employees with disabilities and awareness-raising among all staff
+Consultant for digital archiving, process analysis and new products
+Set-up and operation of remote systems on customer premises
+Business analyst in several projects, e.g. "Vecteur" for the Swiss Federal Archives
+Built up the software development department, which, alongside smaller programs, developed a new workflow management system for audio-visual data
+Expanded production into 4 specialist areas
+Introduced a workflow management system for image data
+
+Head of Production
+2008–Mar 2011 · 3 yrs 3 mos
+Weekly process and efficiency analyses
+Continuous process and workflow optimization
+Ongoing training of staff (operators, team leads and project leads)
+Helped bring the Eternity E105 laser film recorder to production readiness at the Felben-Wellhausen site; software development in cooperation with the IML at the University of Basel
+Contributed in several areas to bringing the Archivlaser film recorder to product readiness at the Fraunhofer Institute in Freiburg and the University of Stuttgart
+Member of various eCH and VSA working groups
+
+Project Manager
+2001–2008 · 7 yrs
+Exposure and finishing of large-format photos
+Expansion and process optimization of the digitization department
+Built up the metadata standards unit
+
+Fitter, ammdoppleb
+1995–2001 · 6 yrs
+Produced, transported and assembled trade-fair stands, from simple small booths to custom multi-storey stands, alone and in small teams across large parts of Europe. Tasks: carpentry, painting, foil and print processing.
+
+Education
+ZHAW School of Management and Law: HERMES 5 Foundation, 2017
+Berufsschulzentrum Radolfzell, 1992–1995: apprenticeship as a construction and furniture carpenter
+
+Skills
+Microsoft Office · Management (14 skills in total; endorsed by Johannes Paul Kauert, among others)
+
+Organizations
+VSA-AAS, member since Jan 2014. The Association of Swiss Archivists (VSA) represents archivists, records managers and information professionals in Switzerland. As the national professional association, it supports cooperation among professional archives and the goal of making access to archival holdings user-friendly.
+KOST/CECO, member of the "digital archiving" working group since Mar 2010.
+eCH expert group Digital Archiving: the standardization body that unites all actors in digital archiving (state archives, public administrations, software and service providers) and thus provides a framework for broadly supported standardization projects. More information is available on the eCH website and on the eCH share (for registered users); the group's past meetings are documented below.
diff --git a/apps/memoro/apps/landing/context/team/Dirk-Zimanky-LinkedIn.md b/apps/memoro/apps/landing/context/team/Dirk-Zimanky-LinkedIn.md new file mode 100644 index 000000000..89349298a --- /dev/null +++ b/apps/memoro/apps/landing/context/team/Dirk-Zimanky-LinkedIn.md @@ -0,0 +1,285 @@
+Dirk Zimanky
+Headline: Founder of adCura. We advise owners, investors and boards.
Experte in Electronic Manufacturing Services (EMS) + +edisconet + +University of Konstanz +Zürich Metropolitan Area Kontaktinfo +1.195 Follower:innen +500+ Kontakte + + +Alex Vasileva, Tobias Müller und 19 weitere gemeinsame KontakteAlex Vasileva, Tobias Müller und 19 weitere gemeinsame Kontakte + +Nachricht + +Ihre Serviceleistungen anzeigen + +Mehr + +Profil mit Premium verbessert +HighlightsHighlights +Unternehmenslogo +Dirk Zimanky folgt Ihrem Unternehmen auf LinkedIn +Dirk Zimanky folgt Ihrem Unternehmen auf LinkedIn +Dirk Zimanky kennt Ihre Marke und ist möglicherweise empfänglicher für eine Kontaktaufnahme.Dirk Zimanky kennt Ihre Marke und ist möglicherweise empfänglicher für eine Kontaktaufnahme. +Kostenlose Insights von LinkedIn Sales Navigator +Kostenlose Insights von LinkedIn Sales Navigator +Schalten Sie mehr Einblicke in Leads frei +Bessere Kontaktaufnahme mit vertriebsrelevanten Insights + +Sales Navigator für 0 CHF erneut testen +1 Probemonat mit Support rund um die Uhr. Sie können jederzeit kündigen. Sie erhalten 7 Tage vor Ablauf der Probeversion eine entsprechende Erinnerung. + +InfoInfo +Ich arbeite jeden Tag daran, Menschen und Unternehmen erfolgreich zu machen, die positiv auf unsere Umwelt und das sozioökonomische Leben einwirken und damit die Welt zu einem besseren Ort machen. + +Mit mehr als 30 Jahren Erfahrung in den Bereichen Business, Technologie und Marktentwicklung, insbesondere auf dem globalen Markt für Elektronik, habe ich folgende Kernkompetenzen: + +- Starke Expertise in den Bereichen Business, Technologie und Marktentwicklung, insbesondere auf dem globalen Markt für Elektronik. +- Menschen zu herausfordernden Zielen motivieren und sich selbst tragende Leistungseinheiten bilden. +- Aufbau langjähriger strategischer Partnerschaften und dauerhafter persönlicher Geschäfts-Beziehungen. +- Breites internationales Kontakt Netzwerk. 
+- Fähigkeit von Organisationsstrukturen auf lokaler und internationaler Ebene zu bewerten und zu optimieren. +- Sicher in multikulturellen und internationalen Vertragsverhandlungen. +- Vertraut mit der Führung und Geschäftsdynamik in Asien, Europa und den USA. +- Fusionen und Übernahmen auf Käufer- & Verkäuferseite. + +Meine Werte sind ethisches Verhalten, Vertrauen, ehrlicher Respekt, Zusammenarbeit, greifbare Innovation und Wertschätzung des Lebens.Ich arbeite jeden Tag daran, Menschen und Unternehmen erfolgreich zu machen, die positiv auf unsere Umwelt und das sozioökonomische Leben einwirken und damit die Welt zu einem besseren Ort machen. Mit mehr als 30 Jahren Erfahrung in den Bereichen Business, Technologie und Marktentwicklung, insbesondere auf dem globalen Markt für Elektronik, habe ich folgende Kernkompetenzen: - Starke Expertise in den Bereichen Business, Technologie und Marktentwicklung, insbesondere auf dem globalen Markt für Elektronik. - Menschen zu herausfordernden Zielen motivieren und sich selbst tragende Leistungseinheiten bilden. - Aufbau langjähriger strategischer Partnerschaften und dauerhafter persönlicher Geschäfts-Beziehungen. - Breites internationales Kontakt Netzwerk. - Fähigkeit von Organisationsstrukturen auf lokaler und internationaler Ebene zu bewerten und zu optimieren. - Sicher in multikulturellen und internationalen Vertragsverhandlungen. - Vertraut mit der Führung und Geschäftsdynamik in Asien, Europa und den USA. - Fusionen und Übernahmen auf Käufer- & Verkäuferseite. Meine Werte sind ethisches Verhalten, Vertrauen, ehrlicher Respekt, Zusammenarbeit, greifbare Innovation und Wertschätzung des Lebens.… mehr anzeigen +ServiceleistungenServiceleistungen +Advising founders, owners, investors & boards through: + +ACTIVE INVOLVEMENT +for founders and key members through business challenges, board work, business idea feasibility & strategy formulation. 
+ +PROFITABLE GROWTH +by implementing new dynamics, including strategy review, interim management, cultural & change management. + +MAXIMIZING THE VALUE +of investments through active board work, investment opportunities, add-on acquisitions, divestitures & more.Advising founders, owners, investors & boards through: ACTIVE INVOLVEMENT for founders and key members through business challenges, board work, business idea feasibility & strategy formulation. PROFITABLE GROWTH by implementing new dynamics, including strategy review, interim management, cultural & change management. MAXIMIZING THE VALUE of investments through active board work, investment opportunities, add-on acquisitions, divestitures & more.… mehr anzeigen +Veränderungsmanagement • Unternehmensberatung • Preispolitik • Strategische Planung • Führungskräftecoaching • VerhandlungsführungVeränderungsmanagement • Unternehmensberatung • Preispolitik • Strategische Planung • Führungskräftecoaching • Verhandlungsführung +Alle Serviceleistungen anzeigen +Im FokusIm Fokus +Bild +Bild +Bild für What Others Say +What Others SayWhat Others Say +https://adcura.comhttps://adcura.com +Bild +Bild +Bild für What Others Say +What Others SayWhat Others Say +https://adcura.comhttps://adcura.com +Bild +Bild +Bild für The 4 D's of Organizational Transformation +The 4 D's of Organizational TransformationThe 4 D's of Organizational Transformation +“I’m convinced that about half of what separates successful entrepreneurs from the non-successful ones is pure perseverance.” – Steve Jobs, Co-Founder of Apple.“I’m convinced that about half of what separates successful entrepreneurs from the non-successful ones is pure perseverance.” – Steve Jobs, Co-Founder of Apple. 
+Bild +Bild +Bild für 4 step roadmap to ensure business growth +4 step roadmap to ensure business growth4 step roadmap to ensure business growth +At adCura, we have developed a straight forward 4 step roadmap to ensure our customers’ strategic business growth that is designed to boost agility and flexibility in their organizations and strengthen their long-term strategies in this new global business environment. In our upcoming posts we will take a closer look on these steps and their impact. +https://adcura.comAt adCura, we have developed a straight forward 4 step roadmap to ensure our customers’ strategic business growth that is designed to boost agility and flexibility in their organizations and strengthen their long-term strategies in this new global business environment. In our upcoming posts we will take a closer look on these steps and their impact. https://adcura.com +Link +Link + +adCura - advising founders, owners, investors & boardsadCura - advising founders, owners, investors & boards +adCuraadCura +with our network of international experts, adCura strives to make the world a better place +by empowering people and organizations having a positive impact on our environment and social-economic life. +adCura’s highly experienced team & associates with deep understanding about leading global operating entities and small local units have held top management & CEO positions, and driven the growth of multinational half a billion worth companies.with our network of international experts, adCura strives to make the world a better place by empowering people and organizations having a positive impact on our environment and social-economic life. adCura’s highly experienced team & associates with deep understanding about leading global operating entities and small local units have held top management & CEO positions, and driven the growth of multinational half a billion worth companies. 
+ +AktivitätenAktivitäten +1.195 Follower:innen1.195 Follower:innen + + +Follower:in + +Beiträge + +Kommentare +9 „Beiträge“-Beiträge wurden geladen +Profilfoto von Dirk Zimanky +Dirk Zimanky hat dies repostet + +Link zur Grafik von Stephanie Kaudela-Baum, Prof. Dr. anzeigen +Stephanie Kaudela-Baum, Prof. Dr.Stephanie Kaudela-Baum, Prof. Dr. + • 2.Verifiziert • 2. +Professor of Leadership and Innovation I Co-Head Competence Center Business Development, Leadership and HR I Lecturer I Speaker I #leadership I #innovation I #creativityProfessor of Leadership and Innovation I Co-Head Competence Center Business Development, Leadership and HR I Lecturer I Speaker I #leadership I #innovation I #creativity +2 Wochen • vor 2 Wochen • Alle Mitglieder und Nicht-Mitglieder von LinkedIn +In Lucerne, you definitely get into the continuous innovation flow. Late summer, lake, mountains, chocolate and great conference participants :) - register now and network with the CINet community! HSLU Hochschule Luzern HSLU – Institut für Betriebs- und Regionalökonomie IBR HSLU – Lucerne School of Business – International +… mehr + +Continuous Innovation Network (CINet)Continuous Innovation Network (CINet) +597 Follower:innen597 Follower:innen +The 26th CINet conference is coming! +Join us on 7-9 September 2025 in the beautiful Lucerne! +Registrations are still open: https://lnkd.in/dQ_ix6FG + + +CINet Board +Maria Carmela Annosi, Harry Boer, Tim Schweisfurth, Luca Gastaldi, René Chester Goduscheit, Katharina Hölzle, Nicolette Lakemond, Mats Magnusson, Luisa Pellegrini, Magnus Persson, Daniel Trabucchi, Jeannette Visser-Groeneveld, Melanie Wiener, Patricia Wolf + +Local Organizers Committee +Patricia Wolf, Stephanie Kaudela-Baum, Prof. Dr., Julien Alain Nussbaum, Christian Hohmann, Shaun West, Prof. Dr. 
Petra Müller-Csernetzky
+
+16 likes · 2 reposts
+
+Dirk Zimanky · 1st
+Founder of adCura - We advise owners, investors, and boards of directors. Expert in Electronic Manufacturing Services (EMS)
+2 months ago · visible to all members and non-members of LinkedIn
+
+Most companies don't have a knowledge deficit – they just don't know what they already know. How can generative AI help to make hidden knowledge visible – without overwhelming employees?
+We have published a new article at edisconet, take a look:
+
+edisconet
+240 followers
+Most companies don't have a knowledge deficit – they just don't know what they already know.
+How can generative AI help make hidden knowledge visible – without overwhelming your employees?
+
+In our new article we show
+- how AI turns tacit knowledge into usable knowledge,
+- what this has to do with CRM data, sales processes, and transparency,
+- and why we are developing a sandbox for knowledge-intensive organizations together with HSLU.
+
+👉 Read it now: "Rethinking knowledge generation: How AI becomes your organization's silent team member"
+🔗 https://lnkd.in/eu6iEB4a
+
+🎯 Curious? At the end of the article you will learn how to become part of our exciting research project.
+
+#Wissensmanagement #GenerativeAI #TacitKnowledge #AIimUnternehmen #edisconetCommons #FutureOfWork
+
+Prof. Dr. Petra Müller-Csernetzky, Stephanie Kaudela-Baum, Prof.
Dr., HSLU Hochschule Luzern, Dirk Zimanky
+Image: Wissensmanagement und KI_edisconet, HSLU
+15 reactions
+
+Show all posts
+
+Experience
+Board Member
+edisconet · Full-time
+Jan 2022–Present · 3 yrs 8 mos
+edisconet is one platform for multiple learning paths. It allows you to engage your employees with meaningful and rewarding learning experiences by integrating your internal systems with the universe of trainings, trainers, and learning paths.
+
+Founder and Managing Partner
+adCura
+2020–Present · 5 yrs 8 mos
+Switzerland
+adCura advises founders, owners, investors, and boards of directors. With our network of international experts, we work with start-ups, mid-sized companies, and venture capital firms. At adCura, everything is focused on increasing the company's value for its various stakeholders.
+
+Enics
+16 yrs 1 mo
+Senior Vice President, Market Execution
+2013–2020 · 7 yrs
+Enics offers manufacturing, engineering, and after-sales services for electronic assemblies and complete electronic systems to industrial OEMs worldwide. Led the inbound business with more than EUR 500 million in annual revenue from global OEM customers. Responsible for profit and loss, budgets, and planning across the manufacturing, engineering, and after-sales service lines.
+
+President and CEO
+2010–2013 · 3 yrs
+Taking this position after the 2009 financial crisis, I led the company back onto a path of organic growth. We shifted our European units to value-added services, successfully completed the expansion of the second site in China, and grew the company organically at an average annual growth rate of 8 percent. I was responsible for the strategic and operational leadership of the group and its eight business units with a total of 3,500 employees. We operated in Estonia, Switzerland, Finland, Slovakia, Sweden, China, and Hong Kong.
+
+Co-founder and Senior Vice President
+2004–2010 · 6 yrs
+Enics was founded in 2004 as a leveraged management buy-out of selected parts of Elcoteq. From 2006 to 2010 I served as Senior Vice President, CRM, and from 2004 to 2006 as Vice President Business Development. During this period we carried out several acquisitions and operational expansions and successfully grew the company from EUR 116 million to over EUR 300 million in revenue. Reporting to the CEO.
+
+Elcoteq
+5 yrs 1 mo
+Director, Business Development, Business Area Communication Network Equipment & Industrial Electronics
+2002–2004 · 2 yrs
+I joined Elcoteq at the peak of the mobile phone market's growth; the company provided electronic manufacturing services for mobile phones and communication network base stations. We saw very rapid revenue growth, more than doubling revenue in some years, and operated a manufacturing network of 16 business units in Japan, China, Estonia, the USA, Mexico, Russia, and Finland. During my time at Elcoteq I served from 2002 to 2004 as Director, Business Development, Business Area Communication Network Equipment and Industrial Electronics; from 2000 to 2002 as Director, Sales Industrial Electronics; and from 1999 to 2000 as Account Manager, Geographical Area Europe.
+
+Director, Sales Industrial Electronics
+2000–2002 · 2 yrs
+
+Account Manager, Geographical Area Europe
+1999–2000 · 1 yr
+
+Stephan Elektronik
+13 yrs 1 mo
+Head of Administration (Finance, HR, Legal, IT, Sales)
+1991–1999 · 8 yrs
+Starting out in a smaller family business gave me the great opportunity to work across all functional areas, from design, procurement, and operations to legal and finance. We grew the company to three sites with more than 1,300 employees.
+
+Head of Department, Operations and Supply Chain
+1986–1991 · 5 yrs
+Held several positions as department head within operations and supply chain in Germany, Poland, and Switzerland. Reporting to the owner.
+
+Education
+Universität Konstanz
+Master's degree, Administration Science
+1985–1991
+Collegium Mehrerau-Bernardi
+1976–1984
+
+Skills
+Leadership
+Endorsed by 5 colleagues at Enics · 10 endorsements
+Business Development
+Endorsed by 5 colleagues at Enics · 10 endorsements
+Show all 12 skills
+
+Recommendations
+No information available yet. Recommendations that Dirk Zimanky receives will appear here.
+
+Courses
+Introduction to Digital Facilitation
+Associated with adCura
+
+Languages
+English - Fluent
+French - Basic
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/team/Florian-König-LinkedIn.md b/apps/memoro/apps/landing/context/team/Florian-König-LinkedIn.md
new file mode 100644
index 000000000..e095db89f
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/Florian-König-LinkedIn.md
@@ -0,0 +1,143 @@
+Florian König
+1st-degree connection
+This is how marketing automation works - Quentn.com
+
+Quentn.com GmbH
+Pfinztal, Baden-Württemberg, Germany · Contact info
+247 connections
+
+Alex Vasileva, Tobias Müller, and 1 other mutual connection
+
+Highlights
+Florian König follows your company on LinkedIn. He knows your brand and may be more receptive to outreach.
+
+Activity
+246 followers
+
+Florian König reposted this
+
+edisconet
+240 followers
+2 months ago · visible to all members and non-members of LinkedIn
+Silent knowledge that never makes it into meetings or manuals is a part of every organisation. At edisconet, we magnify this hidden knowledge and turn it into useful insights.
+
+It's time to give your team's unspoken expertise the spotlight it deserves.
+
+https://edisconet.com
+11 likes · 2 comments · 2 reposts
+
+Florian König reposted this
+
+Memoro
+219 followers
+1 year ago · edited · visible to all members and non-members of LinkedIn
+📢 Our biggest app update yet 🚀
+
+✨ Lots of new features:
+
+Special modes: Different modes help you handle specific tasks, e.g. filling out a form, drafting an email, or recording a meeting. Modes are currently available for nursing, trades and construction, office, university, diary, and journalism.
+
+Checklists: Each mode has its own checklist with tips for creating structured recordings and not forgetting important points.
+
+26 languages: Memoro can now listen and transcribe in 26 languages – including built-in translation. (German, Swiss German, Austrian German, English, Dutch, French, Italian, Spanish, Swedish, Norwegian, Romanian, Greek, Egyptian Arabic (Masri), Turkish, Russian, Ukrainian, Hungarian, Hindi, Chinese (Zhōngwén), Korean, Indonesian, Vietnamese)
+
+Important: To keep using all of Memoro's features, an upgrade to the latest version is required. Please uninstall your current version and install the new version from the Play Store or App Store.
+Here is the app for Android in the Play Store: https://lnkd.in/dhszvBck
+and here for Apple in the App Store: https://lnkd.in/d4Kh58aj
+
+We look forward to you trying out Memoro's new features and are excited to hear your feedback!
🌍💬
+
+#Memoro #Update #NewFeatures #Productivity #AppUpdate #SpeechTranslation #Documentation
+Image: Memoro, App Version 1.5, Update, Modes
+18 reactions · 1 comment · 6 reposts
+
+Show all posts
+
+Experience
+Sales Manager
+edisconet · Self-employed
+Sep 2024–Present · 1 yr
+Switzerland · Remote
+
+Partner Manager
+Quentn.com GmbH · Self-employed
+Nov 2023–Present · 1 yr 10 mos
+
+Managing Director
+OWNERMEETING · Self-employed
+Jan 2021–Present · 4 yrs 8 mos
+Pfinztal, Baden-Württemberg, Germany · On-site
+
+Sales Representative
+doxx-on systems GmbH · Full-time
+Jan 2023–Nov 2023 · 11 mos
+Ettlingen, Baden-Württemberg, Germany · Remote
+
+Key Account Manager
+Erhardt Bürowelt
+Jul 2007–Dec 2022 · 15 yrs 6 mos
+Show all 9 experiences
+
+Skills
+Microsoft Office
+1 endorsement
+Customer Service
+1 endorsement
+Show all 11 skills
+
+Interests
+Companies
+Greenpeace
+635,071 followers
+World Wildlife Fund
+408,972 followers
diff --git a/apps/memoro/apps/landing/context/team/Lucas-Mag-LinkedIn.md b/apps/memoro/apps/landing/context/team/Lucas-Mag-LinkedIn.md
new file mode 100644
index 000000000..f6a87008e
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/Lucas-Mag-LinkedIn.md
@@ -0,0 +1,164 @@
+Lucas Mag
+1st-degree connection
+IT Specialist, Data Backup at Universität Zürich | University of Zurich
+
+Universität Zürich | University of Zurich
+
+Elektronikschule Tettnang
+Jestetten, Baden-Württemberg, Germany · Contact info
+186 connections
+
+Alex Vasileva, Tobias Müller, and 5 other mutual connections
+
+Highlights
+Lucas Mag follows your company on LinkedIn. He knows your brand and may be more receptive to outreach.
+
+Activity
+186 followers
+
+Lucas Mag reposted this
+
+Stephan Lienhard · 2nd
+ICT specialist at the University of Zurich | IT architecture, servers, backup
+10 months ago · visible to all members and non-members of LinkedIn
+We are looking to strengthen the team :-) Feel free to get in touch if you have any questions!
+
+https://lnkd.in/dPrW3kkD
+
+UZH: ICT System Engineer
+jobs.uzh.ch
+8 likes · 2 reposts
+
+Lucas Mag reposted this
+
+Marco Fernandez · 2nd
+Manager Presales - ACH Switzerland and Austria @ Veeam Software | Technical Sales
+10 months ago · visible to all members and non-members of LinkedIn
+At yesterday's Meet the Architect with our customers and partners, the Universität Zürich | University of Zurich demonstrated how they use Veeam Software and the benefits it provides. We would like to thank all our enterprise customers and partners who attended and look forward to the next Meet the Architect in Q1 2025.
+
+Many thanks to Lucas Mag for the excellent and detailed presentation.
+Universität Zürich | University of Zurich, Veeam Software
+37 reactions · 1 comment · 1 repost
+
+Show all posts
+
+Experience
+IT Specialist, Data Backup
+Universität Zürich | University of Zurich · Full-time
+Nov 2022–Present · 2 yrs 10 mos
+Zurich, Switzerland
+
+Owner
+Haus & Heizungsautomatisierung · Self-employed
+Aug 2022–Apr 2025 · 2 yrs 9 mos
+Stühlingen, Baden-Württemberg, Germany
+IT services in home and heating automation. I help you realize the efficient path to an autonomous home. Energy independence and efficient heating have never been more in demand – or more confusing. You can benefit too.
+Renewable energy and IT consulting + 4 skills
+
+Bechtle
+4 yrs 3 mos
+System Engineer, Backup
+Full-time
+Jul 2021–Nov 2022 · 1 yr 5 mos
+IT Specialist, System Integration (Fachinformatiker Systemintegration)
+Apprenticeship
+Sep 2018–Jul 2021 · 2 yrs 11 mos
+Friedrichshafen, Baden-Württemberg, Germany
+
+Training
+FSJ (voluntary social year)
+CVJM · Voluntary Social Year
+Aug 2017–Mar 2018 · 8 mos
+Borkum, Lower Saxony, Germany
+
+Education
+Elektronikschule Tettnang
+Fachinformatiker Systemintegration, Computer Science
+2018–2021
+Naturwissenschaftlich-Technische Akademie Isny
+Assistant for Information & Communication Technology, Computer Science
+2015–2017
+
+Skills
+IT Operations
+Passed LinkedIn skill assessment
+IT Consulting
+Owner at Haus & Heizungsautomatisierung
+Show all 10 skills
+
+Courses
+VMCEA
+Associated with Bechtle
+Veeam Certified Engineer (VMCE)
+Associated with Bechtle
+
+Organizations
+Fire brigade (Feuerwehr)
+Member of the fire brigade committee, firefighter · Mar 2016–Jul 2022
+
+Interests
+Companies
+IBM
+18,555,838 followers
+Hewlett Packard Enterprise
+3,673,541 followers
diff --git a/apps/memoro/apps/landing/context/team/Nils-Weiser-LinkedIn.md
b/apps/memoro/apps/landing/context/team/Nils-Weiser-LinkedIn.md
new file mode 100644
index 000000000..60417d898
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/Nils-Weiser-LinkedIn.md
@@ -0,0 +1,178 @@
+Nils Weiser
+1st-degree connection
+Co-Founder Codify AG, Software Developer
+
+Codify
+
+HTWG Hochschule Konstanz – Technik, Wirtschaft und Gestaltung
+Kreuzlingen, Thurgau, Switzerland · Contact info
+313 connections
+
+Tobias Müller, Jan Kaiser, and 20 other mutual connections
+
+Highlights
+Nils Weiser follows your company on LinkedIn. He knows your brand and may be more receptive to outreach.
+
+Activity
+319 followers
+
+Posts · Comments · Images
+
+Nils Weiser · 1st
+Co-Founder Codify AG, Software Developer
+3 months ago · edited · visible to all members and non-members of LinkedIn
+
+Just watched an insightful video from Y Combinator with Tom on "vibe coding" with AI tools. Here are the game-changing tips I found most valuable:
+
+1. Start with a comprehensive plan before diving into code - work section by section
+2.
Use version control religiously (Git is your friend!)
+3. Write high-level integration tests to catch regressions
+4. For bugs, simply copy-paste error messages directly to the LLM
+5. Create detailed instruction files for your AI coding assistant
+6. Choose tech stacks with established conventions (like Rails) for better results
+
+Personal experience:
+When the LLM runs into error loops, stop it and explicitly tell it to use the browser tool to gather more context.
+We also use a well-established tech stack (frontend: React, backend: Express).
+Writing tests is crucial, especially when Windsurf comes in with a huge wave of changes to your code base 😃
+
+Bonus: for security, you can prompt it to act in a red/blue-team manner and audit your code. This should help prevent the API key leaks I have recently been reading about on LinkedIn in connection with vibe-coded products.
+3 reactions
+
+Nils Weiser · 1st
+Co-Founder Codify AG, Software Developer
+5 months ago · edited · visible to all members and non-members of LinkedIn
+
+MCP tools for IDEs!
+
+MCP is the standard for enhancing AI, and it is now available for most IDEs!
+
+Today I integrated BraveSearch so that my LLM can always access the latest information – without tediously jumping back and forth between tools.
+
+Are you already using MCP tools? Which do you find most useful?
+
+Open Source:
+https://smithery.ai/
+https://glama.ai/mcp/tools
+#MCP #AI #IDEs #OpenSource #BraveSearch
+9 reactions · 2 comments
+
+Show all posts
+
+Experience
+Co-Founder Codify AG, Software Developer
+Codify · Full-time
+Feb 2019–Present · 6 yrs 7 mos
+Kreuzlingen, Thurgau, Switzerland
+
+Full-Stack Developer
+BMT Business meets Technology AG · Full-time
+Mar 2018–Jan 2019 · 11 mos
+
+Student Developer
+T-Systems Schweiz · Part-time
+May 2017–Sep 2017 · 5 mos
+
+Web Developer
+Köllisch Gesellschaft für Prozessmanagement mbH (timeghost) · Internship
+Jul 2014–Jun 2015 · 1 yr
+Konstanz, Baden-Württemberg, Germany
+
+Education
+HTWG Hochschule Konstanz – Technik, Wirtschaft und Gestaltung
+Computer Science
+
+Licenses & certifications
+Multi AI Agent Systems
+CrewAI
+Issued Aug 2024
+AI Agent
+Nils Weiser_badge.pdf
+
+Learn AI Agents
+Scrimba
+Issued Jun 2024
+Credential ID 2QEZCHUJ2DST
+Show all 4 licenses & certifications
+
+Skills
+AI Agent
+Multi AI Agent Systems
+REST API
+Passed LinkedIn skill assessment
+Show all 11 skills
+
+Interests
+Companies
+Microsoft
+26,068,419 followers
+Google
+38,360,676 followers
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/team/Till-Schneider-LinkedIn.md b/apps/memoro/apps/landing/context/team/Till-Schneider-LinkedIn.md
new file mode 100644
index 000000000..caec10d9e
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/Till-Schneider-LinkedIn.md
@@ -0,0 +1,183 @@
+Till Schneider (he/him)
+Expand your thinking - Memoro.ai
+
+Memoro
+Tägerwilen, Thurgau, Switzerland · Contact info
+500+ connections
+
+Activity
+601 followers
+
+Posts · Comments · Articles
+
+Till Schneider · You
+Expand your thinking - Memoro.ai
+3 months ago · visible to all members and non-members of LinkedIn
+
+Is AI the creativity killer? Why average is not enough, and why our future depends on human intuition, taste, and yes – even mistakes. A plea for less digital smoothness and more real life.
+
+Jenseits des Durchschnitts: Warum unsere digitale Zukunft mehr menschliches Chaos braucht
+Till Schneider
+7 reactions · 1 comment
+
+Till Schneider reposted this:
+
+Memoro
+219 followers
+6 months ago · visible to all LinkedIn members and non-members
+We wish you all a happy new year 2025. "New" in particular has been front and center for us over the past two months, as we are working flat out on Memoro 2.0.
+
+Till Schneider and Tobias Müller were guests on the programmier.bar podcast and talked about their development experience: "Low Code: Freiheit oder Limitierung?"
+
+In the conversation we cover:
+The low-code boost: how low code let us prototype and iterate incredibly fast.
+The limits of low code: why we decided to switch to a different stack.
+Our takeaway: low code is great for a fast start, but for scaling and a polished user experience you need flexibility and control.
+
+The full episode is worth a listen: https://lnkd.in/eYcd8kRq
+
+Curious about Memoro? Get the app here: https://lnkd.in/eazwPffG
+
+#LowCode #NoCode #Startup #Produktentwicklung #Podcast #App #Innovation #Memoro #Tools #Digitalisierung
+
+Deep Dive 168 – Low Code mit Till Schnei...
| programmier.bar
+programmier.bar
+22 reactions · 1 repost
+
+Experience
+
+Managing Director, Founder
+Memoro · Full-time
+July 2023–present · 2 years 2 months
+Konstanz, Baden-Württemberg, Germany · Hybrid
+Start-ups and software development (7 skills)
+
+Filmmaker
+Till Jakob · Self-employed
+July 2011–present · 14 years 2 months
+Tägerwilen, Thurgau, Switzerland
+Camera operator and storytelling (3 skills)
+
+inlume
+Full-time · 2 years 11 months
+Tägerwilen, Thurgau, Switzerland
+Managing Director
+Oct 2020–Aug 2023 · 2 years 11 months
+UX design
+Co-Founder
+Oct 2020–Aug 2023 · 2 years 11 months
+Start-ups
+
+Education
+Duale Hochschule Baden-Württemberg
+Bachelor of Arts (BA), Mediendesign
+2017–2020
+
+Skills
+Branding (Managing Director, Founder at Memoro)
+UX design (2 experiences at Memoro and 1 other company)
+(21 skills in total)
+
+Interests
+Lex Fridman (3rd-degree connection)
+Research Scientist, MIT
+1,709,773 followers
+
+Simon Sinek (3rd-degree connection)
+Optimist, New York Times bestselling author of "Start with Why" and "The Infinite Game", and founder of The Optimism Company
+8,629,584 followers
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/context/team/Tobias-Mueller-LinkedIn.md b/apps/memoro/apps/landing/context/team/Tobias-Mueller-LinkedIn.md
new file mode 100644
index 000000000..bd1131d07
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/Tobias-Mueller-LinkedIn.md
@@ -0,0 +1,286 @@
+Tobias Müller
+he/him · 1st-degree connection
+Expand your thinking - Memoro.ai
+
+Memoro
+Ispringen, Baden-Württemberg, Germany
+82 connections
+Alex Vasileva, Gernot Doriat, and 56 other mutual connections
+
+Highlights
+You both work at Memoro. Tobias Müller started at Memoro 1 month before you.
+
+About
+Hello out there!
+My name is Tobias Müller and I would like to introduce myself briefly.
+I define myself as a "full-stack developer" and really have a soft spot for everything new and innovative.
+Influenced by my professional past and the founding of a start-up, I have a strongly independent way of thinking and a lot of passion for development.
+Two principles are important in my life:
+First, always try to broaden your horizon.
+And second, always go one step further.
+
+I'm really looking forward to hearing from you.
+
+Best regards
+Tobias Müller
+Services
+Web development · Custom software development · App development · Mobile app development · Cloud application development
+
+Activity
+87 followers
+
+Tobias Müller
+Expand your thinking - Memoro.ai
+10 months ago · visible to all LinkedIn members and non-members
+
+🔥♥️🪩🎉🥳🎈🥂🍹🍻🎤
+
+Memoro
+219 followers
+One year of Memoro and 32 years of Tobi! 🎉🎂 This weekend we celebrated the birthday of our co-founder and IT wizard Tobi, and at the same time toasted a fantastic first year with Memoro! 🥳✨
+In the best company, surrounded by our favorite people, we spent an unforgettable weekend full of excitement, fun, and positive emotions. 🤩🥂 With Memoro we want to create more room for what matters most in life: the precious time with the people who enrich our work and our lives. 💖🌟
+Happy birthday, Tobi! 🎂🎈 Here's to many more years full of magic and shared memories! 🎊
+This weekend we had many great conversations and, of course, captured them with Memoro.
+Download Memoro for free: https://lnkd.in/eazwPffG
+
+Tobias Müller, Till Schneider, Dirk Zimanky, Ludwig Kaftan, Albashir Mohamed, Jose Ignacio Campos Domínguez
+9 reactions · 1 comment
+
+Tobias Müller reposted this:
+
+Memoro
+219 followers
+1 year ago · edited · visible to all LinkedIn members and non-members
+🚀 A boost of innovation at the BW Startup Summit in Stuttgart!
+
+Our team dove into a world full of creative ideas, from fashion to medicine. Highlights:
+
+- An exciting exchange of ideas with like-minded founders
+- Till's presentation of Memoro at the startup festival
+- Networking over a foosball match (football fever included!)
+
+Our key learnings:
+
+- Alex: "The startup world is colorful. Passion and creativity are the key to success."
+- Till: "Fascinating specializations. More networking could create synergies."
+- Tobi: "Startup events are magnets for inspiring conversations."
+
+Thanks to all our sources of inspiration, especially Julia Zimmermann, Timon Sutter (Eversion), Antje Freyth, Robert Rapp, Jan Marvin Wickert, Simone Harr, Dmitriy Sevkovych, Ira Romy Alice Stoll, Jürke Hartz (myScribe), and Kolja B., with whom we had great conversations, which of course were captured with Memoro. Download Memoro for free: https://lnkd.in/eazwPffG
+
+What was your last startup event? Share your experiences in the comments!
+
+#BWStartupSummit #Innovation #Networking #StartupLife
+51 reactions · 3 comments · 4 reposts
+
+Experience
+
+Founder
+Memoro · Full-time
+June 2023–present · 2 years 3 months
+
+Freelancer
+Tobias Müller Software Entwicklung · Self-employed
+June 2021–June 2023 · 2 years 1 month
+UI and JavaScript (22 skills)
+
+Backend Development
+DEKRA SE · Freelance
+Nov 2021–Aug 2022 · 10 months
+Global CMS redesign:
+- Project initiation with npm workspaces
+- Backend development with Nest.js
+- Planning the Azure cloud architecture
+- Cloud migration to Azure
+- Setting up the Azure DevOps pipeline (CI/CD)
+Knowledge: TypeScript, Node.js, npm workspaces, Docker, Cloud, Agile, CI/CD
+Products: JetBrains, Docker, Azure Cloud, Azure DevOps, Confluence, JIRA, Bitbucket, Git, Notion, Ubuntu, Nest.js, Elasticsearch, OWASP
+JavaScript and databases (15 skills)
+
+Startup Founder
+compan.one · Full-time
+Sept 2020–Oct 2021 · 1 year 2 months
+Software for managers:
+- Supporting the project as part of the design-thinking team
+- Using a wide range of prototypes (mockup, click dummy, concierge MVP)
+- Developing a progressive web app incl. CMS backend
+- Focus on usability
+UI and JavaScript (26 skills)
+
+Team Leader, Software Development
+Conecpt Hero · Part-time
+Apr 2019–Oct 2020 · 1 year 7 months
+Heilbronn (district), Baden-Württemberg, Germany
+Software development with a focus on prototyping; team leadership, project management, and optimization of company processes
+Knowledge: JavaScript, TypeScript, Node.js, Express.js, Vue.js, React.js, HTML, CSS, SQL, PWA, Java, Python, C#, UI, UX, Usability Testing, Unity3D, VR, AR, Agile, Prototyping, Capacitor, Ionic, Cordova
+Products: Docker, GitLab, Bitbucket, Git, Ubuntu, ARKit, ARCore, AR Foundation, JetBrains IDE, VS Code, Android, iOS, Wear OS, Firebase, Google Cloud
+UI and JavaScript (35 skills)
+
+(12 positions in total)
+Education
+Hochschule Heilbronn - Hochschule für Technik, Wirtschaft und Informatik
+Bachelor of Science (BS), Software Engineering, specialization in Games Engineering
+March 2016–Sept 2020
+Linux and full-stack (37 skills)
+(7 education entries in total)
+
+Licenses and certifications
+Scrum Fundamentals Certified
+Vabro.ai and VMEdu.com
+Issued: June 2017
+Credential ID: 582311
+Scrum
+
+Startup Simulation - STARTUP | edu
+STARTUP | edu
+Issued: May 2017
+Prototyping and startup
+(5 certifications in total)
+
+Volunteering
+Voluntary assistant for logistics and transport
+DreamCenter Sozialwerk e.V.
+March 2016–Feb 2017 · 1 year
+Poverty alleviation
+
+Voluntary position as Information Technology Coordinator
+Lernstiftung Hück
+Feb 2016 · 1 month
+Supporting disadvantaged groups
+- System installation
+- Rights management
+- Network configuration
+- Support in supervision
+
+Skills
+Git: 7 experiences at Tobias Müller Software Entwicklung and 6 other companies; 7 education experiences at Hochschule Heilbronn and 4 other institutions
+Frontend: 6 experiences at Tobias Müller Software Entwicklung and 5 other companies
+(48 skills in total)
+
+Honors and awards
+Letter of Recommendation for Tobias Müller
+Issued by: Prof. Dr. Tim Reichert · June 2020
+Associated with Hochschule Heilbronn - Hochschule für Technik, Wirtschaft und Informatik
+Heilbronn, 17 June 2020
+
+Prof. Dr. Tim Reichert
+Games Engineering
+Heilbronn University
+of Applied Sciences
+Max-Planck-Str. 39
+74081 Heilbronn, Germany
+
+>> Document only on request <<
+
+Winner - Fujitsu Botathon
+July 2019
+1st place at the Automation Inspiration University Botathon; topic: Explore Anywhere
+(4 awards in total)
+
+Organizations
+Hochschule Heilbronn
+Faculty council IT · Sept 2017–Feb 2018
+Associated with Hochschule Heilbronn - Hochschule für Technik, Wirtschaft und Informatik
+Student representative (per LHG §25)
+
+Hochschule Heilbronn
+Student council IT · March 2017–Aug 2017
+Associated with Hochschule Heilbronn - Hochschule für Technik, Wirtschaft und Informatik
+Member of the student council IT; implementation of projects for quality assurance of teaching and studies
+(3 organizations in total)
+
+Interests
+heise (24,100 followers)
+Stack Overflow (1,580,636 followers)
diff --git a/apps/memoro/apps/landing/context/team/nils_profile.md b/apps/memoro/apps/landing/context/team/nils_profile.md
new file mode 100644
index 000000000..639187663
--- /dev/null
+++ b/apps/memoro/apps/landing/context/team/nils_profile.md
@@ -0,0 +1,68 @@
+# Nils Weiser
+
+## CTO at Memoro
+
+Nils Weiser has been CTO at Memoro since May 2025 and brings a unique combination of entrepreneurial experience and technical passion. As co-founder of Codify AG since 2019, he has learned how to turn technological visions into working products, through plenty of trial and error and the occasional failure.
+
+## The Explorer-Developer
+
+"We are at our human finest, dancing with our minds, when there are more choices than ten, even twenty different ways to go, all but one bound to be wrong, and the richness of the selection in such situations can lift us onto totally new ground."* This quote captures Nils' philosophy: for him, trial and error is not a mere procedure but an art form, a dance between error and insight.
+
+His journey began early: at 16 he programmed his first calculator in C++. What started as curiosity became a lifelong passion for exploring technical possibilities. That passion led him to work on some of the largest apps in Switzerland and even to develop for the federal office.
+
+As co-founder of Codify AG in Kreuzlingen, Nils has spent more than six years building software companies. This entrepreneurial perspective, combined with his appetite for technical experimentation, makes him the ideal CTO for Memoro's next growth phase.
+
+## Technical Expertise
+
+### Full Stack With a Taste for Experimentation
+
+With his trial-and-error mentality, Nils has built up a broad technical repertoire:
+
+```javascript
+// Nils' approach to tech
+function solveProblem(challenge, availableTools) {
+  const myApproach = "trial_and_error";
+  const timeToLearn = "as_long_as_it_takes";
+
+  const bestTool = findOptimalSolution(challenge, availableTools);
+
+  // Whether it's Rust, Go, Python, TypeScript, or that new
+  // framework everyone's talking about...
+  const result = experiment(bestTool)
+    .then(() => "got it working!")
+    .catch((error) => {
+      learnFromMistakes(error);
+      const remainingTools = availableTools.filter((tool) => tool !== bestTool);
+      return solveProblem(challenge, remainingTools);
+    });
+
+  return result; // 🚀
+}
+
+// Current toolkit (but always expanding):
+const expertise = {
+  frontend: ["Angular", "React", "Vue", "plain", "whatever_works"],
+  backend: [
+    "Node.js",
+    "SpringBoot",
+    "GO",
+    "Express",
+    "FastAPI",
+    "if_needed_anything",
+  ],
+  ai: ["MCP Tools", "AI Agents", "LLM Integration"],
+  testing: ["Jest", "Cypress", "the_art_of_breaking_things"],
+  devops: ["Git", "Docker", "CI/CD", "Terraform", "making_it_work_everywhere"],
+  philosophy: ["trial_and_error", "fail_fast_learn_faster"],
+};
+```
+
+_"The best language is the one that solves the problem. The best library is the one that works. The best approach is the one that reaches the goal after a few failed attempts."_
+
+---
+
+**Quote reference:**
+
+- **Author**: Lewis Thomas
+- **Work**: the essay _"Computers"_ in _The Medusa and the Snail: More Notes of a Biology Watcher_ (1974, revised edition 1979)
+- **Also collected in**: _A Long Line of Cells: Collected Essays_
diff --git a/apps/memoro/apps/landing/context/testimonials/Bildschirmfoto 2025-08-11 um 20.33.32.png b/apps/memoro/apps/landing/context/testimonials/Bildschirmfoto 2025-08-11 um 20.33.32.png
new file mode 100644
index 000000000..a5e1f29b4
Binary files /dev/null and b/apps/memoro/apps/landing/context/testimonials/Bildschirmfoto 2025-08-11 um 20.33.32.png differ
diff --git a/apps/memoro/apps/landing/conversion/ConversionOptimizationTipps.md b/apps/memoro/apps/landing/conversion/ConversionOptimizationTipps.md
new file mode 100644
index 000000000..49d6d6bb8
--- /dev/null
+++ b/apps/memoro/apps/landing/conversion/ConversionOptimizationTipps.md
@@ -0,0 +1,254 @@
+Based on the Memoro website and best practices for SaaS landing pages, here are some useful A/B tests:
+
+🎯 Recommended A/B Tests for Memoro
+
+1. Navigation download button 🔝
+
+Hypothesis: a more prominent download button increases conversions.
+
+Variants:
+- A: "App herunterladen" (right-aligned, normal)
+- B: "Kostenlos testen" (with a "Neu!" badge)
+- C: pulsing button with icon
+
+Metrics: download clicks, scroll behavior
+
+---
+2. Pricing presentation 💰
+
+Hypothesis: different pricing layouts influence the purchase decision.
+
+Variants:
+- A: grid layout (current)
+- B: comparison table
+- C: slider (monthly/yearly toggle)
+
+Metrics: time on page, plan selection, conversions
+
+---
+3. Social proof position ⭐
+
+Hypothesis: testimonials directly after the hero increase trust.
+
+Variants:
+- A: testimonials at the bottom (current)
+- B: mini-testimonials directly after the hero
+- C: floating testimonial ticker
+
+Metrics: scroll depth, time on page, conversions
+
+---
+4. Video vs. demo 🎥
+
+Hypothesis: a live demo is more convincing than a video.
+
+Variants:
+- A: "Video ansehen" (current)
+- B: "Interaktive Demo"
+- C: "30-Sek Kurzvideo" (autoplay)
+
+Metrics: engagement rate, demo completions
+
+---
+5. Urgency/scarcity test ⏰ -> Important: WE DO NOT WANT TO DO THIS - NO DARK PATTERNS OR SIMILAR
+
+Hypothesis: a time limit increases conversions.
+
+Variants:
+- A: no urgency
+- B: "Nur noch heute: 50% Rabatt"
+- C: "Begrenzt: Erste 100 Nutzer kostenlos"
+
+Metrics: conversion rate, bounce rate
+
+---
+6. Feature prioritization 📋
+
+Hypothesis: AI features matter more to visitors than privacy.
+
+Variants:
+- A: privacy first
+- B: AI features first
+- C: use cases first
+
+Metrics: engagement, feature clicks
+
+---
+7. Mobile-first hero 📱
+
+Hypothesis: app screenshots perform better than illustrations.
+
+Variants:
+- A: current illustration
+- B: iPhone mockup with the app
+- C: animated app demo
+
+Metrics: mobile conversions, bounce rate
+
+---
+8. Language & tone 💬
+
+Hypothesis: a more personal address increases engagement.
+
+Variants:
+- A: "Ihre Gedanken" (formal)
+- B: "Deine Ideen" (informal)
+- C: "Sprechen Sie einfach drauflos" (active)
+
+Metrics: time on page, engagement
+
+---
+9. Trust badges 🛡️
+
+Hypothesis: security badges increase trust.
+
+Variants:
+- A: no badges
+- B: "DSGVO-konform" + "SSL-verschlüsselt"
+- C: "TÜV-geprüft" + customer numbers
+
+Metrics: conversion rate, trust surveys
+
+---
+10. Onboarding flow 🚀
+
+Hypothesis: guided onboarding increases activation.
+
+Variants:
+- A: straight into the app
+- B: three-step tutorial
+- C: book a personal demo
+
+Metrics: activation rate, 7-day retention
+
+---
+📊 Prioritization matrix
+
+| Test              | Effort | Impact | Recommendation |
+|-------------------|--------|--------|----------------|
+| Navigation button | Low    | High   | ⭐⭐⭐         |
+| Pricing layout    | Medium | High   | ⭐⭐⭐         |
+| Video vs demo     | High   | Medium | ⭐⭐           |
+| Trust badges      | Low    | Medium | ⭐⭐           |
+| Mobile hero       | Medium | High   | ⭐⭐⭐         |
+
+🚀 Next steps
+
+1. Navigation button test (simple and high-impact)
+2. Pricing layout (important for conversion)
+3. Mobile hero (50%+ of traffic is mobile)
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/copy/hero-design-verbesserungen.md b/apps/memoro/apps/landing/copy/hero-design-verbesserungen.md
new file mode 100644
index 000000000..c4ad55a9b
--- /dev/null
+++ b/apps/memoro/apps/landing/copy/hero-design-verbesserungen.md
@@ -0,0 +1,104 @@
+# Hero Design Improvements
+
+## 🎨 Current Design Problems
+
+1.
**Social Proof Box**: too dark and blends into the background
+2. **Trust badges**: too small and inconspicuous
+3. **Image quality**: the image looks somewhat dark/blurry
+4. **Visual flow**: the elements do not yet feel optimally connected
+5. **CTAs**: could stand out more
+
+## 💡 Design Improvement Suggestions
+
+### 1. Social Proof Redesign
+```css
+/* Lighter, more eye-catching background */
+background: linear-gradient(135deg, rgba(255,255,255,0.1) 0%, rgba(255,255,255,0.05) 100%);
+border: 1px solid rgba(255,255,255,0.2);
+backdrop-filter: blur(10px);
+box-shadow: 0 4px 20px rgba(0,0,0,0.1);
+```
+
+### 2. Trust Badges Enhancement
+- Larger icons and text
+- A dedicated card for each badge
+- Subtle hover effect
+- Better spacing
+
+### 3. Hero Image Improvements
+- Gradient overlay for better contrast
+- Subtle animation on load
+- Optional: several images in rotation
+
+### 4. Typography & Spacing
+- Larger line height for better readability
+- Stronger contrast between headline and subtitle
+- More whitespace between elements
+
+### 5. CTA Buttons Enhancement
+- Primary button: stronger glow effect
+- Secondary button: better contrast
+- Micro-interactions on hover
+
+### 6. Animations & Effects
+- Fade-in animation for all elements
+- Parallax effect for the image
+- Smooth reveal on scroll
+
+## 🚀 Quick Implementation
+
+### Immediately actionable:
+1. Lighten the social proof box
+2. Enlarge and reposition the trust badges
+3. Optimize the CTA buttons
+4. Make the micro-copy more prominent
+
+### Medium-term:
+1. Add animations
+2. Optimize the image overlay
+3. Mobile optimizations
+
+## 📐 Concrete CSS Changes
+
+### Social Proof Box
+```astro
+<!-- Original markup was stripped during extraction; reconstructed placeholder (class name assumed) -->
+<div class="social-proof-box">…</div>
+```
+
+### Trust Badges
+```astro
+<!-- Original markup was stripped during extraction; wrapper elements and class names are placeholders -->
+<div class="trust-badges">
+  {trustBadges.map((badge) => (
+    <div class="trust-badge">
+      <span>{badge.icon}</span>
+      <span>{badge.text}</span>
+    </div>
+  ))}
+</div>
+```
+
+### CTA Buttons
+```css
+/* Primary button with glow */
+.bg-primary {
+  box-shadow: 0 4px 20px rgba(255, 193, 7, 0.3);
+  transition: all 0.3s ease;
+}
+
+.bg-primary:hover {
+  box-shadow: 0 6px 30px rgba(255, 193, 7, 0.5);
+  transform: translateY(-2px);
+}
+```
+
+### Hero Image Container
+```astro
+<!-- Original markup was stripped during extraction; reconstruction (class names and image source are placeholders) -->
+<div class="hero-image-container">
+  <img src={heroImage} alt={title} />
+  <div class="hero-image-overlay"></div>
+</div>
+```
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/copy/hero-quick-wins-konzept.md b/apps/memoro/apps/landing/copy/hero-quick-wins-konzept.md
new file mode 100644
index 000000000..0d1e31e55
--- /dev/null
+++ b/apps/memoro/apps/landing/copy/hero-quick-wins-konzept.md
@@ -0,0 +1,231 @@
+# Hero Section Quick Wins - Implementation Concept
+
+## 🎯 Overview
+This document describes the immediately actionable improvements for the hero section of the Memoro landing page. All suggestions are designed to achieve maximum impact with minimal effort.
+
+## 1. Headline & Subtitle Optimization
+
+### German headlines (3 variants to test)
+
+#### Variant A: benefit-focused
+```
+Headline: "Verwandeln Sie jedes Gespräch in verwertbares Wissen"
+Subtitle: "KI-gestützte Gesprächsdokumentation, die mitdenkt. Sparen Sie 3+ Stunden pro Woche."
+```
+
+#### Variant B: problem-focused
+```
+Headline: "Nie wieder wichtige Details aus Meetings verlieren"
+Subtitle: "Memoro dokumentiert, strukturiert und erinnert – während Sie sich aufs Wesentliche konzentrieren."
+```
+
+#### Variant C: time-saving-focused
+```
+Headline: "3 Stunden pro Woche zurückgewinnen"
+Subtitle: "Lassen Sie KI Ihre Gespräche dokumentieren. Automatisch. Präzise. DSGVO-konform."
+```
+
+### English headlines (3 variants to test)
+
+#### Variant A: benefit-focused
+```
+Headline: "Turn Every Conversation Into Actionable Insights"
+Subtitle: "AI-powered documentation that thinks along. Save 3+ hours every week."
+```
+
+#### Variant B: problem-focused
+```
+Headline: "Never Miss Important Meeting Details Again"
+Subtitle: "Memoro captures, structures, and reminds – while you focus on what matters."
+```
+
+#### Variant C: time-saving-focused
+```
+Headline: "Get Back 3 Hours Every Week"
+Subtitle: "Let AI document your conversations. Automatically. Accurately. GDPR-compliant."
+```
+
+## 2. CTA Micro-Copy
+
+### Primary CTA
+**Button text**: "Jetzt kostenlos starten" / "Start Free Now"
+**Micro-copy underneath**:
+- "✓ Keine Kreditkarte erforderlich"
+- "✓ In 30 Sekunden eingerichtet"
+- "✓ 14 Tage kostenlos testen"
+
+### Secondary CTA
+**Button text**: "Demo ansehen (2 Min)" / "Watch Demo (2 min)"
+**Micro-copy**: "Sehen Sie Memoro in Aktion"
+
+## 3. Trust Badges & Security Signals
+
+### Badge bar below the CTAs
+```
+[🔒 DSGVO-konform] [🇩🇪 Made in Germany] [🛡️ SSL-verschlüsselt] [✓ ISO 27001]
+```
+
+### Alternatively as text
+"Ihre Daten sind sicher: DSGVO-konform • Ende-zu-Ende verschlüsselt • Server in Deutschland"
+
+## 4. Social Proof Integration
+
+### Option A: rating line
+```
+⭐⭐⭐⭐⭐ 4.9/5 basierend auf 127 Bewertungen
+"Die beste Investition in unsere Produktivität" - Thomas M., Geschäftsführer
+```
+
+### Option B: user statistics
+```
+Bereits 2.500+ Professionals sparen Zeit mit Memoro
+Über 50.000 Stunden Gespräche erfolgreich dokumentiert
+```
+
+### Option C: logo bar
+```
+"Vertraut von Teams bei:"
+[Siemens Logo] [SAP Logo] [Bosch Logo] [Mercedes Logo] [Telekom Logo]
+```
+
+## 5. Urgency/Scarcity Elements (Optional)
+
+### Time-limited
+```
+🎯 Black Friday Special: 50% Rabatt auf alle Pläne – nur noch 48 Stunden
+```
+
+### Limited seats
+```
+🚀 Early Access: Nur noch 23 kostenlose Beta-Plätze verfügbar
+```
+
+## 6. Implementation Details
+
+### HeroSection.astro changes
+
+1. Add a **micro-copy component**:
+```astro
+{microCopy && (
  <p class="micro-copy">{microCopy}</p>
)}
```

2. **Trust Badges Component**:
```astro
<div class="trust-badges">
  <span>🔒 {t('hero.trust.gdpr')}</span>
  <span>🇩🇪 {t('hero.trust.madeInGermany')}</span>
  <span>🛡️ {t('hero.trust.encrypted')}</span>
</div>
```

3. **Social Proof Component**:
```astro
<div class="social-proof">
  <div class="rating">
    <span>{"⭐".repeat(5)}</span>
    <span>4.9/5</span>
    <span>({reviewCount} Bewertungen)</span>
  </div>
  <blockquote>
    "{testimonialQuote}"
    <footer>– {testimonialAuthor}</footer>
  </blockquote>
</div>
```

## 7. A/B Testing Strategy

### Test 1: Headlines
- Control: current headline
- Variant A: benefit-focused
- Variant B: problem-focused
- Variant C: time-saving focused

### Test 2: Social Proof
- Control: no social proof
- Variant A: rating line
- Variant B: user statistics
- Variant C: logo bar

### Test 3: Micro-copy
- Control: no micro-copy
- Variant A: "Keine Kreditkarte erforderlich"
- Variant B: all 3 bullet points

## 8. Tracking & Success Measurement

### KPIs
1. **Click-through rate (CTR)** on the primary CTA
2. **Conversion rate** to registration
3. **Bounce rate** of the landing page
4. **Time on page**
5. **Scroll depth**

### Event tracking
```javascript
// Hero view
gtag('event', 'hero_view', {
  'variant': currentVariant,
  'headline': headlineText
});

// CTA click
gtag('event', 'hero_cta_click', {
  'button': 'primary',
  'variant': currentVariant,
  'position': 'hero'
});

// Trust badge hover
gtag('event', 'trust_badge_hover', {
  'badge': badgeType
});
```

## 9. Mobile Optimizations

### Shorter mobile headlines
```
Desktop: "Verwandeln Sie jedes Gespräch in verwertbares Wissen"
Mobile: "Gespräche in Wissen verwandeln"

Desktop: "3 Stunden pro Woche zurückgewinnen"
Mobile: "3h/Woche sparen"
```

### Adapted layouts
- Trust badges: 2x2 grid on mobile
- Social proof: more compact presentation
- CTAs: full-width on mobile

## 10. Next Steps

1. **Immediately (this week)**:
   - [ ] Implement the new headlines in home.mdx
   - [ ] Add micro-copy to the CTAs
   - [ ] Build in the trust badges
   - [ ] Extend the A/B test

2. **Short term (next 2 weeks)**:
   - [ ] Develop the social proof component
   - [ ] Design the logo bar
   - [ ] Mobile optimizations

3.
**Follow-up**:
   - [ ] Analyze the first test results
   - [ ] Roll out the winning variants
   - [ ] Develop new test hypotheses
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/Analytics/MemoroStats.md b/apps/memoro/apps/landing/docs/Analytics/MemoroStats.md
new file mode 100644
index 000000000..905eb2177
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/Analytics/MemoroStats.md

08.08.2025
{"memo_number":13896,"memo_entry_number":225932,"time_recorded":"5363:20:20.815 (hh:mm:ss,ms)","transcript_words":46314389,"user_number":2553}

28.03.2025
Memos: 11,134
{"memo_entry_number":176033,"time_recorded":"4062:08:07.633 (hh:mm:ss,ms)","transcript_words":35845740,"user_number":2009}

03.03.2025
Memos: 10,405
{"memo_entry_number":162540,"time_recorded":"3735:49:37.421 (hh:mm:ss,ms)"}
Words: 33,177,948
Users: 1,828

04.02.2025
{"memo_number":9392,"memo_entry_number":143793,"time_recorded":"3370:57:54.073 (hh:mm:ss,ms)","transcript_words":30007331,"user_number":1638}
202,260 minutes

27.01.2025
69 paying users
12 Android
56 iOS

Countries:
DE - 62
CH - 2
AT - 2
US - 1
Sweden - 1
Finland - 1

21.01.2025
Over 9,000 memos recorded
{"memo_number":9054,"memo_entry_number":137677,"time_recorded":"3145:37:24.003 (hh:mm:ss,ms)","transcript_words":28213286,"user_number":1564}

14.01.2025
1,504 users
MAUs:
495 users (RevenueCat)
281 users (Google Analytics)
151 users (Admin App)
309 average

{"memo_number":8782,"memo_entry_number":133301,"time_recorded":"3036:02:55.864 (hh:mm:ss,ms)","transcript_words":27319354,"user_number":1504}

10.12.2024
1,310 users
8,200 memos created
2,800 hours of recordings
25 million spoken words
{"memo_number":8200,"memo_entry_number":122550,"time_recorded":"2834:35:37.030 (hh:mm:ss,ms)","transcript_words":25693505,"user_number":1310}

26.11.2024
Memos: 7,852
Hours: 2,708
Words: 24,555,821
{"memo_number":7852,"memo_entry_number":116352,"time_recorded":"2708:35:00.093 (hh:mm:ss,ms)","transcript_words":24555821,"user_number":1238}

06.11.2024
Words: 22,513,813
Hours: 2,471
{"memo_number":7237,"memo_entry_number":105938,"time_recorded":"2471:20:49.687 (hh:mm:ss,ms)","transcript_words":22513813,"user_number":1126}

16.10.2024
{"memo_number":6677,"memo_entry_number":95212,"time_recorded":"2213:26:39.882 (hh:mm:ss,ms)","transcript_words":20363477,"user_number":1031}

07.10.2024
Memos: 6,460
Time: 2105:58 hours (126,358 minutes)
Words: 19,444,341
Users: 978
{"memo_number":6460,"memo_entry_number":90728,"time_recorded":"2105:58:07.954 (hh:mm:ss,ms)","transcript_words":19444341,"user_number":978}

01.10.2024
{"memo_number":6389,"memo_entry_number":89327,"time_recorded":"2075:02:10.580 (hh:mm:ss,ms)","transcript_words":19226681,"user_number":953}

24.08.2024

09.08.2024
{"memo_number":5495,"memo_entry_number":72678,"time_recorded":"1640:08:14.997 (hh:mm:ss,ms)","transcript_words":15770544,"user_number":699}

24.07.2024
{"memo_number":5235,"memo_entry_number":68122,"time_recorded":"1507:57:39.116 (hh:mm:ss,ms)","transcript_words":14694220,"user_number":640}

24.07.2024 - 02:30
{"memo_number":5222,"memo_entry_number":67964,"time_recorded":"1503:55:22.022 (hh:mm:ss,ms)","transcript_words":14664072,"user_number":639}
Average words per hour: ~10,000
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/AstroAddLastModifiedTime.md b/apps/memoro/apps/landing/docs/AstroAddLastModifiedTime.md
new file mode 100644
index 000000000..df0e423b4
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/AstroAddLastModifiedTime.md

Add last modified time
Learn how to build a remark plugin that adds the last modified time to the frontmatter of your Markdown and MDX files.
Use this property to display the modified time in your pages.

Uses Git history

This recipe calculates time based on your repository's Git history and may not be accurate on some deployment platforms. Your host may be performing shallow clones, which do not retrieve the full Git history.

Recipe
Install Helper Packages

Install Day.js to modify and format times:

npm install dayjs

Create a Remark Plugin

This plugin uses execSync to run a Git command that returns the timestamp of the latest commit in ISO 8601 format. The timestamp is then added to the frontmatter of the file.

remark-modified-time.mjs
import { execSync } from "child_process";

export function remarkModifiedTime() {
  return function (tree, file) {
    const filepath = file.history[0];
    const result = execSync(`git log -1 --pretty="format:%cI" "${filepath}"`);
    file.data.astro.frontmatter.lastModified = result.toString();
  };
}

Using the file system instead of Git

Add the plugin to your config

astro.config.mjs
import { defineConfig } from 'astro/config';
import { remarkModifiedTime } from './remark-modified-time.mjs';

export default defineConfig({
  markdown: {
    remarkPlugins: [remarkModifiedTime],
  },
});

Now all Markdown documents will have a lastModified property in their frontmatter.

Display Last Modified Time

If your content is stored in a content collection, access the remarkPluginFrontmatter from the render(entry) function. Then render lastModified in your template wherever you would like it to appear.
+ +src/pages/posts/[slug].astro +--- +import { getCollection, render } from 'astro:content'; +import dayjs from "dayjs"; +import utc from "dayjs/plugin/utc"; + +dayjs.extend(utc); + +export async function getStaticPaths() { + const blog = await getCollection('blog'); + return blog.map(entry => ({ + params: { slug: entry.id }, + props: { entry }, + })); +} + +const { entry } = Astro.props; +const { Content, remarkPluginFrontmatter } = await render(entry); + +const lastModified = dayjs(remarkPluginFrontmatter.lastModified) + .utc() + .format("HH:mm:ss DD MMMM YYYY UTC"); +--- + + + ... + + ... +

<p>Last Modified: {lastModified}</p>

  ...

If you're using a Markdown layout, use the lastModified frontmatter property from Astro.props in your layout template.

src/layouts/BlogLayout.astro
---
import dayjs from "dayjs";
import utc from "dayjs/plugin/utc";

dayjs.extend(utc);

const lastModified = dayjs(Astro.props.frontmatter.lastModified)
  .utc()
  .format("HH:mm:ss DD MMMM YYYY UTC");
---

<p>{lastModified}</p>

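The collapsed "file system instead of Git" alternative referenced above can be sketched in a few lines, swapping `git log` for the file's mtime. The plugin name `remarkModifiedTimeFs` is illustrative, and mtime accuracy depends on how your host checks files out:

```javascript
// Sketch of a file-system alternative to the Git-based plugin above:
// read the file's modification time instead of running `git log`.
import { statSync } from "node:fs";

export function remarkModifiedTimeFs() {
  return function (tree, file) {
    const filepath = file.history[0];
    const { mtime } = statSync(filepath);
    file.data.astro.frontmatter.lastModified = mtime.toISOString();
  };
}
```

Wire it into `markdown.remarkPlugins` exactly like the Git version.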
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/AstroAddReadingTime.md b/apps/memoro/apps/landing/docs/AstroAddReadingTime.md
new file mode 100644
index 000000000..04adcf755
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/AstroAddReadingTime.md

Add reading time
Create a remark plugin which adds a reading time property to the frontmatter of your Markdown or MDX files. Use this property to display the reading time for each page.

Recipe
Install Helper Packages

Install these two helper packages:
- reading-time to calculate minutes read
- mdast-util-to-string to extract all text from your Markdown

npm install reading-time mdast-util-to-string

Create a Remark Plugin

This plugin uses the mdast-util-to-string package to get the Markdown file's text. The text is then passed to the reading-time package to calculate the reading time in minutes.

remark-reading-time.mjs
import getReadingTime from 'reading-time';
import { toString } from 'mdast-util-to-string';

export function remarkReadingTime() {
  return function (tree, { data }) {
    const textOnPage = toString(tree);
    const readingTime = getReadingTime(textOnPage);
    // readingTime.text will give us minutes read as a friendly string,
    // i.e. "3 min read"
    data.astro.frontmatter.minutesRead = readingTime.text;
  };
}

Add the plugin to your config:

astro.config.mjs
import { defineConfig } from 'astro/config';
import { remarkReadingTime } from './remark-reading-time.mjs';

export default defineConfig({
  markdown: {
    remarkPlugins: [remarkReadingTime],
  },
});

Now all Markdown documents will have a calculated minutesRead property in their frontmatter.

Display Reading Time

If your blog posts are stored in a content collection, access the remarkPluginFrontmatter from the render(entry) function. Then, render minutesRead in your template wherever you would like it to appear.
+ +src/pages/posts/[slug].astro +--- +import { getCollection, render } from 'astro:content'; + +export async function getStaticPaths() { + const blog = await getCollection('blog'); + return blog.map(entry => ({ + params: { slug: entry.id }, + props: { entry }, + })); +} + +const { entry } = Astro.props; +const { Content, remarkPluginFrontmatter } = await render(entry); +--- + + + ... + + ... +

<p>{remarkPluginFrontmatter.minutesRead}</p>

+ ... + + + +If you’re using a Markdown layout, use the minutesRead frontmatter property from Astro.props in your layout template. + +src/layouts/BlogLayout.astro +--- +const { minutesRead } = Astro.props.frontmatter; +--- + + + ... + +

<p>{minutesRead}</p>

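For intuition, the estimate the reading-time package produces can be approximated in a few lines (assuming the common default of roughly 200 words per minute; the real package also handles CJK text and returns additional fields):

```javascript
// Rough approximation of what the reading-time package computes:
// count whitespace-separated words, divide by a words-per-minute rate, round up.
function estimateReadingTime(text, wordsPerMinute = 200) {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const minutes = Math.max(1, Math.ceil(words / wordsPerMinute));
  return { words, text: `${minutes} min read` };
}

console.log(estimateReadingTime("word ".repeat(450)).text); // "3 min read"
```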
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/AstroIconsFor ExternalLinks.md b/apps/memoro/apps/landing/docs/AstroIconsFor ExternalLinks.md
new file mode 100644
index 000000000..ef005c569
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/AstroIconsFor ExternalLinks.md

Add icons to external links
Using a rehype plugin, you can identify and modify links in your Markdown files that point to external sites. This example adds icons to the end of each external link, so that visitors will know they are leaving your site.

Prerequisites
An Astro project using Markdown for content pages.

Recipe
Install the rehype-external-links plugin.

npm install rehype-external-links

Import the plugin into your astro.config.mjs file.

Pass rehypeExternalLinks to the rehypePlugins array, along with an options object that includes a content property. Set this property's type to text if you want to add plain text to the end of the link. To add HTML to the end of the link instead, set the property's type to raw.

// ...
import rehypeExternalLinks from 'rehype-external-links';

export default defineConfig({
  // ...
  markdown: {
    rehypePlugins: [
      [
        rehypeExternalLinks,
        {
          content: { type: 'text', value: ' 🔗' }
        }
      ],
    ]
  },
});

Note

The value of the content property is not represented in the accessibility tree. As such, it's best to make clear that the link is external in the surrounding content, rather than relying on the icon alone.
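Conceptually, the plugin walks the generated HTML tree (HAST) and appends the configured content node to every off-site anchor. A simplified, self-contained sketch of that transform (the real option handling, origin detection, and node types live in rehype-external-links):

```javascript
// Simplified sketch of what rehype-external-links does: visit every element
// node and append an icon text node to anchors whose href leaves the site.
function addExternalLinkIcons(node, siteOrigin = "https://example.com") {
  const href = node.properties && node.properties.href;
  if (node.tagName === "a" && typeof href === "string" &&
      /^https?:\/\//.test(href) && !href.startsWith(siteOrigin)) {
    node.children.push({ type: "text", value: " 🔗" });
  }
  for (const child of node.children || []) addExternalLinkIcons(child, siteOrigin);
}
```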
+ +Resources +rehype-external-links \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/AstroInternalizationReadMe.md b/apps/memoro/apps/landing/docs/AstroInternalizationReadMe.md new file mode 100644 index 000000000..845344f50 --- /dev/null +++ b/apps/memoro/apps/landing/docs/AstroInternalizationReadMe.md @@ -0,0 +1,321 @@ +Internationalization (i18n) Routing +Astro’s internationalization (i18n) features allow you to adapt your project for an international audience. This routing API helps you generate, use, and verify the URLs that your multi-language site produces. + +Astro’s i18n routing allows you to bring your multilingual content with support for configuring a default language, computing relative page URLs, and accepting preferred languages provided by your visitor’s browser. You can also specify fallback languages on a per-language basis so that your visitors can always be directed to existing content on your site. + +Routing Logic +Astro uses a middleware to implement its routing logic. This middleware function is placed in the first position where it awaits every Response coming from any additional middleware and each page route before finally executing its own logic. + +This means that operations (e.g. redirects) from your own middleware and your page logic are run first, your routes are rendered, and then the i18n middleware performs its own actions such as verifying that a localized URL corresponds to a valid route. + +You can also choose to add your own i18n logic in addition to or instead of Astro’s i18n middleware, giving you even more control over your routes while still having access to the astro:i18n helper functions. + +Configure i18n routing +Both a list of all supported languages (locales) and a default language (defaultLocale), which must be one of the languages listed in locales, need to be specified in an i18n configuration object. 
Additionally, you can configure more specific routing and fallback behavior to match your desired URLs.

astro.config.mjs
import { defineConfig } from "astro/config"
export default defineConfig({
  i18n: {
    locales: ["es", "en", "pt-br"],
    defaultLocale: "en",
  }
})

Create localized folders
Organize your content folders with localized content by language. Create individual /[locale]/ folders anywhere within src/pages/ and Astro's file-based routing will create your pages at corresponding URL paths.

Your folder names must match the items in locales exactly. Include a localized folder for your defaultLocale only if you configure prefixDefaultLocale: true to show a localized URL path for your default language (e.g. /en/about/).

src/
  pages/
    about.astro
    index.astro
    es/
      about.astro
      index.astro
    pt-br/
      about.astro
      index.astro

Note

The localized folders do not need to be at the root of the /pages/ folder.

Create links
With i18n routing configured, you can now compute links to pages within your site using the helper functions such as getRelativeLocaleUrl() available from the astro:i18n module. These generated links will always provide the correct, localized route and can help you correctly use, or check, URLs on your site.

You can also still write the links manually.

src/pages/es/index.astro
---
import { getRelativeLocaleUrl } from 'astro:i18n';

// defaultLocale is "es"
const aboutURL = getRelativeLocaleUrl("es", "about");
---

<a href="/get-started/">¡Vamos!</a>
<a href={getRelativeLocaleUrl('es', 'blog')}>Blog</a>
<a href={aboutURL}>Acerca</a>

routing
Astro's built-in file-based routing automatically creates URL routes for you based on your file structure within src/pages/.

When you configure i18n routing, information about this file structure (and the corresponding URL paths generated) is available to the i18n helper functions so they can generate, use, and verify the routes in your project.
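As a mental model, the relative URL computation behaves roughly like this (an illustrative sketch, not Astro's implementation; the real getRelativeLocaleUrl also honors base, trailing-slash settings, and custom locale paths):

```javascript
// Illustrative sketch of how a relative locale URL is assembled:
// the default locale gets no prefix unless prefixDefaultLocale is true.
function relativeLocaleUrl(locale, path, { defaultLocale = "en", prefixDefaultLocale = false } = {}) {
  const prefix = !prefixDefaultLocale && locale === defaultLocale ? "" : `/${locale}`;
  return `${prefix}/${path}/`;
}

console.log(relativeLocaleUrl("es", "about")); // "/es/about/"
console.log(relativeLocaleUrl("en", "about")); // "/about/"
```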
Many of these options can be used together for even more customization and per-language flexibility. + +You can even choose to implement your own routing logic manually for even greater control. + +prefixDefaultLocale +Added in: astro@3.5.0 + +This routing option defines whether or not your default language’s URLs should use a language prefix (e.g. /en/about/). + +All non-default supported languages will use a localized prefix (e.g. /fr/ or /french/) and content files must be located in appropriate folders. This configuration option allows you to specify whether your default language should also follow a localized URL structure. + +This setting also determines where the page files for your default language must exist (e.g. src/pages/about/ or src/pages/en/about) as the file structure and URL structure must match for all languages. + +"prefixDefaultLocale: false" (default): URLs in your default language will not have a /[locale]/ prefix. All other locales will. + +"prefixDefaultLocale: true": All URLs, including your default language, will have a /[locale]/ prefix. + +prefixDefaultLocale: false +astro.config.mjs +import { defineConfig } from "astro/config" +export default defineConfig({ + i18n: { + locales: ["es", "en", "fr"], + defaultLocale: "en", + routing: { + prefixDefaultLocale: false + } + } +}) + +This is the default value. 
Set this option when URLs in your default language will not have a /[locale]/ prefix and files in your default language exist at the root of src/pages/:

src/
  pages/
    about.astro
    index.astro
    es/
      about.astro
      index.astro
    fr/
      about.astro
      index.astro

src/pages/about.astro will produce the route example.com/about/
src/pages/fr/about.astro will produce the route example.com/fr/about/

prefixDefaultLocale: true
astro.config.mjs
import { defineConfig } from "astro/config"
export default defineConfig({
  i18n: {
    locales: ["es", "en", "fr"],
    defaultLocale: "en",
    routing: {
      prefixDefaultLocale: true
    }
  }
})

Set this option when all routes will have their /locale/ prefix in their URL and when all page content files, including those for your defaultLocale, exist in a localized folder:

src/
  pages/
    index.astro  // Note: this file is always required
    en/
      index.astro
      about.astro
    es/
      about.astro
      index.astro
    pt-br/
      about.astro
      index.astro

URLs without a locale prefix (e.g. example.com/about/) will return a 404 (not found) status code unless you specify a fallback strategy.

redirectToDefaultLocale
Added in: astro@4.2.0

Configures whether or not the home URL (/) generated by src/pages/index.astro will redirect to /<defaultLocale>.

Setting prefixDefaultLocale: true will also automatically set redirectToDefaultLocale: true in your routing config object. By default, the required src/pages/index.astro file will automatically redirect to the index page of your default locale.

You can opt out of this behavior by setting redirectToDefaultLocale: false. This allows you to have a site home page that exists outside of your configured locale folder structure.

manual
Added in: astro@4.6.0

When this option is enabled, Astro will disable its i18n middleware so that you can implement your own custom logic. No other routing options (e.g.
prefixDefaultLocale) may be configured with routing: "manual". + +You will be responsible for writing your own routing logic, or executing Astro’s i18n middleware manually alongside your own. + +astro.config.mjs +import { defineConfig } from "astro/config" +export default defineConfig({ + i18n: { + locales: ["es", "en", "fr"], + defaultLocale: "en", + routing: "manual" + } +}) + +Astro provides helper functions for your middleware so you can control your own default routing, exceptions, fallback behavior, error catching, etc: redirectToDefaultLocale(), notFound(), and redirectToFallback(): + +src/middleware.js +import { defineMiddleware } from "astro:middleware"; +import { redirectToDefaultLocale } from "astro:i18n"; // function available with `manual` routing +export const onRequest = defineMiddleware(async (ctx, next) => { + if (ctx.url.startsWith("/about")) { + return next(); + } else { + return redirectToDefaultLocale(302); + } +}) + +middleware function +The middleware function manually creates Astro’s i18n middleware. This allows you to extend Astro’s i18n routing instead of completely replacing it. 
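With routing: "manual", the locale decision is yours. At its core it is just inspecting the first path segment, along these lines (illustrative; a real middleware would also handle redirects and 404s):

```javascript
// Illustrative core of a manual i18n middleware: read the first path
// segment and fall back to the default locale when it is not a known locale.
const locales = ["es", "en", "fr"];

function localeFromPathname(pathname, defaultLocale = "en") {
  const first = pathname.split("/")[1];
  return locales.includes(first) ? first : defaultLocale;
}
```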
+ +You can run middleware with routing options in combination with your own middleware, using the sequence utility to determine the order: + +src/middleware.js +import {defineMiddleware, sequence} from "astro:middleware"; +import { middleware } from "astro:i18n"; // Astro's own i18n routing config + +export const userMiddleware = defineMiddleware(async (ctx, next) => { + // this response might come from Astro's i18n middleware, and it might return a 404 + const response = await next(); + // the /about page is an exception and we want to render it + if (ctx.url.startsWith("/about")) { + return new Response("About page", { + status: 200 + }); + } else { + return response; + } +}); + + +export const onRequest = sequence( + userMiddleware, + middleware({ + redirectToDefaultLocale: false, + prefixDefaultLocale: true + }) +) + +domains +Added in: astro@4.9.0 + +This routing option allows you to customize your domains on a per-language basis for server rendered projects using the @astrojs/node or @astrojs/vercel adapter with a site configured. + +Add i18n.domains to map any of your supported locales to custom URLs: + +astro.config.mjs +import { defineConfig } from "astro/config" +export default defineConfig({ + site: "https://example.com", + output: "server", // required, with no prerendered pages + adapter: node({ + mode: 'standalone', + }), + i18n: { + locales: ["es", "en", "fr", "ja"], + defaultLocale: "en", + routing: { + prefixDefaultLocale: false + }, + domains: { + fr: "https://fr.example.com", + es: "https://example.es" + } + } +}) + +All non-mapped locales will follow your prefixDefaultLocales configuration. However, even if this value is false, page files for your defaultLocale must also exist within a localized folder. For the configuration above, an /en/ folder is required. + +With the above configuration: + +The file /fr/about.astro will create the URL https://fr.example.com/about. +The file /es/about.astro will create the URL https://example.es/about. 
+The file /ja/about.astro will create the URL https://example.com/ja/about. +The file /en/about.astro will create the URL https://example.com/about. +The above URLs will also be returned by the getAbsoluteLocaleUrl() and getAbsoluteLocaleUrlList() functions. + +Fallback +When a page in one language doesn’t exist (e.g. a page that is not yet translated), instead of displaying a 404 page, you can choose to display fallback content from another locale on a per-language basis. This is useful when you do not yet have a page for every route, but you want to still provide some content to your visitors. + +Your fallback strategy consists of two parts: choosing which languages should fallback to which other languages (i18n.fallback) and choosing whether to perform a redirect or a rewrite to show the fallback content (i18n.routing.fallbackType added in Astro v4.15.0). + +For example, when you configure i18n.fallback: { fr: "es" }, Astro will ensure that a page is built in src/pages/fr/ for every page that exists in src/pages/es/. + +If any page does not already exist, then a page will be created depending on your fallbackType: + +With a redirect to the corresponding es route (default behavior). +With the content of the /es/ page (i18n.routing.fallbackType: "rewrite"). +For example, the configuration below sets es as the fallback locale for any missing fr routes. This means that a user visiting example.com/fr/my-page/ will be shown the content for example.com/es/my-page/ (without being redirected) instead of being taken to a 404 page when src/pages/fr/my-page.astro does not exist. + +astro.config.mjs +import { defineConfig } from "astro/config" +export default defineConfig({ + i18n: { + locales: ["es", "en", "fr"], + defaultLocale: "en", + fallback: { + fr: "es" + }, + routing: { + fallbackType: "rewrite" + } + } +}) + +Custom locale paths +In addition to defining your site’s supported locales as strings (e.g. 
“en”, “pt-br”), Astro also allows you to map an arbitrary number of browser-recognized language codes to a custom URL path. While locales can be strings of any format as long as they correspond to your project folder structure, codes must follow the browser's accepted syntax.

Pass an object to the locales array with a path key to define a custom URL prefix, and codes to indicate the languages mapped to this URL. In this case, your /[locale]/ folder name must match exactly the value of the path, and your URLs will be generated using the path value.

This is useful if you support multiple variations of a language (e.g. "fr", "fr-BR", and "fr-CA") and you want to have all these variations mapped under the same URL /fr/, or even customize it entirely (e.g. /french/):

astro.config.mjs
import { defineConfig } from "astro/config"
export default defineConfig({
  i18n: {
    locales: ["es", "en", {
      path: "french", // no slashes included
      codes: ["fr", "fr-BR", "fr-CA"]
    }],
    defaultLocale: "en",
    routing: {
      prefixDefaultLocale: true
    }
  }
})

When using functions from the astro:i18n virtual module to compute valid URL paths based on your configuration (e.g. getRelativeLocaleUrl()), use the path as the value for locale.

Limitations
This feature has some restrictions:

- The site option is mandatory.
- The output option must be set to "server".
- There cannot be any individual prerendered pages.

Astro relies on the following headers in order to support the feature:

- X-Forwarded-Host and Host. Astro will use the former, and if not present, will try the latter.
- X-Forwarded-Proto and URL#protocol of the server request.

Make sure that your server proxy/hosting platform is able to provide this information. Failing to retrieve these headers will result in a 404 (status code) page.
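The custom locale path mapping described above resolves each configured entry to a URL segment, roughly like this (a sketch of the mapping, not Astro internals):

```javascript
// Sketch of how a configured locale entry resolves to a URL path segment:
// plain strings map to themselves; { path, codes } objects map any of their
// codes to the custom path (mirrors the "french" config shown above).
const locales = ["es", "en", { path: "french", codes: ["fr", "fr-BR", "fr-CA"] }];

function pathForLocaleCode(code) {
  for (const entry of locales) {
    if (typeof entry === "string") {
      if (entry === code) return entry;
    } else if (entry.codes.includes(code)) {
      return entry.path;
    }
  }
  return undefined;
}
```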
+ +Browser language detection +Astro’s i18n routing allows you to access two properties for browser language detection in pages rendered on demand: Astro.preferredLocale and Astro.preferredLocaleList. All pages, including static prerendered pages, have access to Astro.currentLocale. + +These combine the browser’s Accept-Language header, and your locales (strings or codes) to automatically respect your visitor’s preferred languages. + +Astro.preferredLocale: Astro can compute a preferred locale for your visitor if their browser’s preferred locale is included in your locales array. This value is undefined if no such match exists. + +Astro.preferredLocaleList: An array of all locales that are both requested by the browser and supported by your website. This produces a list of all compatible languages between your site and your visitor. The value is [] if none of the browser’s requested languages are found in your locales array. If the browser does not specify any preferred languages, then this value will be i18n.locales. + +Astro.currentLocale: The locale computed from the current URL, using the syntax specified in your locales configuration. If the URL does not contain a /[locale]/ prefix, then the value will default to i18n.defaultLocale. + +In order to successfully match your visitors’ preferences, provide your codes using the same pattern used by the browser. \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/AstroRecipeLanguageSetup.md b/apps/memoro/apps/landing/docs/AstroRecipeLanguageSetup.md new file mode 100644 index 000000000..e075820cb --- /dev/null +++ b/apps/memoro/apps/landing/docs/AstroRecipeLanguageSetup.md @@ -0,0 +1,409 @@ +Add i18n features +In this recipe, you will learn how to use content collections and dynamic routing to build your own internationalization (i18n) solution and serve your content in different languages. 
Tip

In v4.0, Astro added built-in support for i18n routing that allows you to configure default and supported languages, and includes valuable helper functions to assist you in serving an international audience. If you want to use this instead, see our internationalization guide to learn about these features.

This example serves each language at its own subpath, e.g. example.com/en/blog for English and example.com/fr/blog for French.

If you prefer the default language to not be visible in the URL, unlike other languages, there are instructions to hide the default language below.

See the resources section for external links to related topics such as right-to-left (RTL) styling and choosing language tags.

Recipe
Set up pages for each language
Create a directory for each language you want to support. For example, en/ and fr/ if you are supporting English and French:

src/
  pages/
    en/
      about.astro
      index.astro
    fr/
      about.astro
      index.astro
    index.astro

Set up src/pages/index.astro to redirect to your default language. In static mode, this can be a meta refresh:

src/pages/index.astro
<meta http-equiv="refresh" content="0;url=/en/" />

This approach uses a meta refresh and will work however you deploy your site. Some static hosts also let you configure server redirects with a custom configuration file. See your deploy platform's documentation for more details.

Use collections for translated content
Create a folder in src/content/ for each type of content you want to include and add subdirectories for each supported language. For example, to support English and French blog posts:

src/
  content/
    blog/
      en/  (blog posts in English)
        post-1.md
        post-2.md
      fr/  (blog posts in French)
        post-1.md
        post-2.md

Create a src/content.config.ts file and export a collection for each type of content.
src/content.config.ts
import { defineCollection, z } from 'astro:content';

const blogCollection = defineCollection({
  schema: z.object({
    title: z.string(),
    author: z.string(),
    date: z.date()
  })
});

export const collections = {
  'blog': blogCollection
};

Read more about Content Collections.

Use dynamic routes to fetch and render content based on a lang and a slug parameter.

In static rendering mode, use getStaticPaths to map each content entry to a page:

src/pages/[lang]/blog/[...slug].astro
---
import { getCollection } from 'astro:content';

export async function getStaticPaths() {
  const pages = await getCollection('blog');

  const paths = pages.map(page => {
    const [lang, ...slug] = page.slug.split('/');
    return { params: { lang, slug: slug.join('/') || undefined }, props: page };
  });

  return paths;
}

const { lang, slug } = Astro.params;
const page = Astro.props;
const formattedDate = page.data.date.toLocaleString(lang);

const { Content } = await page.render();
---

<h1>{page.data.title}</h1>
<p>by {page.data.author} • {formattedDate}</p>
<Content />
+ + +Read more about dynamic routing. +Date formatting + +The example above uses the built-in toLocaleString() date-formatting method to create a human-readable string from the frontmatter date. This ensures the date and time are formatted to match the user’s language. + +Translate UI strings +Create dictionaries of terms to translate the labels for UI elements around your site. This allows your visitors to experience your site fully in their language. + +Create a src/i18n/ui.ts file to store your translation strings: + +src/i18n/ui.ts +export const languages = { + en: 'English', + fr: 'Français', +}; + +export const defaultLang = 'en'; + +export const ui = { + en: { + 'nav.home': 'Home', + 'nav.about': 'About', + 'nav.twitter': 'Twitter', + }, + fr: { + 'nav.home': 'Accueil', + 'nav.about': 'À propos', + }, +} as const; + +Create two helper functions: one to detect the page language based on the current URL, and one to get translations strings for different parts of the UI in src/i18n/utils.ts: + +src/i18n/utils.ts +import { ui, defaultLang } from './ui'; + +export function getLangFromUrl(url: URL) { + const [, lang] = url.pathname.split('/'); + if (lang in ui) return lang as keyof typeof ui; + return defaultLang; +} + +export function useTranslations(lang: keyof typeof ui) { + return function t(key: keyof typeof ui[typeof defaultLang]) { + return ui[lang][key] || ui[defaultLang][key]; + } +} + +Did you notice? + +In step 1, the nav.twitter string was not translated to French. You may not want every term translated, such as proper names or common industry terms. The useTranslations helper will return the default language’s value if a key is not translated. In this example, French users will also see “Twitter” in the nav bar. + +Import the helpers where needed and use them to choose the UI string that corresponds to the current language. 
For example, a nav component might look like:

src/components/Nav.astro
---
import { getLangFromUrl, useTranslations } from '../i18n/utils';

const lang = getLangFromUrl(Astro.url);
const t = useTranslations(lang);
---
<ul>
  <li>
    <a href={`/${lang}/home/`}>{t('nav.home')}</a>
  </li>
  <li>
    <a href={`/${lang}/about/`}>{t('nav.about')}</a>
  </li>
  <li>
    <a href="https://twitter.com/astrodotbuild">{t('nav.twitter')}</a>
  </li>
</ul>

Each page must have a lang attribute on the <html> element that matches the language on the page. In this example, a reusable layout extracts the language from the current route:

src/layouts/Base.astro
---
import { getLangFromUrl } from '../i18n/utils';

const lang = getLangFromUrl(Astro.url);
---
<html lang={lang}>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <meta name="generator" content={Astro.generator} />
    <title>Astro</title>
  </head>
  <body>
    <slot />
  </body>
</html>

You can then use this base layout to ensure that pages use the correct lang attribute automatically.

src/pages/en/about.astro
---
import Base from '../../layouts/Base.astro';
---
<Base>
  <h1>About me</h1>
  <p>...</p>
</Base>

Let users switch between languages

Create links to the different languages you support so users can choose the language they want to read your site in.

Create a component to show a link for each language:

src/components/LanguagePicker.astro
---
import { languages } from '../i18n/ui';
---
<ul>
  {Object.entries(languages).map(([lang, label]) => (
    <li>
      <a href={`/${lang}/`}>{label}</a>
    </li>
  ))}
</ul>

Add <LanguagePicker /> to your site so it is shown on every page. The example below adds it to the site footer in a base layout:

src/layouts/Base.astro
---
import LanguagePicker from '../components/LanguagePicker.astro';
import { getLangFromUrl } from '../i18n/utils';

const lang = getLangFromUrl(Astro.url);
---
<html lang={lang}>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <meta name="generator" content={Astro.generator} />
    <title>Astro</title>
  </head>
  <body>
    <slot />
    <footer>
      <LanguagePicker />
    </footer>
  </body>
</html>
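To see the fallback behavior concretely, here is the recipe's dictionary and helper reduced to a self-contained snippet (plain TypeScript, outside any Astro component):

```typescript
// The recipe's dictionary and useTranslations helper: keys missing from a
// language resolve to the default language's value.
const defaultLang = 'en';

const ui = {
  en: {
    'nav.home': 'Home',
    'nav.about': 'About',
    'nav.twitter': 'Twitter',
  },
  fr: {
    'nav.home': 'Accueil',
    'nav.about': 'À propos',
  },
} as const;

function useTranslations(lang: keyof typeof ui) {
  return function t(key: keyof (typeof ui)[typeof defaultLang]): string {
    // Fall back to the default language when the key is untranslated.
    return (ui[lang] as Record<string, string>)[key] || ui[defaultLang][key];
  };
}

const t = useTranslations('fr');
console.log(t('nav.home'));    // "Accueil"
console.log(t('nav.twitter')); // untranslated in fr, falls back to "Twitter"
```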
Hide default language in the URL

Create a directory for each language except the default language. For example, store your default language pages directly in pages/, and your translated pages in fr/:

src/
├── pages/
│   ├── about.astro
│   ├── index.astro
│   └── fr/
│       ├── about.astro
│       └── index.astro

Add another line to the src/i18n/ui.ts file to toggle the feature:

src/i18n/ui.ts
export const showDefaultLang = false;

Add a helper function to src/i18n/utils.ts, to translate paths based on the current language:

src/i18n/utils.ts
import { ui, defaultLang, showDefaultLang } from './ui';

export function useTranslatedPath(lang: keyof typeof ui) {
  return function translatePath(path: string, l: string = lang) {
    return !showDefaultLang && l === defaultLang ? path : `/${l}${path}`
  }
}

Import the helper where needed. For example, a nav component might look like:

src/components/Nav.astro
---
import { getLangFromUrl, useTranslations, useTranslatedPath } from '../i18n/utils';

const lang = getLangFromUrl(Astro.url);
const t = useTranslations(lang);
const translatePath = useTranslatedPath(lang);
---
<ul>
  <li>
    <a href={translatePath('/home/')}>{t('nav.home')}</a>
  </li>
  <li>
    <a href={translatePath('/about/')}>{t('nav.about')}</a>
  </li>
  <li>
    <a href="https://twitter.com/astrodotbuild">{t('nav.twitter')}</a>
  </li>
</ul>

The helper function can also be used to translate paths for a specific language. For example, when users switch between languages:

src/components/LanguagePicker.astro
---
import { languages } from '../i18n/ui';
import { getLangFromUrl, useTranslatedPath } from '../i18n/utils';

const lang = getLangFromUrl(Astro.url);
const translatePath = useTranslatedPath(lang);
---
<ul>
  {Object.entries(languages).map(([lang, label]) => (
    <li>
      <a href={translatePath('/', lang)}>{label}</a>
    </li>
  ))}
</ul>
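A quick self-contained check of how translatePath behaves once showDefaultLang is false (plain TypeScript, mirroring the helper above):

```typescript
// With showDefaultLang = false, default-language paths stay unprefixed
// while every other language gets a /<lang> prefix.
const defaultLang = 'en';
const showDefaultLang = false;

function useTranslatedPath(lang: string) {
  return function translatePath(path: string, l: string = lang) {
    return !showDefaultLang && l === defaultLang ? path : `/${l}${path}`;
  };
}

const translatePath = useTranslatedPath('fr');
console.log(translatePath('/about/'));       // "/fr/about/"
console.log(translatePath('/about/', 'en')); // "/about/" (default language hidden)
```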
Translate Routes

Translate the routes of your pages for each language.

Add route mappings to src/i18n/ui.ts:

src/i18n/ui.ts
export const routes = {
  de: {
    'services': 'leistungen',
  },
  fr: {
    'services': 'prestations-de-service',
  },
}

Update the useTranslatedPath helper function in src/i18n/utils.ts to add route translation logic:

src/i18n/utils.ts
import { ui, defaultLang, showDefaultLang, routes } from './ui';

export function useTranslatedPath(lang: keyof typeof ui) {
  return function translatePath(path: string, l: string = lang) {
    const pathName = path.replaceAll('/', '')
    const hasTranslation = defaultLang !== l && routes[l] !== undefined && routes[l][pathName] !== undefined
    const translatedPath = hasTranslation ? '/' + routes[l][pathName] : path

    return !showDefaultLang && l === defaultLang ? translatedPath : `/${l}${translatedPath}`
  }
}

Create a helper function in src/i18n/utils.ts to get the route, if it exists, based on the current URL:

src/i18n/utils.ts
import { ui, defaultLang, showDefaultLang, routes } from './ui';

export function getRouteFromUrl(url: URL): string | undefined {
  const pathname = new URL(url).pathname;
  const parts = pathname?.split('/');
  const path = parts.pop() || parts.pop();

  if (path === undefined) {
    return undefined;
  }

  const currentLang = getLangFromUrl(url);

  if (defaultLang === currentLang) {
    const route = Object.values(routes)[0];
    return route[path] !== undefined ? route[path] : undefined;
  }

  const getKeyByValue = (obj: Record<string, string>, value: string): string | undefined => {
    return Object.keys(obj).find((key) => obj[key] === value);
  }

  const reversedKey = getKeyByValue(routes[currentLang], path);

  if (reversedKey !== undefined) {
    return reversedKey;
  }

  return undefined;
}

The helper function can be used to get a translated route.
For example, when no translated route is defined, the user will be redirected to the home page:

src/components/LanguagePicker.astro
---
import { languages } from '../i18n/ui';
import { getRouteFromUrl } from '../i18n/utils';

const route = getRouteFromUrl(Astro.url);
---
<ul>
  {Object.entries(languages).map(([lang, label]) => (
    <li>
      <a href={`/${lang}/${route ? route : ''}`}>{label}</a>
    </li>
  ))}
</ul>
+ +Resources +Choosing a Language Tag +Right-to-left (RTL) Styling 101 +Community libraries +astro-i18next — An Astro integration for i18next including some utility components. +astro-i18n — A TypeScript-first internationalization library for Astro. +astro-i18n-aut — An Astro integration for i18n that supports the defaultLocale without page generation. The integration is adapter agnostic and UI framework agnostic. +astro-react-i18next — An Astro integration that seamlessly enables the use of i18next and react-i18next in React components on Astro websites. +paraglide — A fully type-safe i18n library specifically designed for partial hydration patterns like Astro islands. \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/AstroSitemapReadMe.md b/apps/memoro/apps/landing/docs/AstroSitemapReadMe.md new file mode 100644 index 000000000..b06ad6150 --- /dev/null +++ b/apps/memoro/apps/landing/docs/AstroSitemapReadMe.md @@ -0,0 +1,333 @@ +@astrojs/ +sitemap +v3.2.1 +GitHub +npm +Changelog +This Astro integration generates a sitemap based on your pages when you build your Astro project. + +Why Astro Sitemap +A Sitemap is an XML file that outlines all of the pages, videos, and files on your site. Search engines like Google read this file to crawl your site more efficiently. See Google’s own advice on sitemaps to learn more. + +A sitemap file is recommended for large multi-page sites. If you don’t use a sitemap, most search engines will still be able to list your site’s pages, but a sitemap is a great way to ensure that your site is as search engine friendly as possible. + +With Astro Sitemap, you don’t have to worry about creating this XML file yourself: the Astro Sitemap integration will crawl your statically-generated routes and create the sitemap file, including dynamic routes like [...slug] or src/pages/[lang]/[version]/info.astro generated by getStaticPaths(). + +This integration cannot generate sitemap entries for dynamic routes in SSR mode. 
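For intuition about what gets generated: a sitemap is plain XML with one url/loc entry per page. The toy generator below only illustrates that shape — it is not the integration's actual code:

```typescript
// Toy sketch of a sitemap's shape: one <url><loc> entry per page URL,
// wrapped in a <urlset>. The real @astrojs/sitemap integration also handles
// index files, entry limits, and optional fields like <lastmod>.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((u) => `  <url><loc>${u}</loc></url>`)
    .join('\n');
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    '</urlset>',
  ].join('\n');
}

const xml = buildSitemap(['https://stargazers.club/', 'https://stargazers.club/about/']);
console.log(xml);
```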
Installation

Astro includes an astro add command to automate the setup of official integrations. If you prefer, you can install integrations manually instead.

Run one of the following commands in a new terminal window.

npx astro add sitemap

If you run into any issues, feel free to report them to us on GitHub and try the manual installation steps below.

Manual Install

First, install the @astrojs/sitemap package using your package manager.

npm install @astrojs/sitemap

Then, apply the integration to your astro.config.* file using the integrations property:

import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  // ...
  integrations: [sitemap()],
});

Usage

@astrojs/sitemap needs to know your site's deployed URL to generate a sitemap.

Add your site's URL as the site option in astro.config.mjs. This must begin with http:// or https://.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [sitemap()],
  // ...
});

With the sitemap integration configured, sitemap-index.xml and sitemap-0.xml files will be added to your output directory when building your site.

sitemap-index.xml links to all the numbered sitemap files. sitemap-0.xml lists the pages on your site. For extremely large sites, there may also be additional numbered files like sitemap-1.xml and sitemap-2.xml.

Example of generated files for a two-page website

Sitemap discovery

You can make it easier for crawlers to find your sitemap with links in your site's <head> and robots.txt file.
Sitemap link in <head>

Add a <link> element to your site's <head> pointing to the sitemap index file:

src/layouts/Layout.astro
<head>
  <link rel="sitemap" href="/sitemap-index.xml" />
</head>

Sitemap link in robots.txt

If you have a robots.txt for your website, you can add the URL for the sitemap index to help crawlers:

public/robots.txt
User-agent: *
Allow: /

Sitemap: https://<YOUR SITE>/sitemap-index.xml

If you want to reuse the site value from astro.config.mjs, you can also generate robots.txt dynamically. Instead of using a static file in the public/ directory, create a src/pages/robots.txt.ts file and add the following code:

src/pages/robots.txt.ts
import type { APIRoute } from 'astro';

const getRobotsTxt = (sitemapURL: URL) => `
User-agent: *
Allow: /

Sitemap: ${sitemapURL.href}
`;

export const GET: APIRoute = ({ site }) => {
  const sitemapURL = new URL('sitemap-index.xml', site);
  return new Response(getRobotsTxt(sitemapURL));
};

Configuration

To configure this integration, pass an object to the sitemap() function in astro.config.mjs.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  integrations: [
    sitemap({
      // configuration options
    }),
  ],
});

filter

All pages are included in your sitemap by default. By adding a custom filter function, you can filter included pages by URL.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      filter: (page) => page !== 'https://stargazers.club/secret-vip-lounge/',
    }),
  ],
});

The function will be called for every page on your site. The page function parameter is the full URL of the page currently under consideration, including your site domain. Return true to include the page in your sitemap, and false to leave it out.

To filter multiple pages, add arguments with target URLs.
astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      filter: (page) =>
        page !== 'https://stargazers.club/secret-vip-lounge-1/' &&
        page !== 'https://stargazers.club/secret-vip-lounge-2/' &&
        page !== 'https://stargazers.club/secret-vip-lounge-3/' &&
        page !== 'https://stargazers.club/secret-vip-lounge-4/',
    }),
  ],
});

customPages

In some cases, a page might be part of your deployed site but not part of your Astro project. If you'd like to include a page in your sitemap that isn't created by Astro, you can use this option.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      customPages: ['https://stargazers.club/external-page', 'https://stargazers.club/external-page2'],
    }),
  ],
});

entryLimit

The maximum number of entries per sitemap file. The default value is 45000. A sitemap index and multiple sitemaps are created if you have more entries. See this explanation of splitting up a large sitemap.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      entryLimit: 10000,
    }),
  ],
});

changefreq, lastmod, and priority

These options correspond to the <changefreq>, <lastmod>, and <priority> tags in the Sitemap XML specification.

Note that changefreq and priority are ignored by Google.

Note

Due to limitations of Astro's Integration API, this integration can't analyze a given page's source code. This configuration option can set changefreq, lastmod and priority on a site-wide basis; see the next option serialize for how you can set these values on a per-page basis.
astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      changefreq: 'weekly',
      priority: 0.7,
      lastmod: new Date('2022-02-24'),
    }),
  ],
});

serialize

A function called for each sitemap entry just before writing to disk. This function can be asynchronous.

It receives as its parameter a SitemapItem object that can have these properties:

- url (absolute page URL). This is the only property that is guaranteed to be on SitemapItem.
- changefreq
- lastmod (ISO formatted date, String type)
- priority
- links

The links property contains a LinkItem list of alternate pages including a parent page.

The LinkItem type has two fields: url (the fully-qualified URL for the version of this page for the specified language) and lang (a supported language code targeted by this version of the page).

The serialize function should return SitemapItem, touched or not.

The example below shows the ability to add sitemap-specific properties individually.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      serialize(item) {
        if (/exclude-from-sitemap/.test(item.url)) {
          return undefined;
        }
        if (/your-special-page/.test(item.url)) {
          item.changefreq = 'daily';
          item.lastmod = new Date();
          item.priority = 0.9;
        }
        return item;
      },
    }),
  ],
});

i18n

To localize a sitemap, pass an object to this i18n option.

This object has two required properties:

- defaultLocale: String. Its value must exist as one of the locales keys.
- locales: Record<String, String>, key/value pairs. The key is used to look for a locale part in a page path. The value is a language attribute; only the English alphabet and hyphen are allowed.

Read more about language attributes.
Read more about localization.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://stargazers.club',
  integrations: [
    sitemap({
      i18n: {
        defaultLocale: 'en', // All urls that don't contain `es` or `fr` after `https://stargazers.club/` will be treated as default locale, i.e. `en`
        locales: {
          en: 'en-US', // The `defaultLocale` value must be present in `locales` keys
          es: 'es-ES',
          fr: 'fr-CA',
        },
      },
    }),
  ],
});

The resulting sitemap looks like this:

sitemap-0.xml
...
<url>
  <loc>https://stargazers.club/</loc>
  <xhtml:link rel="alternate" hreflang="en-US" href="https://stargazers.club/"/>
  <xhtml:link rel="alternate" hreflang="es-ES" href="https://stargazers.club/es/"/>
  <xhtml:link rel="alternate" hreflang="fr-CA" href="https://stargazers.club/fr/"/>
</url>
<url>
  <loc>https://stargazers.club/es/</loc>
  <xhtml:link rel="alternate" hreflang="en-US" href="https://stargazers.club/"/>
  <xhtml:link rel="alternate" hreflang="es-ES" href="https://stargazers.club/es/"/>
  <xhtml:link rel="alternate" hreflang="fr-CA" href="https://stargazers.club/fr/"/>
</url>
<url>
  <loc>https://stargazers.club/fr/</loc>
  <xhtml:link rel="alternate" hreflang="en-US" href="https://stargazers.club/"/>
  <xhtml:link rel="alternate" hreflang="es-ES" href="https://stargazers.club/es/"/>
  <xhtml:link rel="alternate" hreflang="fr-CA" href="https://stargazers.club/fr/"/>
</url>
<url>
  <loc>https://stargazers.club/es/second-page/</loc>
  <xhtml:link rel="alternate" hreflang="en-US" href="https://stargazers.club/second-page/"/>
  <xhtml:link rel="alternate" hreflang="es-ES" href="https://stargazers.club/es/second-page/"/>
  <xhtml:link rel="alternate" hreflang="fr-CA" href="https://stargazers.club/fr/second-page/"/>
</url>
...

xslURL

The URL of an XSL stylesheet to style and prettify your sitemap.

The value set can be either a path relative to your configured site URL for a local stylesheet, or can be an absolute URL link to an external stylesheet.

astro.config.mjs
import { defineConfig } from 'astro/config';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://example.com',
  integrations: [
    sitemap({
      xslURL: '/sitemap.xsl'
    }),
  ],
});
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/Internal-Collections.md b/apps/memoro/apps/landing/docs/Internal-Collections.md
new file mode 100644
index 000000000..e442b7f8a
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/Internal-Collections.md
@@ -0,0 +1,254 @@
# Internal Collections Documentation

## Overview

This project supports **private/internal content collections** that are not publicly accessible. These collections are used for internal purposes such as marketing strategy, product development, and team documentation.
## Conventions

### Naming Convention
- **Prefix `_` (underscore)**: All internal collections begin with an underscore
- Examples: `_personas`, `_internal-docs`, `_drafts`

### Directory Structure
```
src/content/
├── _personas/       # Internal: marketing personas
│   ├── de/
│   └── en/
├── blog/            # Public: blog articles
├── features/        # Public: features
└── ...
```

## Implemented Internal Collections

### 1. Personas (`_personas`)

**Purpose**: Detailed target-audience profiles for marketing and product development

**Schema highlights**:
- Demographics (age, location, education, income)
- Professional profile (job, industry, responsibilities)
- Psychographics (personality, values, motivations, frustrations)
- Behavior (tech affinity, work style, tools)
- Memoro context (use cases, benefits, concerns, features)
- Marketing relevance (segment, channels, messaging)

**Access**: `/admin/personas` (dev mode only)

## Technical Implementation

### 1. Collection Definition

In `src/content/config.ts`:

```typescript
// Private collection with _ prefix
const _personasCollection = defineCollection({
  type: 'content',
  schema: z.object({
    // Schema definition
    visibility: z.enum(['internal', 'team', 'stakeholders']).default('internal'),
    status: z.enum(['draft', 'active', 'archived']).default('draft'),
    // ... additional fields
  })
});

export const collections = {
  // Public collections
  'blog': blogCollection,
  'features': featuresCollection,

  // Private collections (with _ prefix)
  '_personas': _personasCollection,
};
```

### 2. Access Control

Admin pages implement access control:

```astro
---
// Only in development mode or with a special query parameter
const isDev = import.meta.env.DEV;
const hasAccess = isDev || Astro.url.searchParams.has('access');

if (!hasAccess) {
  return Astro.redirect('/404');
}
---
```

### 3.
No Public Routes

**Important**: NO public routes are created for internal collections:

```astro
// ❌ Do NOT create:
src/pages/[lang]/personas/[...slug].astro

// ✅ ONLY admin routes:
src/pages/admin/personas.astro
```

## Best Practices

### 1. Security

- **Never** create public routes for internal collections
- Always protect admin pages with access control
- Do not commit sensitive data to Git (use `.gitignore` for highly confidential content)

### 2. Organization

```markdown
---
# Metadata for internal documents
status: "draft" | "active" | "archived"
visibility: "internal" | "team" | "stakeholders"
owner: "Marketing Team"
contributors: ["Product", "Sales"]
validatedAt: 2024-01-10T10:00:00Z
---
```

### 3. Usage in Code

```typescript
// Load internal collections
import { getCollection } from 'astro:content';

// With error handling
let personas = [];
try {
  personas = await getCollection('_personas');
  // Filter by status
  const activePersonas = personas.filter(p => p.data.status === 'active');
} catch (error) {
  console.log('Internal collection not available');
}
```

## Adding a New Internal Collection

### Step 1: Define the schema

In `src/content/config.ts`:

```typescript
const _myInternalCollection = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    visibility: z.enum(['internal', 'team', 'public']).default('internal'),
    // ... additional fields
  })
});
```

### Step 2: Add it to the collections export

```typescript
export const collections = {
  // ...
other collections
  '_myInternal': _myInternalCollection,
};
```

### Step 3: Create the content directory

```bash
mkdir -p src/content/_myInternal/de
mkdir -p src/content/_myInternal/en
```

### Step 4: Create an admin interface (optional)

```astro
// src/pages/admin/my-internal.astro
---
// Access control
const isDev = import.meta.env.DEV;
if (!isDev) return Astro.redirect('/404');

const items = await getCollection('_myInternal');
---
```

## Maintenance and Governance

### Responsibilities

| Collection | Owner | Review cycle | Last validated |
|-----------|--------|---------------|-------------------|
| `_personas` | Marketing Team | Quarterly | January 2024 |
| `_internal-docs` | Product Team | Monthly | - |

### Archiving

Outdated content should be archived rather than deleted:

```yaml
status: "archived"
archivedAt: 2024-01-15T10:00:00Z
archiveReason: "Outdated after product pivot"
```

## FAQ

**Q: Can internal collections be used in production?**
A: Yes, but only via protected admin interfaces or internal tools, never via public URLs.

**Q: How do I prevent accidental publication?**
A:
1. Use the `_` prefix consistently
2. Do not create public routes
3. Implement access control in admin pages
4. Use the `visibility` and `status` fields

**Q: Can internal collections appear in the sitemap?**
A: No. Since no public routes are created, they automatically do not appear in the sitemap.

**Q: How do I secure highly sensitive data?**
A:
1. Use `.gitignore` for highly sensitive content
2. Use environment variables for secrets
3.
Implement robust authentication for admin areas

## Example: Personas Collection

The `_personas` collection demonstrates best practices:

```markdown
---
name: "Sabine Schmidt"
title: "Die gestresste Projektmanagerin"

# Structured data
demographics:
  age: 38
  location: "München"

# Status management
status: "active"
visibility: "internal"

# Tracking
createdAt: 2024-01-15T10:00:00Z
lastUpdated: 2024-01-15T10:00:00Z
validatedAt: 2024-01-10T10:00:00Z

# Responsibilities
owner: "Marketing Team"
contributors: ["Product", "Sales"]
---
```

## Contact

For questions about internal collections:
- Marketing personas: marketing@memoro.ai
- Technical implementation: dev@memoro.ai
\ No newline at end of file
diff --git a/apps/memoro/apps/landing/docs/PROJECT-ARCHITECTURE.md b/apps/memoro/apps/landing/docs/PROJECT-ARCHITECTURE.md
new file mode 100644
index 000000000..dbf2b0fd3
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/PROJECT-ARCHITECTURE.md
@@ -0,0 +1,394 @@
# Memoro Website - Projekt-Architektur-Dokumentation

## 📋 Übersicht

Die Memoro-Website ist eine **mehrsprachige Marketing-Website** für eine KI-gestützte App zur Gesprächsdokumentation und Notizenerstellung. Die Website wurde mit **Astro 5.3.0** als statische Website entwickelt und unterstützt Deutsch (de) als Standardsprache sowie Englisch (en).
+ +## 🎯 Projektziel + +Die Website dient als zentrale Marketing- und Informationsplattform für die Memoro-App und bietet: +- Produktpräsentation und Feature-Übersicht +- Anleitungen und Dokumentation +- Blog und News-Bereich +- Team-Vorstellung +- Preisgestaltung und Pläne +- Branchenspezifische Lösungen +- Download-Möglichkeiten + +## 🛠 Technologie-Stack + +### Core Framework +- **Astro 5.3.0**: Static Site Generator mit hervorragender Performance +- **TypeScript**: Strict Mode für Typsicherheit +- **MDX**: Für erweiterte Content-Erstellung mit Komponenten + +### Styling & UI +- **Tailwind CSS 3.4**: Utility-first CSS Framework +- **Custom Theme System**: Zentralisierte Design-Tokens in `/src/theme/index.js` +- **Typography Plugin**: Für optimale Text-Darstellung +- **Dark Theme**: Durchgängiges dunkles Design (#181818 Hintergrund) + +### Content Management +- **Astro Content Collections**: Strukturierte Content-Verwaltung mit Zod-Schemas +- **15+ Content-Typen**: Blog, Team, Features, Guides, Industries, etc. 
+- **Mehrsprachigkeit**: Vollständige de/en Unterstützung + +### Integrationen +- **Sitemap**: Automatische Generierung für alle Sprachen +- **RSS Feeds**: Für jede Content-Collection und Sprache +- **Icons**: Astro-Icon mit Material Design Icons (MDI) +- **Analytics**: Plausible & Umami Integration + +## 📁 Projektstruktur + +``` +/ +├── src/ +│ ├── components/ # Wiederverwendbare Astro-Komponenten +│ │ ├── atoms/ # Kleine, atomare Komponenten +│ │ ├── detail/ # Detail-Seiten-Komponenten +│ │ ├── experiments/ # A/B-Testing Komponenten +│ │ ├── industries/ # Branchen-spezifische Komponenten +│ │ └── layouts/ # Layout-Komponenten +│ │ +│ ├── content/ # Alle Inhalte mit Zod-Schemas +│ │ ├── blog/ # Blog-Artikel (de/en Unterordner) +│ │ ├── team/ # Team-Profile +│ │ ├── features/ # Feature-Beschreibungen +│ │ ├── guides/ # Anleitungen +│ │ ├── industries/ # Branchenlösungen +│ │ ├── testimonials/ # Kundenstimmen +│ │ ├── blueprints/ # Vorlagen für Aufnahmen +│ │ ├── memories/ # Memory-Typen +│ │ ├── faqs/ # Häufige Fragen +│ │ ├── changelog/ # Produkt-Updates +│ │ ├── statistics/ # Nutzungsstatistiken +│ │ ├── calendar/ # Content-Kalender +│ │ ├── authors/ # Autoren-Profile +│ │ ├── wallpapers/ # Hintergrundbilder +│ │ ├── contracts/ # Rechtliche Dokumente +│ │ ├── dataprotection/# Datenschutz-Dokumente +│ │ └── pages/ # Hauptseiten-Inhalte +│ │ +│ ├── i18n/ # Internationalisierung +│ │ ├── ui.ts # UI-Übersetzungen +│ │ └── utils.ts # i18n-Hilfsfunktionen +│ │ +│ ├── layouts/ # Seiten-Layouts +│ │ ├── Layout.astro # Standard-Layout +│ │ └── HomeLayout.astro # Spezielles Home-Layout +│ │ +│ ├── pages/ # Routing-Struktur +│ │ ├── [lang]/ # Dynamisches Sprach-Routing +│ │ ├── admin/ # Admin-Bereich +│ │ ├── de/ # Deutsche RSS-Feeds +│ │ └── en/ # Englische RSS-Feeds +│ │ +│ ├── styles/ # Globale Styles +│ │ └── base.css # Basis-CSS mit Tailwind +│ │ +│ ├── theme/ # Theme-Konfiguration +│ │ └── index.js # Design-System-Tokens +│ │ +│ ├── utils/ # Hilfsfunktionen +│ │ └── 
experiments.ts # A/B-Testing Utilities +│ │ +│ └── middleware.ts # Request-Middleware für i18n +│ +├── public/ # Statische Assets +│ ├── images/ # Bilder organisiert nach Typ +│ │ ├── blog/ +│ │ ├── brand/ +│ │ ├── guides/ +│ │ ├── industries/ +│ │ ├── product_photos/ +│ │ ├── screenshots/ +│ │ ├── team/ +│ │ └── wallpaper/ +│ └── rss/ # RSS-Feed Styles +│ +├── docs/ # Projekt-Dokumentation +├── context/ # Kontext-Dokumente für KI +├── plans/ # Projekt-Pläne +└── posts/ # Social Media Posts +``` + +## 🌍 Internationalisierung (i18n) + +### Konfiguration +- **Default Locale**: Deutsch (`de`) +- **Unterstützte Sprachen**: Deutsch (`de`), Englisch (`en`) +- **Routing**: Präfix-basiert (`/de/...`, `/en/...`) +- **Fallback**: Automatische Weiterleitung zur Standardsprache + +### Implementierung +- **Middleware**: Behandelt Sprach-Weiterleitungen und 404s +- **Content-Organisation**: Sprachspezifische Unterordner in Collections +- **UI-Übersetzungen**: Zentralisiert in `src/i18n/ui.ts` +- **Sitemap**: Enthält alle Sprachversionen + +## 📚 Content Collections + +### Übersicht der Collections + +Die Website nutzt **15+ typisierte Content Collections** mit Zod-Schemas: + +1. **blog**: Artikel mit Metadaten (Autor, Tags, Kategorie) +2. **team**: Team-Profile mit Rollen und Social Links +3. **features**: Produkt-Features mit Icons und Kategorien +4. **guides**: Tutorials mit Schwierigkeitsgrad und Dauer +5. **industries**: Branchenlösungen mit Statistiken und FAQs +6. **testimonials**: Kundenstimmen nach Typ kategorisiert +7. **blueprints**: Aufnahme-Vorlagen für verschiedene Szenarien +8. **memories**: KI-generierte Inhaltstypen +9. **faqs**: Häufige Fragen nach Kategorien +10. **changelog**: Produkt-Updates und Release Notes +11. **statistics**: Wochen- und Monatsberichte +12. **calendar**: Content-Planung mit Events +13. **authors**: Autoren-Profile mit Statistiken +14. **wallpapers**: Hintergrundbilder mit Download-Tracking +15. 
**dataprotection**: Datenschutz-Dokumente +16. **contracts**: Rechtliche Dokumente +17. **pages**: Hauptseiten-Inhalte + +### Schema-Validierung + +Jede Collection hat ein **striktes Zod-Schema** für: +- Typ-Sicherheit +- Konsistente Datenstruktur +- Automatische Validierung beim Build +- IntelliSense-Unterstützung in der IDE + +## 🎨 Design-System + +### Theme-Konfiguration + +Zentralisiertes Theme in `/src/theme/index.js`: + +```javascript +{ + colors: { + primary: '#3B82F6', // Blau + secondary: '#10B981', // Grün + background: { + global: '#181818', // Dunkler Hintergrund + card: 'rgba(50, 50, 50, 0.8)', + cardHover: 'rgba(50, 50, 50, 0.99)' + }, + text: { + primary: '#F9FAFB', // Heller Text + secondary: '#D1D5DB', // Sekundärer Text + tertiary: '#9CA3AF' // Tertiärer Text + } + } +} +``` + +### Tailwind-Erweiterungen + +- **Custom Font Sizes**: Hero, Display, Heading mit responsiven Varianten +- **Typography Plugin**: Angepasst für Dark Theme +- **Custom Utilities**: scrollbar-none, bg-gradient-radial +- **Group-Open Variant**: Für Details-Elemente + +## 🚀 Build & Deployment + +### Build-Prozess +```bash +npm run dev # Entwicklungsserver (localhost:4321) +npm run build # Production Build nach ./dist/ +npm run preview # Vorschau des Production Builds +npm run astro check # TypeScript Type-Checking +``` + +### Statische Generierung +- **Vollständig statisch**: Alle Seiten werden zur Build-Zeit generiert +- **Keine Server-Runtime**: Optimale Performance +- **getStaticPaths()**: Für dynamische Routen +- **Optimierte Assets**: Automatische Bild-Optimierung mit Sharp + +## 📊 Analytics & Tracking + +### Plausible Analytics +- **Cookieless Tracking**: DSGVO-konform +- **Event-Tracking**: Detailliertes Nutzerverhalten +- **Funnel-Analyse**: Conversion-Tracking +- **Custom Events**: Download, CTA-Clicks, etc. 
+ +### Umami Analytics +- **Self-Hosted Option**: Datenschutz-freundlich +- **Real-Time Stats**: Live-Besucherdaten +- **Keine Cookies**: Privacy-first Ansatz + +## 🧪 Experimente & A/B-Testing + +### PostHog Integration +- **Feature Flags**: Für kontrollierte Rollouts +- **A/B-Tests**: Hero-Varianten, CTA-Buttons +- **Conversion-Tracking**: Erfolgsmetriken + +### Implementierte Tests +- Hero-Section Varianten +- CTA-Button Tests +- Navigation Download-Button + +## 🔒 Sicherheit & Datenschutz + +### DSGVO-Konformität +- **Datenschutzerklärung**: Mehrsprachig verfügbar +- **Cookie-Consent**: Optional für erweiterte Features +- **Datenminimierung**: Nur notwendige Daten +- **TOMs**: Technische und organisatorische Maßnahmen dokumentiert + +### Content Security +- **Statische Generierung**: Keine Server-Vulnerabilities +- **TypeScript**: Typ-sichere Entwicklung +- **Validierung**: Zod-Schemas für alle Inhalte + +## 🎯 Besondere Features + +### 1. Recording Blueprints +Vorgefertigte Vorlagen für verschiedene Aufnahme-Szenarien: +- Büro-Meetings +- Baustellendokumentation +- Akademische Vorlesungen +- Kundengespräche + +### 2. Memory-System +KI-generierte Inhaltstypen aus Aufnahmen: +- Zusammenfassungen +- Aufgaben & Termine +- Blog-Beiträge +- Social Media Posts + +### 3. Mana-Credit-System +Flexibles Abrechnungsmodell: +- Tägliche Mana-Gutschriften +- Mana-Potions zum Nachkaufen +- Maximale Mana-Limits je Plan + +### 4. Multi-Collection RSS +RSS-Feeds für jede Content-Collection: +- Sprachspezifisch +- Automatisch generiert +- XSLT-Styling + +### 5. 
Content-Kalender +Integrierte Planung für Content-Erstellung: +- Monatsplanung +- Status-Tracking +- Multi-Autor-Support + +## 📈 Performance-Optimierungen + +### Build-Optimierungen +- **Static Site Generation**: Keine Server-Laufzeit +- **Asset-Optimierung**: Automatische Bild-Kompression +- **Code-Splitting**: Optimale Bundle-Größen +- **Lazy Loading**: Für Bilder und Komponenten + +### Frontend-Optimierungen +- **Tailwind Purge**: Nur verwendete CSS-Klassen +- **Font-Optimierung**: System-Fonts mit Fallbacks +- **Minimal JavaScript**: Nur wo notwendig +- **Service Worker**: Für Offline-Support (geplant) + +## 🛠 Entwickler-Tools + +### Verfügbare Skripte +- `npm run dev`: Lokale Entwicklung +- `npm run build`: Production Build +- `npm run preview`: Build-Vorschau +- `npm run astro check`: Type-Checking + +### Debug-Tools +- **Admin-Bereich**: `/admin/` für Content-Übersicht +- **Author-Management**: `/admin/authors` +- **Find-Untranslated**: Script für fehlende Übersetzungen + +## 📝 Code-Konventionen + +### Komponenten +- **PascalCase**: Für Komponentennamen +- **Props-Interfaces**: TypeScript-Definitionen +- **Import-Reihenfolge**: Externe → Interne + +### Content +- **Frontmatter**: Zod-validiert +- **MDX-Support**: Komponenten in Markdown +- **Sprachordner**: de/en Struktur + +### Styling +- **Tailwind-First**: Utility-Klassen bevorzugt +- **Kebab-Case**: Für Custom-CSS +- **Keine Inline-Styles**: Außer absolut notwendig + +## 🚦 Deployment-Strategie + +### Hosting +- **Static Hosting**: Optimiert für CDN-Deployment +- **Asset-CDN**: Für Bilder und Medien +- **Multi-Region**: Für optimale Latenz + +### CI/CD +- **Automatische Builds**: Bei Git-Push +- **Type-Checking**: Vor jedem Build +- **Sitemap-Generierung**: Automatisch +- **RSS-Feed-Updates**: Bei Content-Änderungen + +## 📊 Monitoring & Wartung + +### Analytics-Dashboard +- Besucher-Statistiken +- Conversion-Tracking +- Event-Analyse +- Funnel-Visualisierung + +### Content-Management +- Regelmäßige 
Blog-Posts +- Feature-Updates +- Team-Änderungen +- FAQ-Erweiterungen + +### Performance-Monitoring +- Lighthouse-Scores +- Core Web Vitals +- Bundle-Size-Tracking +- Build-Zeit-Optimierung + +## 🔮 Zukünftige Erweiterungen + +### Geplante Features +- PWA-Support +- Erweiterte Suche +- Newsletter-Integration +- Mehr Sprachen +- API-Dokumentation +- Video-Tutorials + +### Technische Verbesserungen +- Service Worker +- WebP-Bildformat +- Erweiterte Caching-Strategien +- GraphQL-API (optional) +- CMS-Integration (optional) + +## 📞 Support & Dokumentation + +### Interne Dokumentation +- `/docs/`: Technische Dokumentation +- `/context/`: KI-Kontext-Dokumente +- `/plans/`: Projekt-Roadmaps +- `CLAUDE.md`: KI-Assistenz-Guidelines + +### Externe Ressourcen +- [Astro Dokumentation](https://docs.astro.build) +- [Tailwind CSS](https://tailwindcss.com) +- [MDX](https://mdxjs.com) +- [Zod](https://zod.dev) + +--- + +**Letztes Update**: Januar 2025 +**Version**: 2.0.0 +**Maintainer**: Memoro Development Team \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/appstores/AppStoreEntryInfos.md b/apps/memoro/apps/landing/docs/appstores/AppStoreEntryInfos.md new file mode 100644 index 000000000..6fe855d45 --- /dev/null +++ b/apps/memoro/apps/landing/docs/appstores/AppStoreEntryInfos.md @@ -0,0 +1,370 @@ +# App Store & Play Store Entry Informationen + +## ⚠️ Wichtige Formatierungshinweise + +App Store Limitierungen: +- Keine Markdown-Formatierung (keine ** für Bold) +- Keine Bullet Points (•), stattdessen Bindestriche (-) verwenden +- Einfacher Text ohne Formatierung +- Keine Emojis im Text (werden nicht unterstützt) + +## 📱 App Store (iOS) Informationen + +### App Name +Memoro - KI Meeting-Protokoll +- Untertitel: Automatische Gesprächsnotizen + +### Keywords (100 Zeichen) +meeting,protokoll,transkription,notizen,aufnahme,diktat,voice-to-text,gespräch,zusammenfassung + +### Beschreibung (4000 Zeichen) + +Headline: Memoro - KI Meeting-Protokoll & Automatische Notizen 
+ +Haupttext: +Verwandle jedes Gespräch in strukturierte Notizen - automatisch und intelligent. + +Memoro ist deine KI-gestützte Meeting-App für automatische Gesprächsprotokolle, Transkriptionen und intelligente Zusammenfassungen. Nie wieder händisch mitschreiben - konzentriere dich auf das Wesentliche, während Memoro für dich dokumentiert. + +Perfekt für: +- Business Meetings & Konferenzen +- Vorlesungen & Seminare +- Kundengespräche & Beratungen +- Interviews & Recherchen +- Baustellenbesprechungen +- Arztgespräche & Therapiesitzungen +- Team-Briefings & Stand-ups + +Kernfunktionen: +Intelligente Meeting-Aufzeichnung +- Automatische Transkription in Echtzeit +- KI-gestützte Zusammenfassungen +- Erkennung von Aufgaben & Terminen +- Sprechererkennung für klare Zuordnung + +26+ Sprachen & Dialekte +- Deutsch, Schweizerdeutsch, Englisch +- Automatische Spracherkennung +- Mehrsprachige Meetings problemlos +- Übersetzungsfunktion integriert + +Professionelle Vorlagen +- Business: Meeting-Protokolle, Kundengespräche +- Bildung: Vorlesungen, Seminare +- Handwerk: Baustellenbesprechungen +- Medizin: Patientengespräche +- Beratung: Coaching-Sessions +- Vertrieb: Sales-Gespräche +- Individuell anpassbar + +Nahtlose Team-Kollaboration +- Ein-Klick-Sharing via WhatsApp, Teams, Slack +- PDF & Word Export +- Web-App unter app.memoro.ai +- Automatische Cloud-Synchronisation + +KI-Assistent integriert +- Frage deine Memos direkt +- Kombiniere mehrere Gespräche +- Intelligente Suche +- Automatische Kategorisierung + +Produktivitätssteigerung: +- 80% Zeitersparnis bei Protokollen +- 100% der wichtigen Details erfasst +- 5x schnellere Nachbereitung +- Durchsuchbare Gesprächsarchive + +Deutscher Datenschutz: +- DSGVO-konform +- Server in Deutschland +- Ende-zu-Ende-Verschlüsselung +- Keine Weitergabe an Dritte +- ISO 27001 zertifiziert + +Premium Features: +- Unbegrenzte Aufnahmedauer +- Erweiterte KI-Analysen +- Team-Workspaces +- API-Zugang +- Priority Support + +Das sagen unsere 
Nutzer: +"Endlich kann ich mich auf Gespräche konzentrieren statt auf Notizen!" - Sarah M., Projektmanagerin + +"Spart mir täglich 2 Stunden Nacharbeit." - Dr. Thomas K., Berater + +"Unverzichtbar für internationale Meetings!" - Lisa W., Sales Director + +Jetzt kostenlos testen +Starte mit 60 Freiminuten und erlebe, wie Memoro deine Meetings revolutioniert. Keine Kreditkarte erforderlich. + +Support & Kontakt: +support@memoro.ai +www.memoro.ai + +Transformiere deine Gespräche in Wissen - mit Memoro. + +### Promotional Text (170 Zeichen) +KI Meeting-Protokoll in 26 Sprachen. Automatische Transkription & Zusammenfassung. DSGVO-konform. Jetzt 60 Min kostenlos testen! + +### Neu in dieser Version +Memoro 2.0 - Das größte Update aller Zeiten! + +Revolutionäre KI-Features: +- Blitzschnelle Verarbeitung - bis zu 10x schneller Memos erhalten +- Mana Credits System - Transparente KI-Nutzung mit voller Kostenkontrolle, Einmalkäufe möglich +- Memos befragen - Stelle Fragen zu deinen Aufnahmen, erhalte intelligente Antworten +- Memos kombinieren - Verbinde mehrere Gespräche zu umfassenden Projekten + +Maximaler Datenschutz: +- Google Analytics komplett entfernt - 100% Privatsphäre +- Open-Source Analytics in der EU - DSGVO-konform +- Datenbank in Frankfurt - Höchste deutsche Sicherheitsstandards +- Alle Daten bleiben in der EU - Keine Drittländer-Übertragung +- Verbesserter Datenschutz mit weniger Drittanbietern + +Performance & Design: +- Komplett neues Design - Moderner, intuitiver, Kontext-Menüs +- Erweiterte Aufnahmen - Längere Sessions möglich +- Web-App Aufnahme - Direkt im Browser aufzeichnen + +Noch mehr Verbesserungen: +- Mehr Sprachen - Jetzt 80+ Sprachen und Dialekte +- Verbesserte Zusammenfassungen +- Optimierte Sprechererkennung +- Stabilere Langzeit-Aufnahmen + +--- + +## 🤖 Google Play Store (Android) Informationen + +### App-Titel (30 Zeichen) +Memoro - KI Meeting-Protokoll + +### Kurzbeschreibung (80 Zeichen) +Automatische Meeting-Protokolle & Transkription. 
KI-Notizen in 26 Sprachen. + +### Vollständige Beschreibung (4000 Zeichen) +[Identisch mit App Store Beschreibung oben] + +### Tags +meeting, protokoll, transkription, notizen, aufnahme, diktat, voice-to-text, gespräch, zusammenfassung, ki, künstliche intelligenz, business, produktivität, team + +--- + +## 📸 Screenshot-Texte (Optimiert für SEO) + +### Screenshot 1 - Startbildschirm +Headline: KI Meeting-Protokoll starten +Subline: Automatische Transkription beginnt sofort + +### Screenshot 2 - Aufnahme +Headline: Meeting aufzeichnen & transkribieren +Subline: Echtzeit Voice-to-Text in 26 Sprachen + +### Screenshot 3 - KI-Zusammenfassung +Headline: Intelligente Meeting-Zusammenfassung +Subline: KI erkennt Aufgaben, Termine & Kernpunkte + +### Screenshot 4 - Sprechererkennung +Headline: Automatische Sprecherzuordnung +Subline: Wer hat was gesagt - klar dokumentiert + +### Screenshot 5 - Teilen +Headline: Meeting-Notizen sofort teilen +Subline: WhatsApp, Teams, Slack, E-Mail & mehr + +### Screenshot 6 - Vorlagen +Headline: Professionelle Meeting-Vorlagen +Subline: Business, Bildung, Medizin & mehr + +### Screenshot 7 - Mehrsprachig +Headline: 26 Sprachen automatisch erkennen +Subline: Internationale Meetings problemlos + +### Screenshot 8 - KI-Assistent +Headline: Frage deine Meetings direkt +Subline: KI-Suche in allen Protokollen + +### Screenshot 9 - Datenschutz +Headline: DSGVO-konform & sicher +Subline: Deutsche Server, verschlüsselt + +### Screenshot 10 - Web-App +Headline: Überall verfügbar +Subline: iOS, Android & Web synchronisiert + +--- + +## 🌍 Lokalisierte Versionen + +### English (App Store/Play Store) + +#### App Name +Memoro - AI Meeting Minutes +Subtitle: Automatic conversation notes + +#### Keywords +meeting,minutes,transcription,notes,recording,dictation,voice-to-text,conversation,summary,ai + +#### Short Description +AI-powered meeting minutes & transcription. Automatic notes in 26+ languages. 
+ +#### Description (Kurzfassung) +Transform every conversation into structured notes - automatically and intelligently. + +Memoro is your AI-powered meeting app for automatic minutes, transcriptions, and intelligent summaries. Never write notes manually again - focus on what matters while Memoro documents for you. + +Core Features: +- Real-time transcription +- AI summaries & action items +- 26+ languages supported +- Speaker recognition +- Professional templates +- One-click sharing +- GDPR compliant +- German servers + +Start free with 60 minutes. No credit card required. + +--- + +## 📊 App Store Optimization (ASO) Strategie + +### Primäre Keywords (Deutsch) +1. Meeting Protokoll +2. Transkription App +3. Voice to Text +4. Gesprächsnotizen +5. Diktierfunktion +6. KI Zusammenfassung +7. Automatische Notizen +8. Spracherkennung + +### Sekundäre Keywords +1. Business Meeting App +2. Konferenz Tool +3. Team Kollaboration +4. Digitale Notizen +5. Audio Transkription +6. Meeting Aufzeichnung +7. Gesprächsprotokoll +8. Speech to Text Deutsch + +### Long-Tail Keywords +1. Automatische Meeting Protokolle erstellen +2. KI gestützte Gesprächszusammenfassung +3. Mehrsprachige Transkription App +4. DSGVO konforme Meeting App +5. Voice to Text für Business + +### Konkurrenzanalyse Keywords +- Otter.ai Alternative +- Notion Voice Notes +- Microsoft Teams Transkription +- Zoom Meeting Protokoll +- Google Meet Notizen + +--- + +## 🎯 Marketing Messaging + +### Value Propositions +1. Zeitersparnis: "Spare 2 Stunden täglich" +2. Vollständigkeit: "Verpasse nie wieder wichtige Details" +3. Produktivität: "5x schnellere Meeting-Nachbereitung" +4. Sicherheit: "100% DSGVO-konform, Server in Deutschland" +5. 
Vielseitigkeit: "26 Sprachen, alle Branchen" + +### Call-to-Actions +- "Jetzt kostenlos testen" +- "60 Minuten gratis" +- "Meeting-Stress beenden" +- "Protokolle automatisieren" +- "KI-Assistent aktivieren" + +### Social Proof +- "50.000+ zufriedene Nutzer" +- "4.8 Sterne Bewertung" +- "Von Experten empfohlen" +- "#1 Meeting App in Deutschland" + +--- + +## 📝 Zusätzliche Store-Felder + +### Kategorien +- Primär: Produktivität +- Sekundär: Business + +### Altersfreigabe +- 4+ (Keine Altersbeschränkung) + +### In-App-Käufe +- Memoro Pro Monat (9,99 €) +- Memoro Pro Jahr (89,99 €) +- Memoro Teams (ab 19,99 €/Nutzer) + +### Datenschutz-Labels +- Kontaktinformationen (optional) +- Audio-Aufnahmen (erforderlich) +- Verwendungsdaten (Analytics) +- Diagnose (Crash-Reports) + +### Support-Informationen +- E-Mail: support@memoro.ai +- Website: https://www.memoro.ai +- Datenschutz: https://www.memoro.ai/de/legal/dataprivacy +- AGB: https://www.memoro.ai/de/legal/agb + +### App-Größe +- iOS: ~45 MB +- Android: ~38 MB + +### Mindestanforderungen +- iOS: 14.0 oder höher +- Android: 7.0 (API 24) oder höher + +--- + +## 🚀 Launch-Strategie + +### Soft Launch Länder +1. Deutschland +2. Österreich +3. 
Schweiz + +### Vollständiger Launch +- EU-Länder +- USA & Kanada +- UK & Australien + +### Preisgestaltung +- Freemium: 60 Min/Monat kostenlos +- Pro: 9,99 €/Monat +- Teams: Ab 19,99 €/Nutzer/Monat +- Enterprise: Individuell + +--- + +## 📈 Erfolgsmetriken + +### Key Performance Indicators +- Download-Rate +- Conversion Rate (Free zu Pro) +- User Retention (Tag 1, 7, 30) +- Durchschnittliche Sitzungsdauer +- App Store Bewertung +- Keyword Rankings + +### Ziele Q1 2025 +- 100.000 Downloads +- 4.5+ Sterne Bewertung +- Top 10 Produktivitäts-Apps +- 15% Conversion Rate + +--- + +*Letzte Aktualisierung: September 2025* +*Version: 1.0* \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/appstores/AppStoreScreenshotAnalysis.md b/apps/memoro/apps/landing/docs/appstores/AppStoreScreenshotAnalysis.md new file mode 100644 index 000000000..01e825b70 --- /dev/null +++ b/apps/memoro/apps/landing/docs/appstores/AppStoreScreenshotAnalysis.md @@ -0,0 +1,268 @@ +# App Store Screenshot Analyse & Optimierungsvorschläge + +## Best Practices für App Store Screenshots + +### Grundlegende Prinzipien + +#### 1. Die ersten 2-3 Screenshots sind entscheidend +- 70% der Nutzer schauen nur die ersten 2-3 Screenshots an +- Diese müssen die Hauptfunktionen und den größten Nutzen zeigen +- Der erste Screenshot sollte die Kernfunktion visualisieren + +#### 2. Text-zu-Bild Verhältnis +- Maximal 20% der Fläche sollte Text sein +- Headlines sollten kurz und prägnant sein (3-6 Wörter) +- Verwende große, gut lesbare Schriftarten + +#### 3. Visuelle Hierarchie +- Klare Fokuspunkte setzen +- Kontrastreiche Farben für wichtige Elemente +- Device-Mockups verwenden für Kontext + +#### 4. Storytelling durch Screenshots +- Erzähle eine Geschichte über die User Journey +- Zeige Problem → Lösung → Ergebnis +- Baue emotionale Verbindung auf + +#### 5. SEO-Optimierung +- Keywords in Screenshot-Überschriften verwenden +- Relevante Suchbegriffe einbauen +- Call-to-Actions integrieren + +#### 6. 
A/B Testing Prioritäten +- Reihenfolge der Screenshots +- Überschriften-Texte +- Farbschemata +- Mit/ohne Device-Mockups + +#### 7. Lokalisierung +- Screenshots für verschiedene Märkte anpassen +- Sprache der UI-Elemente beachten +- Kulturell relevante Beispiele verwenden + +#### 8. Technische Anforderungen +- iOS: 1242 x 2208px (iPhone) oder 1290 x 2796px +- Hochauflösende Grafiken verwenden +- Konsistentes Design über alle Screenshots + +--- + +## Aktuelle Screenshot-Analyse + +### Screenshot 1: "Memoro hört zu und führt Protokoll" +**Stärken:** +- Klare Hauptfunktion dargestellt +- Einfaches, verständliches Interface +- Guter Kontrast mit gelbem Accent + +**Schwächen:** +- Text könnte prägnanter sein +- Kein klarer Call-to-Action +- Interface wirkt etwas leer + +### Screenshot 2: "Schreibt Gesprochenes auf und fasst zusammen" +**Stärken:** +- Zeigt konkrete Funktionalität +- Liste verdeutlicht Vielseitigkeit + +**Schwächen:** +- Zu viel Text im Screenshot +- Headline zu lang +- Visuelle Hierarchie unklar + +### Screenshot 3: "Sortiere deine Memos und durchsuche sie" +**Stärken:** +- Zeigt Organisationsfunktionen +- Tags-System sichtbar + +**Schwächen:** +- Nicht sofort verständlich +- Wenig emotionaler Appeal + +### Screenshot 4: "Angepasste Modi für etliche Gespräche" +**Stärken:** +- Zeigt Anpassungsmöglichkeiten +- Verschiedene Use Cases + +**Schwächen:** +- Headline unklar formuliert +- Interface sehr textlastig + +### Screenshot 5: "Teile deine Memos über alle Kanäle" +**Stärken:** +- Klare Sharing-Funktion +- Bekannte App-Icons + +**Schwächen:** +- Layout nicht optimal +- Könnte visuell ansprechender sein + +### Screenshot 6: "Versteht über 80 Sprachen und übersetzt" +**Stärken:** +- Starkes Verkaufsargument +- Flaggen visualisieren Feature + +**Schwächen:** +- Design inkonsistent zu anderen Screenshots +- Zahlen-Claim sollte prominenter sein + +### Screenshot 7: "Neue Funktionen in Memoro 2.0" +**Stärken:** +- Update-Informationen +- Feature-Liste + 
+**Schwächen:** +- Zu viel Text +- Nicht für Erstnutzer relevant +- Sollte nicht in ersten Screenshots sein + +### Screenshot 8: "Dein automatisches Meeting-Protokoll" +**Stärken:** +- Klarer Use Case +- Device-Mockup für Kontext + +**Schwächen:** +- Redundant zu Screenshot 1 +- Text auf Device zu klein + +--- + +## Verbesserungsvorschläge + +### Option A: Feature-Fokussiert +**Zielgruppe:** Produktivitätsorientierte Business-Nutzer + +1. **KI Meeting-Protokoll in Aktion** + - Live-Aufnahme mit Echtzeit-Transkription + - Headline: "Nie wieder mitschreiben" + - Zeige aktive Aufnahme mit Wellenform + +2. **Intelligente Zusammenfassung** + - Fertige Zusammenfassung mit Kernpunkten + - Headline: "KI fasst zusammen" + - Vorher/Nachher Vergleich + +3. **80+ Sprachen automatisch** + - Mehrsprachiges Meeting + - Headline: "Versteht 80+ Sprachen" + - Flaggen-Grid prominent + +4. **Ein-Klick Sharing** + - Export-Optionen + - Headline: "Sofort teilen" + - WhatsApp, Teams, Slack Icons + +5. **DSGVO-konform** + - Sicherheits-Features + - Headline: "100% sicher" + - Deutsche Server betonen + +### Option B: Use-Case-Orientiert +**Zielgruppe:** Verschiedene Berufsgruppen + +1. **Business Meeting** + - Manager in Meeting-Situation + - Headline: "Meeting-Stress ade" + - Zeige Protokoll-Ergebnis + +2. **Student in Vorlesung** + - Uni-Kontext + - Headline: "Vorlesungen meistern" + - Zusammenfassung einer Vorlesung + +3. **Arzt-Patient Gespräch** + - Medizinischer Kontext + - Headline: "Gespräche dokumentiert" + - DSGVO-Hinweis + +4. **Baustellenbesprechung** + - Handwerker-Kontext + - Headline: "Projekte im Griff" + - Checklisten-Feature + +5. **Journalist Interview** + - Interview-Situation + - Headline: "Interviews perfekt" + - Transkription-Feature + +### Option C: Problem-Lösung-Storytelling +**Zielgruppe:** Schmerzpunkt-orientierte Nutzer + +1. **Das Problem** + - Gestresste Person mit Notizblock + - Headline: "Kennst du das?" + - Chaos visualisieren + +2. 
**Die Lösung** + - Memoro App-Start + - Headline: "Memoro hört zu" + - Ein-Knopf-Bedienung + +3. **Der Prozess** + - KI arbeitet + - Headline: "KI macht die Arbeit" + - Fortschrittsanzeige + +4. **Das Ergebnis** + - Fertige Zusammenfassung + - Headline: "Fertig in Sekunden" + - Struktur zeigen + +5. **Der Mehrwert** + - Zufriedener Nutzer + - Headline: "2 Stunden gespart" + - Testimonial einbauen + +--- + +## Empfohlene Sofortmaßnahmen + +### Priorität 1: Headlines optimieren +- Kürzer (max. 4 Wörter) +- Keywords einbauen +- Nutzen statt Features + +### Priorität 2: Ersten Screenshot optimieren +- Stärkerer visueller Hook +- Klarere Value Proposition +- Call-to-Action einbauen + +### Priorität 3: Konsistenz verbessern +- Einheitliches Farbschema +- Gleiche Schriftgrößen +- Device-Mockups vereinheitlichen + +### Priorität 4: Text reduzieren +- Weniger Text pro Screenshot +- Größere Schrift +- Mehr visuelle Elemente + +### Priorität 5: A/B Tests durchführen +- Option A vs. Option B testen +- Conversion-Rates messen +- Iterativ verbessern + +--- + +## Metriken zur Erfolgsmessung + +- **Conversion Rate:** App Store Seite → Download +- **Screenshot-Engagement:** Wie viele Screenshots werden angeschaut +- **Time-on-Page:** Verweildauer auf App Store Seite +- **Keyword Rankings:** Position für Hauptkeywords +- **A/B Test Results:** Welche Variante performt besser + +--- + +## Nächste Schritte + +1. Entscheidung für eine Option (A, B oder C) +2. Neue Screenshots erstellen +3. A/B Test aufsetzen +4. Performance monitoren +5. 
Basierend auf Daten optimieren + +--- + +*Erstellt: September 2025* +*Version: 1.0* \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/appstores/images/current-app-store-screenshots.png b/apps/memoro/apps/landing/docs/appstores/images/current-app-store-screenshots.png new file mode 100644 index 000000000..d3da112f4 Binary files /dev/null and b/apps/memoro/apps/landing/docs/appstores/images/current-app-store-screenshots.png differ diff --git a/apps/memoro/apps/landing/docs/blog-image-guidelines.md b/apps/memoro/apps/landing/docs/blog-image-guidelines.md new file mode 100644 index 000000000..9542d1d11 --- /dev/null +++ b/apps/memoro/apps/landing/docs/blog-image-guidelines.md @@ -0,0 +1,192 @@ +# Blog Image Guidelines + +## Overview +This document defines the visual standards and guidelines for all blog images on the Memoro website. Following these guidelines ensures consistency, professionalism, and brand alignment across all content. + +## Visual Identity + +### Core Principles +- **Modern & Professional**: Clean, contemporary design that reflects Memoro's innovative approach +- **Tech-Forward but Human-Centric**: Balance technical sophistication with approachability +- **Consistent Brand Experience**: Every image should feel like part of the Memoro family +- **Accessibility First**: High contrast, clear visuals that work for all users + +### Brand Colors +- **Primary Yellow**: `#F8D62B` +- **Accent Colors**: Use sparingly for highlights +- **Backgrounds**: Primarily white (`#FFFFFF`) or black (`#181818`) +- **Text on Images**: Dark gray (`#333333`) for maximum readability + +## Image Types & Specifications + +### 1. Hero Images +- **Purpose**: Article headers, primary visual impact +- **Dimensions**: 1200x630px (Open Graph optimized) +- **Style**: Bold, eye-catching, sets the article tone +- **File Size**: Max 200KB (optimized WebP) + +### 2. 
Concept Visualizations
- **Purpose**: Abstract representations of complex ideas
- **Dimensions**: 800x450px
- **Style**: Minimalist diagrams, icons, and shapes
- **Use When**: Explaining abstract concepts, workflows, or systems

### 3. Workflow Diagrams
- **Purpose**: Step-by-step process illustrations
- **Dimensions**: 800x450px or 1000x600px for complex flows
- **Style**: Numbered steps, clear flow direction, consistent iconography
- **Elements**: Arrows, numbered circles, descriptive labels

### 4. Comparison Graphics
- **Purpose**: Before/after, with/without scenarios
- **Dimensions**: 800x450px
- **Style**: Split-screen or side-by-side layouts
- **Key Feature**: Clear visual distinction between states

### 5. Data Visualizations
- **Purpose**: Statistics, metrics, performance indicators
- **Dimensions**: 600x400px
- **Style**: Clean charts, modern gauges, progress indicators
- **Colors**: Use brand colors for data points

## Design Standards

### Typography on Images
- **Headings**: Sans-serif, bold, minimum 24pt
- **Body Text**: Sans-serif, regular, minimum 16pt
- **Font Suggestions**: Inter, Roboto, or system fonts
- **Contrast**: Always ensure WCAG AA compliance

### Iconography
- **Style**: Outline or filled, consistent weight
- **Size**: Minimum 32x32px for visibility
- **Sources**: Heroicons, Tabler Icons, or custom
- **Consistency**: Use same icon set throughout article

### Composition Guidelines
- **Spacing**: Generous whitespace, avoid clutter
- **Alignment**: Consistent grid system
- **Hierarchy**: Clear visual priority
- **Balance**: Even distribution of visual weight

## AI Image Generation Prompts

### Standard Prompt Structure
```
Create a [IMAGE TYPE] showing [DESCRIPTION].
Style: Modern, professional, clean design.
Colors: Use the brand yellow (#F8D62B) as the primary accent, dark gray (#333333) for text.
Background: White or light gray. 
Additional: [SPECIFIC REQUIREMENTS]
Dimensions: [WIDTH]x[HEIGHT]px
Avoid: Stock photo clichés, overly complex designs, dark backgrounds
```

### Example Prompts by Type

#### Hero Image
```
Create a hero image visualizing AI-powered meeting assistance.
Style: Modern, professional, clean design with abstract geometric shapes.
Colors: Brand yellow (#F8D62B) accents with dark gray (#333333) text.
Background: White with subtle geometric patterns.
Include: Floating interface elements suggesting productivity.
Dimensions: 1200x630px
Avoid: Literal office scenes, stock photo style
```

#### Workflow Diagram
```
Design a 4-step workflow diagram for meeting documentation.
Style: Flat design with numbered steps connected by flowing lines.
Colors: Primary brand yellow (#F8D62B) with dark gray accents.
Each step: Icon + short label, connected by arrows.
Background: Clean white.
Dimensions: 800x450px
```

## Do's and Don'ts

### ✅ DO
- Use consistent visual language across article series
- Include subtle gradients and modern effects
- Maintain high contrast for accessibility
- Add subtle shadows for depth
- Use brand colors prominently
- Keep text minimal and impactful
- Test images at different sizes

### ❌ DON'T
- Use generic stock photos
- Include photos of real people (unless team photos)
- Create overly complex or busy designs
- Use conflicting color schemes
- Add unnecessary decorative elements
- Use low-resolution or pixelated graphics
- Forget mobile optimization

## File Management

### Naming Convention
```
[article-slug]-[image-type]-[number].webp
```
Examples:
- `prompt-engineering-hero-01.webp`
- `ai-assistant-workflow-01.webp`
- `decision-making-comparison-01.webp`

### Storage Structure
```
/public/images/blog/
├── heroes/       # Hero images
├── diagrams/     # Workflow and concept diagrams
├── comparisons/  # Before/after graphics
└── data-viz/     # Charts and data visualizations
```

### Optimization Requirements
1. 
**Format**: WebP with JPG fallback +2. **Compression**: 85% quality for WebP +3. **Lazy Loading**: Implement for all non-hero images +4. **Alt Text**: Descriptive, keyword-optimized +5. **Responsive**: Provide 2x versions for retina displays + +## Implementation Checklist + +Before publishing any blog image: +- [ ] Follows brand color guidelines +- [ ] Meets dimension requirements +- [ ] Under 200KB file size +- [ ] Has descriptive alt text +- [ ] Tested on mobile devices +- [ ] Consistent with article series style +- [ ] Optimized for web performance +- [ ] Accessible contrast ratios + +## Tools & Resources + +### Recommended Design Tools +- **Figma**: For creating custom graphics +- **Canva**: For quick layouts with brand templates +- **draw.io**: For technical diagrams +- **DALL-E / Midjourney**: For AI-generated base images + +### Optimization Tools +- **Squoosh**: Web-based image optimization +- **ImageOptim**: Mac app for batch optimization +- **TinyPNG**: Online WebP/PNG optimization + +### Color & Contrast Checkers +- **WebAIM Contrast Checker** +- **Stark (Figma plugin)** +- **Colorable.co** + +## Version History +- v1.0 (2025-01-22): Initial guidelines created +- Created by: Till Schneider +- Last updated: 2025-01-22 + +--- + +For questions or suggestions about these guidelines, please contact the content team. \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/components/roi-calculator.md b/apps/memoro/apps/landing/docs/components/roi-calculator.md new file mode 100644 index 000000000..03f357251 --- /dev/null +++ b/apps/memoro/apps/landing/docs/components/roi-calculator.md @@ -0,0 +1,174 @@ +# ROI-Rechner Komponente + +## Übersicht +Interaktiver ROI (Return on Investment) Rechner zur Visualisierung der Zeit- und Geldersparnis mit Memoro. 
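Die unten dokumentierte Berechnungslogik lässt sich mit den Default-Werten der Komponente (10 Meetings/Woche, 30 Minuten Protokollzeit, 50 €/Stunde) als kleine TypeScript-Skizze durchrechnen. Die Funktion `roi` ist eine vereinfachte Illustration, kein Auszug aus dem Komponentencode:

```typescript
// Vereinfachte Skizze der ROI-Berechnung (Annahmen wie in dieser Doku:
// 80 % Zeitersparnis, 4.33 Wochen/Monat, 15 €/Monat Memoro-Kosten).
function roi(meetings: number, protocolMinutes: number, hourlyRate: number) {
  const minutesSavedPerWeek = protocolMinutes * 0.8 * meetings;
  const hoursSavedPerWeek = minutesSavedPerWeek / 60;
  const hoursSavedPerMonth = hoursSavedPerWeek * 4.33;
  const moneySavedPerMonth = hoursSavedPerMonth * hourlyRate;
  // Amortisation: Memoro-Kosten geteilt durch Ersparnis pro Tag
  const daysToROI = Math.ceil(15 / (moneySavedPerMonth / 30));
  return { hoursSavedPerWeek, moneySavedPerMonth, daysToROI };
}

// Default-Werte der Komponente: 10 Meetings/Woche, 30 Min. Protokoll, 50 €/h
const result = roi(10, 30, 50);
console.log(result.hoursSavedPerWeek);              // 4 (Stunden pro Woche gespart)
console.log(Math.round(result.moneySavedPerMonth)); // 866 (€ pro Monat)
console.log(result.daysToROI);                      // 1 (Tag bis zur Amortisation)
```

Die Komponente selbst führt dieselbe Rechnung clientseitig in Vanilla JS aus, ohne Dependencies.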

## Features
- 🎚️ **Interaktive Slider** für alle Parameter
- 📊 **Echtzeit-Berechnung** bei jeder Änderung
- 💰 **Zeit- und Geldersparnis** für Woche/Monat/Jahr
- 📈 **ROI-Berechnung** zeigt Amortisationszeit
- 🌍 **Mehrsprachig** (DE/EN)
- 📱 **Responsive Design** für alle Geräte
- 🎨 **Anpassbares Design** mit Farbschema

## Verwendung

### Basic Usage
```astro
---
import ROICalculator from "../components/ROICalculator.astro";
---

<ROICalculator />
```

### Mit Custom Props
```astro
<ROICalculator
  lang="en"
  title="How much time does Memoro save you?"
  subtitle="Adjust the sliders to your meeting routine"
  accentColor="primary"
/>
```
(Beispielwerte; alle Props sind optional, siehe Tabelle unten.)

## Props

| Prop | Type | Default | Beschreibung |
|------|------|---------|--------------|
| `lang` | `'de' \| 'en'` | `'de'` | Sprache der Komponente |
| `title` | `string` | Auto | Überschrift des Rechners |
| `subtitle` | `string` | Auto | Untertitel/Beschreibung |
| `accentColor` | `string` | `'primary'` | Farbschema für Akzente |

## Einstellbare Parameter

### Meetings pro Woche
- **Range:** 1-30 Meetings
- **Default:** 10 Meetings
- **Einfluss:** Direkte Multiplikation der Zeitersparnis

### Minuten pro Meeting
- **Range:** 15-120 Minuten
- **Default:** 45 Minuten
- **Schritte:** 15 Minuten
- **Einfluss:** Basis für Protokoll-Zeit

### Minuten für Protokoll
- **Range:** 10-90 Minuten
- **Default:** 30 Minuten
- **Schritte:** 5 Minuten
- **Einfluss:** Hauptfaktor für Zeitersparnis (80% mit Memoro gespart)

### Stundensatz
- **Range:** 20-200 €/Stunde
- **Default:** 50 €/Stunde
- **Schritte:** 10 €
- **Einfluss:** Berechnung der Geldersparnis

## Berechnungslogik

### Zeitersparnis
```javascript
// 80% Zeitersparnis bei der Protokollerstellung
minutesSavedPerMeeting = protocolTime * 0.8
minutesSavedPerWeek = minutesSavedPerMeeting * meetings
hoursSavedPerWeek = minutesSavedPerWeek / 60
hoursSavedPerMonth = hoursSavedPerWeek * 4.33
hoursSavedPerYear = hoursSavedPerWeek * 52
daysSavedPerYear = hoursSavedPerYear / 8
```

### Geldersparnis
```javascript
moneySavedPerWeek = hoursSavedPerWeek * hourlyRate
moneySavedPerMonth = 
hoursSavedPerMonth * hourlyRate
moneySavedPerYear = hoursSavedPerYear * hourlyRate
```

### ROI (Return on Investment)
```javascript
memoroCostPerMonth = 15 // Durchschnittlicher Memoro-Preis
daysToROI = Math.ceil(memoroCostPerMonth / (moneySavedPerMonth / 30))
```

## Annahmen

- **80% Zeitersparnis** bei der Protokollerstellung durch Memoro
- **4.33 Wochen** pro Monat (Durchschnitt)
- **52 Wochen** pro Jahr
- **8 Stunden** Arbeitstag für Tagesberechnung
- **15€/Monat** durchschnittliche Memoro-Kosten für ROI

## Styling

Die Komponente verwendet:
- Tailwind CSS für Layout und Styling
- Custom CSS für Slider-Styling
- Gradient-Backgrounds für visuelle Attraktivität
- Smooth Transitions für bessere UX

### Slider-Styling
```css
.slider {
  background: linear-gradient(to right,
    #ef4444 0%,
    #ef4444 var(--value, 50%),
    #e5e7eb var(--value, 50%),
    #e5e7eb 100%
  );
}
```

## Integration

### Auf Landing Pages
```astro
// In meeting-protokoll-software.mdx
import ROICalculator from "../components/ROICalculator.astro";

<ROICalculator />
```

### Auf der Homepage
```astro
// Nach NumbersSection für maximale Wirkung
<NumbersSection />
<ROICalculator />
```

## Browser-Kompatibilität

- ✅ Chrome/Edge (alle Versionen)
- ✅ Firefox (alle Versionen)
- ✅ Safari (12+)
- ✅ Mobile Browser (iOS/Android)

## Performance

- **Bundle Size:** ~5KB (unkomprimiert)
- **JavaScript:** Vanilla JS, keine Dependencies
- **Rendering:** Client-side Berechnungen
- **Accessibility:** ARIA-Labels für Screenreader

## Zukünftige Verbesserungen

- [ ] Speichern der Einstellungen im LocalStorage
- [ ] Export der Berechnung als PDF
- [ ] Vergleich mit anderen Tools
- [ ] Erweiterte Berechnungen (Team-Größe, etc.)
- [ ] A/B Testing verschiedener Default-Werte
- [ ] Analytics-Integration für Usage-Tracking

---

*Komponente erstellt: 28. 
Dezember 2024* \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/content-collections.md b/apps/memoro/apps/landing/docs/content-collections.md new file mode 100644 index 000000000..152d6c174 --- /dev/null +++ b/apps/memoro/apps/landing/docs/content-collections.md @@ -0,0 +1,545 @@ +# Memoro Content Collections Documentation + +This document provides a comprehensive overview of all content collections used in the Memoro website. Each collection is structured with Zod schemas for type safety and validation. + +## Quick Overview + +The Memoro website contains 15 content collections: + +1. **Blog** - Articles and blog posts about features and updates +2. **Team** - Team member profiles (core team, freelancers, mentors, supporters, alumni) +3. **Guides** - How-to tutorials with difficulty levels (beginner, intermediate, advanced) +4. **Features** - Product feature descriptions with icons and categories +5. **Legal** - Legal documents (privacy policy, terms of service) +6. **Industries** - Industry-specific use cases and solutions +7. **Testimonials** - Customer reviews organized by type (private, company, network, press) +8. **Pages** - Structured content for special pages like pricing with Mana system +9. **Contracts** - Downloadable legal contracts and agreements +10. **Blueprints** - Pre-configured templates for different use cases +11. **Memories** - Memory templates and examples +12. **Wallpapers** - Downloadable wallpapers in multiple formats and resolutions +13. **FAQs** - Frequently asked questions by category +14. **Statistics** - Weekly/monthly reports with usage metrics +15. **Changelog** - Product updates and release notes with semantic versioning + +## Overview + +The Memoro website uses Astro's content collections to manage various types of content. All collections support internationalization with German (de) and English (en) locales. + +## Collections + +### 1. 
Blog Collection + +**Purpose**: Articles and blog posts about Memoro features, updates, and industry insights. + +**Schema**: +```typescript +{ + title: string + description: string + pubDate: Date + author: string (default: 'Anonymous') + image?: string + tags: string[] (default: []) + lang: 'de' | 'en' + slug?: string + lastUpdated?: Date + draft?: boolean +} +``` + +**Location**: `src/content/blog/{de|en}/` + +--- + +### 2. Team Collection + +**Purpose**: Team member profiles showcasing the people behind Memoro. + +**Schema**: +```typescript +{ + title: string + description: string + role: string + image?: string + social?: { + linkedin?: string + github?: string + twitter?: string + } + lang: 'de' | 'en' + category: 'kernteam' | 'freelance' | 'mentoren' | 'unterstuetzer' | 'alumni' + order?: number + categoryOrder?: number + lastUpdated?: Date +} +``` + +**Location**: `src/content/team/{de|en}/` + +**Categories**: +- `kernteam`: Core team members +- `freelance`: Freelance contributors +- `mentoren`: Mentors +- `unterstuetzer`: Supporters +- `alumni`: Former team members + +--- + +### 3. Guide Collection + +**Purpose**: Tutorials and how-to guides for using Memoro features. + +**Schema**: +```typescript +{ + title: string + description: string + difficulty: 'beginner' | 'intermediate' | 'advanced' + duration: string + category: string + author: string (default: 'Das Platform-Team') + lastUpdated: Date + lang: 'de' | 'en' +} +``` + +**Location**: `src/content/guides/{de|en}/` + +--- + +### 4. Features Collection + +**Purpose**: Detailed descriptions of Memoro's features and capabilities. 
+ +**Schema**: +```typescript +{ + title: string + description: string + lang: 'de' | 'en' + icon: string + color: 'blue' | 'red' | 'purple' | 'green' | 'orange' + category?: 'organization' | 'language' | 'customization' | 'recording' | + 'analytics' | 'collaboration' | 'ai-features' | 'sharing' + order?: number +} +``` + +**Location**: `src/content/features/{de|en}/` + +--- + +### 5. Legal Collection + +**Purpose**: Legal documents including privacy policy, terms of service, etc. + +**Schema**: +```typescript +{ + title: string + lastUpdated?: Date + lang: 'de' | 'en' +} +``` + +**Location**: `src/content/legal/` + +--- + +### 6. Industry Collection + +**Purpose**: Industry-specific use cases and solutions. + +**Schema**: +```typescript +{ + title: string + description: string + icon: string + color: 'blue' | 'red' | 'purple' | 'green' + lang: 'de' | 'en' + order?: number + keyFeatures?: string[] + testimonials?: Array<{ + quote: string + author: string + role: string + company: string + }> +} +``` + +**Location**: `src/content/industries/{de|en}/` + +--- + +### 7. Testimonials Collection + +**Purpose**: Customer testimonials and reviews organized by type. + +**Schema**: +```typescript +{ + name: string + role: string + company?: string // Optional for private individuals + image: string + text: string + lang: 'de' | 'en' + type: 'private' | 'company' | 'network' | 'press' + order?: number + lastUpdated?: Date + source?: string // For press: publication name + sourceUrl?: string // For press: article link +} +``` + +**Location**: `src/content/testimonials/{de|en}/{type}/` + +**Types**: +- `private`: Individual users +- `company`: Corporate testimonials +- `network`: Partner/network testimonials +- `press`: Press mentions and reviews + +--- + +### 8. Pages Collection + +**Purpose**: Structured content for special pages like pricing. 
+ +**Schema**: +```typescript +{ + title: string + description: string + lang: 'de' | 'en' + type: string + lastUpdated: Date + sections: { + hero: { + title: string + subtitle?: string + } + plans?: Array<{ + id?: string + name: string + price: { + monthly: number + yearly: number + } + priceUnit?: string + yearlyBreakdown?: string + features: string[] + // Legacy fields + minutes?: number + memoLength?: number + dailyMemos?: number | '∞' + // Mana-based fields + initialMana?: number + dailyMana?: number + maxMana?: number + canGiftMana?: boolean + cta: string + highlight: boolean + }> + manaPotions?: { + title: string + subtitle: string + items: Array<{ + id: string + name: string + manaAmount: number + price: number + popular: boolean + }> + } + comparison?: { title: string } + faq?: { + title: string + items: Array<{ + question: string + answer: string + }> + } + callToAction?: { + title: string + description: string + buttonText: string + buttonLink: string + } + } +} +``` + +**Location**: `src/content/pages/{de|en}/` + +--- + +### 9. Contracts Collection + +**Purpose**: Legal contracts and agreements available for download. + +**Schema**: +```typescript +{ + title: string + description: string + lastUpdated: Date + lang: 'de' | 'en' + category: string + order?: number + downloadUrl?: string + previewEnabled: boolean (default: false) +} +``` + +**Location**: `src/content/contracts/{de|en}/` + +--- + +### 10. Blueprints Collection + +**Purpose**: Pre-configured templates and setups for different use cases. + +**Schema**: +```typescript +{ + title: string + description: string + icon: string + color: 'blue' | 'red' | 'purple' | 'green' | 'orange' | 'yellow' + lang: 'de' | 'en' + order?: number + lastUpdated?: Date + isActive: boolean (default: true) + features?: string[] + compatibility?: string[] +} +``` + +**Location**: `src/content/blueprints/{de|en}/` + +--- + +### 11. Memories Collection + +**Purpose**: Memory templates and examples for different scenarios. 
+
+**Schema**:
+```typescript
+{
+  title: string
+  description: string
+  icon: string
+  color: 'blue' | 'red' | 'purple' | 'green' | 'orange' | 'yellow'
+  category?: string
+  lang: 'de' | 'en'
+  order?: number
+  lastUpdated?: Date
+  isActive: boolean (default: true)
+  features?: string[]
+  compatibility?: string[]
+}
+```
+
+**Location**: `src/content/memories/{de|en}/`
+
+---
+
+### 12. Wallpaper Collection
+
+**Purpose**: Downloadable wallpapers in various formats and resolutions.
+
+**Schema**:
+```typescript
+{
+  title: string
+  description: string
+  thumbnail: string
+  formats: Array<{
+    type: 'desktop' | 'mobile' | 'tablet' | 'ultrawide'
+    device?: string      // e.g., "iPhone 16 Pro", "iPad Pro"
+    resolution: string   // "3840x2160"
+    aspectRatio: string  // "16:9"
+    fileUrl: string      // "/images/wallpaper/..."
+    fileSize?: string    // "2.4 MB"
+  }>
+  category: 'nature' | 'abstract' | 'city' | 'technology' | 'other'
+  colors?: string[]
+  tags?: string[]
+  lang: 'de' | 'en'
+  order?: number
+  lastUpdated?: Date
+  isActive: boolean (default: true)
+  isFeatured: boolean (default: false)
+  downloadCount: number (default: 0)
+  formatDownloads?: Record<string, number>
+}
+```
+
+**Location**: `src/content/wallpapers/{de|en}/`
+
+---
+
+### 13. FAQs Collection
+
+**Purpose**: Frequently asked questions organized by category.
+ +**Schema**: +```typescript +{ + question: string + answer: string + category: 'general' | 'features' | 'technical' | 'pricing' | + 'security' | 'business' | 'industries' | 'guides' + tags?: string[] + order: number (default: 0) + featured: boolean (default: false) + relatedLinks?: Array<{ + title: string + url: string + }> + lang: 'de' | 'en' +} +``` + +**Location**: `src/content/faqs/{de|en}/` + +**Categories**: +- `general`: General questions +- `features`: Feature-related questions +- `technical`: Technical questions +- `pricing`: Pricing and plans +- `security`: Security and privacy +- `business`: Business and enterprise +- `industries`: Industry-specific questions +- `guides`: Tutorials and how-to questions + +--- + +### 14. Statistics Collection + +**Purpose**: Weekly and monthly reports with usage statistics and metrics. + +**Schema**: +```typescript +{ + title: string + description: string + reportType: 'weekly' | 'monthly' (default: 'weekly') + weekNumber?: number // Calendar week (for weekly reports) + month?: number // Month (for monthly reports) + year: number + period: { + start: Date + end: Date + } + stats: { + totalUsers: number + newUsers: number + activeUsers: number + totalRecordings: number + totalMinutes: number + totalWords?: number + totalEntries?: number + manaConsumed: number + manaPurchased: number + } + highlights?: string[] // Important events + trends?: { + userGrowth: number // Percentage compared to previous period + recordingGrowth: number + manaGrowth: number + } + topFeatures?: Array<{ + name: string + usage: number + }> + lang: 'de' | 'en' + publishDate: Date + draft: boolean (default: false) + author: string (default: 'Das Memoro Team') +} +``` + +**Location**: `src/content/statistics/{de|en}/` + +--- + +### 15. Changelog Collection + +**Purpose**: Product updates, release notes, and version history. 
+ +**Schema**: +```typescript +{ + title: string + description: string + version: string // e.g., "1.2.0" + releaseDate: Date + type: 'major' | 'minor' | 'patch' // Semantic versioning + category: Array<'feature' | 'improvement' | 'bugfix' | + 'security' | 'performance' | 'other'> + highlights?: string[] // Key features of this version + breaking: boolean (default: false) + deprecated?: string[] // Deprecated features + migration?: string // Migration guide for breaking changes + platforms?: Array<'web' | 'ios' | 'android' | 'api'> + lang: 'de' | 'en' + draft: boolean (default: false) + author: string (default: 'Das Memoro Team') +} +``` + +**Location**: `src/content/changelog/{de|en}/` + +**Categories**: +- `feature`: New features +- `improvement`: Improvements +- `bugfix`: Bug fixes +- `security`: Security updates +- `performance`: Performance improvements +- `other`: Other changes + +--- + +## Best Practices + +1. **Internationalization**: Always create content in both German and English +2. **File Naming**: Use kebab-case for file names (e.g., `my-blog-post.mdx`) +3. **Frontmatter**: Ensure all required fields are filled according to the schema +4. **Images**: Store images in `/public/images/` organized by content type +5. **Draft Mode**: Use the `draft` field to hide content from production +6. **Ordering**: Use the `order` field to control display sequence +7. 
**Dates**: Use ISO date format (YYYY-MM-DD) for all date fields + +## Content Organization + +The content is organized following this structure: +``` +src/content/ +├── [collection-name]/ +│ ├── de/ # German content +│ │ └── *.mdx # Content files +│ └── en/ # English content +│ └── *.mdx # Content files +└── config.ts # Collection schemas +``` + +For testimonials, an additional level of organization by type is used: +``` +src/content/testimonials/ +├── de/ +│ ├── private/ +│ ├── company/ +│ ├── network/ +│ └── press/ +└── en/ + ├── private/ + ├── company/ + ├── network/ + └── press/ +``` \ No newline at end of file diff --git a/apps/memoro/apps/landing/docs/creating-content-collections.md b/apps/memoro/apps/landing/docs/creating-content-collections.md new file mode 100644 index 000000000..6388fa9a4 --- /dev/null +++ b/apps/memoro/apps/landing/docs/creating-content-collections.md @@ -0,0 +1,784 @@ +# Creating a New Content Collection + +This document explains how to create and configure a new content collection in our Astro-based website. + +## What are Content Collections? + +Content collections in Astro are a way to organize and validate content in your project. They allow you to: + +- Group related content together +- Define a schema for validating content +- Query content with TypeScript type safety +- Support multiple languages (de/en) + +## Step 1: Define the Collection Schema + +First, you need to define the schema for your new collection in the `src/content/config.ts` file: + +```typescript +// 1. Import the defineCollection and z functions +import { defineCollection, z } from 'astro:content'; + +// 2. 
Define your collection schema +const myNewCollection = defineCollection({ + type: 'content', + schema: z.object({ + title: z.string(), + description: z.string(), + lastUpdated: z.date(), + lang: z.enum(['de', 'en']), + // Add any other fields specific to your collection + category: z.string(), + order: z.number().optional(), + // Add custom fields as needed + customField: z.string().optional() + }) +}); + +// 3. Add your collection to the collections export +export const collections = { + // Existing collections... + 'myNewCollection': myNewCollection, +}; +``` + +## Step 2: Create the Directory Structure + +Create the necessary directories for your content collection: + +```bash +mkdir -p src/content/my-new-collection/de src/content/my-new-collection/en +``` + +Our project organizes content by language, so we create separate directories for German (de) and English (en) content. + +## Step 3: Add Content Files + +Create content files in the appropriate language directories. We use `.mdx` files for content that includes Markdown with JSX components: + +**German Example** (`src/content/my-new-collection/de/example.mdx`): + +```markdown +--- +title: "Beispieltitel" +description: "Eine Beispielbeschreibung" +lastUpdated: 2025-02-26 +lang: "de" +category: "example" +order: 1 +customField: "Beispielwert" +--- + +# Beispielinhalt + +Dies ist ein Beispielinhalt für die neue Content Collection. +``` + +**English Example** (`src/content/my-new-collection/en/example.mdx`): + +```markdown +--- +title: "Example Title" +description: "An example description" +lastUpdated: 2025-02-26 +lang: "en" +category: "example" +order: 1 +customField: "Example value" +--- + +# Example Content + +This is example content for the new content collection. +``` + +## Step 4: Create Components for Displaying Content + +Create components to display your content. 
For example, create a component to display a list of items from your collection: + +```astro +--- +// src/components/MyCollectionList.astro +import { getCollection } from 'astro:content'; +import { getLangFromUrl } from '../i18n/utils'; + +const lang = getLangFromUrl(Astro.url); +const items = await getCollection('my-new-collection', ({ data }) => { + return data.lang === lang; +}); + +// Sort items if needed +const sortedItems = [...items].sort((a, b) => { + return (a.data.order || 0) - (b.data.order || 0); +}); +--- + +
+<ul>
+  {sortedItems.map((item) => (
+    <li>
+      <a href={`/${lang}/my-collection/${item.slug.replace(`${lang}/`, '')}`}>
+        <h2>{item.data.title}</h2>
+        <p>{item.data.description}</p>
+      </a>
+    </li>
+  ))}
+</ul>
+``` + +Also create a component for displaying a single item: + +```astro +--- +// src/components/MyCollectionItem.astro +import type { CollectionEntry } from 'astro:content'; + +interface Props { + item: CollectionEntry<'my-new-collection'>; +} + +const { item } = Astro.props; +const { Content } = await item.render(); +--- + +
+<article>
+  <header>
+    <h1>{item.data.title}</h1>
+    <p>{item.data.description}</p>
+  </header>
+
+  <div class="content">
+    <Content />
+  </div>
+</article>
+``` + +## Step 5: Create Page Routes + +Create pages to display your collection. You'll typically need: + +1. A listing page that shows all items +2. A dynamic route for individual items + +**Listing Page** (`src/pages/de/my-collection/index.astro` and `src/pages/en/my-collection/index.astro`): + +```astro +--- +import { getCollection } from 'astro:content'; +import Layout from '../../../layouts/Layout.astro'; +import { getLangFromUrl, useTranslations } from '../../../i18n/utils'; + +const lang = getLangFromUrl(Astro.url); +const t = useTranslations(lang); + +const items = await getCollection('my-new-collection', ({ data }) => { + return data.lang === lang; +}); + +const sortedItems = [...items].sort((a, b) => { + return (a.data.order || 0) - (b.data.order || 0); +}); +--- + + +
+<Layout title={t('myCollection.title')}>
+  <main>
+    <h1>{t('myCollection.title')}</h1>
+    <p>{t('myCollection.description')}</p>
+
+    <ul>
+      {sortedItems.map((item) => (
+        <li>
+          <a href={`/${lang}/my-collection/${item.slug.replace(`${lang}/`, '')}`}>
+            <h2>{item.data.title}</h2>
+            <p>{item.data.description}</p>
+          </a>
+        </li>
+      ))}
+    </ul>
+  </main>
+</Layout>
+``` + +**Dynamic Route** (`src/pages/de/my-collection/[...slug].astro` and `src/pages/en/my-collection/[...slug].astro`): + +```astro +--- +import { getCollection } from 'astro:content'; +import Layout from '../../../layouts/Layout.astro'; +import { getLangFromUrl } from '../../../i18n/utils'; +import MyCollectionItem from '../../../components/MyCollectionItem.astro'; + +export async function getStaticPaths() { + const lang = 'de'; // or 'en' depending on the file + const items = await getCollection('my-new-collection', ({ data }) => { + return data.lang === lang; + }); + + return items.map((item) => ({ + params: { slug: item.slug.replace(`${lang}/`, '') }, + props: { item }, + })); +} + +const { item } = Astro.props; +--- + + +
+<Layout title={item.data.title}>
+  <MyCollectionItem item={item} />
+</Layout>
+```
+
+## Step 6: Add Navigation Links (Optional)
+
+Update your navigation components to include links to your new collection:
+
+```astro
+---
+// In your navigation component
+import { getLangFromUrl, useTranslations } from '../i18n/utils';
+
+const lang = getLangFromUrl(Astro.url);
+const t = useTranslations(lang);
+---
+
+<nav>
+  <a href={`/${lang}/my-collection`}>{t('nav.myCollection')}</a>
+</nav>
+```
+
+## Step 7: Add Translations (Optional)
+
+If your site uses translations, add the necessary translation strings to your i18n files:
+
+```typescript
+// src/i18n/ui.ts
+export const languages = {
+  de: {
+    'myCollection': {
+      'title': 'Meine Sammlung',
+      'description': 'Beschreibung meiner Sammlung'
+    },
+    'nav': {
+      'myCollection': 'Meine Sammlung'
+    }
+  },
+  en: {
+    'myCollection': {
+      'title': 'My Collection',
+      'description': 'Description of my collection'
+    },
+    'nav': {
+      'myCollection': 'My Collection'
+    }
+  }
+};
+```
+
+## Testing Your Collection
+
+After setting up your collection, test it by:
+
+1. Building the site: `npm run build`
+2. Checking for any TypeScript or schema validation errors
+3. Previewing the site: `npm run preview`
+4. Navigating to your collection pages to ensure they display correctly
+
+## Example: Creating a "Contracts" Collection
+
+Here's a complete example of creating a "contracts" collection:
+
+1. **Schema Definition**:
+
+```typescript
+// src/content/config.ts
+const contractsCollection = defineCollection({
+  type: 'content',
+  schema: z.object({
+    title: z.string(),
+    description: z.string(),
+    lastUpdated: z.date(),
+    lang: z.enum(['de', 'en']),
+    category: z.string(),
+    order: z.number().optional(),
+    downloadUrl: z.string().optional(),
+    previewEnabled: z.boolean().default(false)
+  })
+});
+
+export const collections = {
+  // Other collections...
+  contracts: contractsCollection,
+};
+```
+
+2. **Directory Structure**:
+```
+src/content/contracts/
+├── de/
+│   ├── nutzungsbedingungen.mdx
+│   └── datenschutz.mdx
+└── en/
+    ├── terms-of-service.mdx
+    └── privacy-policy.mdx
+```
+
+3.
**Content Example**: +```markdown +--- +title: "Nutzungsbedingungen" +description: "Allgemeine Nutzungsbedingungen für unsere Plattform" +lastUpdated: 2025-02-26 +lang: "de" +category: "legal" +order: 1 +downloadUrl: "/downloads/nutzungsbedingungen.pdf" +previewEnabled: true +--- + +# Nutzungsbedingungen + +## 1. Geltungsbereich + +Diese Nutzungsbedingungen regeln die Nutzung unserer Plattform... +``` + +4. **Component for Displaying Contracts**: +```astro +--- +// src/components/ContractItem.astro +import type { CollectionEntry } from 'astro:content'; + +interface Props { + contract: CollectionEntry<'contracts'>; +} + +const { contract } = Astro.props; +const { Content } = await contract.render(); +--- + +
+<article>
+  <header>
+    <h1>{contract.data.title}</h1>
+    <p>{contract.data.description}</p>
+  </header>
+
+  {contract.data.previewEnabled && (
+    <div class="content">
+      <Content />
+    </div>
+  )}
+
+  {contract.data.downloadUrl && (
+    <a href={contract.data.downloadUrl} download>
+      Download als PDF
+    </a>
+  )}
+</article>
+``` + +5. **Page Routes**: +```astro +--- +// src/pages/de/contracts/[...slug].astro +import { getCollection } from 'astro:content'; +import Layout from '../../../layouts/Layout.astro'; +import ContractItem from '../../../components/ContractItem.astro'; + +export async function getStaticPaths() { + const contracts = await getCollection('contracts', ({ data }) => { + return data.lang === 'de'; + }); + + return contracts.map((contract) => ({ + params: { slug: contract.slug.replace('de/', '') }, + props: { contract }, + })); +} + +const { contract } = Astro.props; +--- + + +
+<Layout title={contract.data.title}>
+  <ContractItem contract={contract} />
+</Layout>
+``` + +## Complete Implementation Walkthrough + +Below is a detailed walkthrough of all the steps we took to implement a new "Contracts" content collection: + +### 1. Define the Collection Schema + +First, we added the contracts collection schema to `src/content/config.ts`: + +```typescript +const contractsCollection = defineCollection({ + type: 'content', + schema: z.object({ + title: z.string(), + description: z.string(), + lastUpdated: z.date(), + lang: z.enum(['de', 'en']), + category: z.string(), + order: z.number().optional(), + downloadUrl: z.string().optional(), + previewEnabled: z.boolean().default(false) + }) +}); + +export const collections = { + // Other collections... + contracts: contractsCollection, +}; +``` + +### 2. Create Directory Structure + +We created the necessary directories for our content: + +```bash +mkdir -p src/content/contracts/de src/content/contracts/en +``` + +### 3. Create Components + +We created two components for displaying contracts: + +**ContractCard.astro** - For displaying contract cards in the listing page: + +```astro +--- +import type { CollectionEntry } from 'astro:content'; +import { getLangFromUrl } from '../i18n/utils'; + +interface Props { + contract: CollectionEntry<'contracts'>; +} + +const { contract } = Astro.props; +const lang = getLangFromUrl(Astro.url); +const cleanSlug = contract.slug.replace(`${lang}/`, ''); +--- + + +
+<a href={`/${lang}/contracts/${cleanSlug}`}>
+  <div class="card">
+    <div>
+      <h2>
+        {contract.data.title}
+      </h2>
+      <p>
+        {contract.data.description}
+      </p>
+      <p>
+        Zuletzt aktualisiert: {contract.data.lastUpdated.toLocaleDateString()}
+      </p>
+    </div>
+  </div>
+</a>
+``` + +**ContractDetail.astro** - For displaying a single contract's details: + +```astro +--- +import type { CollectionEntry } from 'astro:content'; +import { getLangFromUrl, useTranslations } from '../i18n/utils'; + +interface Props { + contract: CollectionEntry<'contracts'>; +} + +const { contract } = Astro.props; +const { Content } = await contract.render(); +const lang = getLangFromUrl(Astro.url); +const t = useTranslations(lang); +--- + +
+<article>
+  <header>
+    <h1>{contract.data.title}</h1>
+    <p>{contract.data.description}</p>
+    <p>
+      {t('contracts.lastUpdated')}: {contract.data.lastUpdated.toLocaleDateString()}
+    </p>
+  </header>
+
+  {contract.data.previewEnabled && (
+    <div class="content">
+      <Content />
+    </div>
+  )}
+
+  {contract.data.downloadUrl && (
+    <a href={contract.data.downloadUrl} download>
+      {t('contracts.download')}
+    </a>
+  )}
+</article>
+``` + +### 4. Create Page Routes + +We created the necessary page routes for both German and English: + +**Listing Pages** (`src/pages/de/contracts/index.astro` and `src/pages/en/contracts/index.astro`): + +```astro +--- +import { getCollection, getEntry } from "astro:content"; +import Layout from "../../../layouts/Layout.astro"; +import ContractCard from "../../../components/ContractCard.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import { getLangFromUrl, useTranslations } from "../../../i18n/utils"; + +const lang = getLangFromUrl(Astro.url); +const t = useTranslations(lang); + +// Get contracts in current language +const contractEntries = await getCollection("contracts", ({ data }) => { + return data.lang === "de"; // or "en" for English version +}); + +// Sort contracts by category and then by order +const sortedContracts = [...contractEntries].sort((a, b) => { + // First sort by category + if (a.data.category !== b.data.category) { + return a.data.category.localeCompare(b.data.category); + } + // Then sort by order within the same category + const orderA = a.data.order || 0; + const orderB = b.data.order || 0; + return orderA - orderB; +}); + +// Group contracts by category +const contractsByCategory = sortedContracts.reduce((acc, contract) => { + const category = contract.data.category; + if (!acc[category]) { + acc[category] = []; + } + acc[category].push(contract); + return acc; +}, {}); + +const pageTitle = t('contracts.title'); +const pageDescription = t('contracts.description'); +--- + + +
+<Layout title={pageTitle} description={pageDescription}>
+  <main>
+    <h1>{pageTitle}</h1>
+    <p>{pageDescription}</p>
+
+    {Object.entries(contractsByCategory).map(([category, contracts]) => (
+      <section>
+        <h2>
+          {category === "legal" ? (lang === "de" ? "Rechtliche Dokumente" : "Legal Documents") : category}
+        </h2>
+        <div class="grid">
+          {contracts.map((contract) => (
+            <ContractCard contract={contract} />
+          ))}
+        </div>
+      </section>
+    ))}
+
+    <CallToAction />
+  </main>
+</Layout>
+``` + +**Dynamic Routes** (`src/pages/de/contracts/[...slug].astro` and `src/pages/en/contracts/[...slug].astro`): + +```astro +--- +import { getCollection } from "astro:content"; +import Layout from "../../../layouts/Layout.astro"; +import { getLangFromUrl, useTranslations } from "../../../i18n/utils"; +import ContractDetail from "../../../components/ContractDetail.astro"; +import CallToAction from "../../../components/CallToAction.astro"; + +export async function getStaticPaths() { + const contracts = await getCollection("contracts", ({ data }) => { + return data.lang === "de"; // or "en" for English version + }); + + return contracts.map((contract) => { + return { + params: { slug: contract.slug.replace("de/", "") }, // or "en/" for English version + props: { contract }, + }; + }); +} + +const { contract } = Astro.props; +const lang = getLangFromUrl(Astro.url); +const t = useTranslations(lang); + +// Check if content is in the correct language +if (contract.data.lang !== lang) { + // Get all contracts + const allContracts = await getCollection("contracts"); + // Find matching content in the correct language + const localizedContract = allContracts.find( + (c) => c.data.slug === contract.data.slug && c.data.lang === lang + ); + + if (localizedContract) { + return Astro.redirect(`/${lang}/contracts/${localizedContract.data.slug}`); + } +} +--- + + +
+<Layout title={contract.data.title} description={contract.data.description}>
+  <main>
+    <a href={`/${lang}/contracts`}>{t('contracts.backToOverview')}</a>
+
+    <ContractDetail contract={contract} />
+
+    <CallToAction />
+  </main>
+</Layout>
+``` + +### 5. Add Sample Content + +We created sample contract files in both German and English: + +**German Example** (`src/content/contracts/de/nutzungsbedingungen.mdx`): + +```markdown +--- +title: "Nutzungsbedingungen" +description: "Allgemeine Nutzungsbedingungen für unsere Plattform" +lastUpdated: 2025-02-26 +lang: "de" +category: "legal" +order: 1 +downloadUrl: "/downloads/nutzungsbedingungen.pdf" +previewEnabled: true +--- + +# Nutzungsbedingungen + +## 1. Geltungsbereich + +Diese Nutzungsbedingungen regeln die Nutzung unserer Plattform... +``` + +**English Example** (`src/content/contracts/en/terms-of-service.mdx`): + +```markdown +--- +title: "Terms of Service" +description: "General terms of service for our platform" +lastUpdated: 2025-02-26 +lang: "en" +category: "legal" +order: 1 +downloadUrl: "/downloads/terms-of-service.pdf" +previewEnabled: true +--- + +# Terms of Service + +## 1. Scope + +These Terms of Service govern the use of our platform... +``` + +### 6. Add Translations + +We added translations for the contracts section in `src/i18n/ui.ts`: + +```typescript +// German translations +'contracts.title': 'Verträge & Rechtliches', +'contracts.description': 'Alle rechtlichen Dokumente und Verträge für unsere Plattform', +'contracts.download': 'Als PDF herunterladen', +'contracts.lastUpdated': 'Zuletzt aktualisiert', +'contracts.backToOverview': 'Zurück zur Übersicht', + +// English translations +'contracts.title': 'Contracts & Legal', +'contracts.description': 'All legal documents and contracts for our platform', +'contracts.download': 'Download as PDF', +'contracts.lastUpdated': 'Last updated', +'contracts.backToOverview': 'Back to overview', +``` + +### 7. Update Navigation + +We added a link to the contracts page in the Footer component: + +```astro +
+<span>&nbsp;•</span>
+<a href={isGerman ? "/de/contracts" : "/en/contracts"}>
+  {isGerman ? "Verträge" : "Contracts"}
+</a>
+<span>&nbsp;•</span>
+```
+
+### 8. Testing
+
+After implementing all these components, you should test the contracts pages by:
+
+1. Building the site: `npm run build`
+2. Checking for any TypeScript or schema validation errors
+3. Previewing the site: `npm run preview`
+4. Navigating to `/de/contracts` and `/en/contracts` to ensure they display correctly
+5. Clicking on individual contracts to ensure the detail pages work properly
+6. Testing the download functionality if you have PDF files available
+
+### 9. Next Steps
+
+To further enhance your contracts collection, consider:
+
+1. Creating actual PDF files for download in the `/public/downloads/` directory
+2. Adding more contract types (privacy policy, terms of use, etc.)
+3. Implementing search functionality for contracts
+4. Adding version history for contracts to track changes over time
diff --git a/apps/memoro/apps/landing/docs/features/admin-tool-modularization-and-replicate-integration.md b/apps/memoro/apps/landing/docs/features/admin-tool-modularization-and-replicate-integration.md
new file mode 100644
index 000000000..39a90be56
--- /dev/null
+++ b/apps/memoro/apps/landing/docs/features/admin-tool-modularization-and-replicate-integration.md
@@ -0,0 +1,561 @@
+# Admin Tool Modularization & Replicate Integration
+
+> **Document created:** 28.01.2025
+> **Status:** Concept phase
+> **Goal:** Reusable admin tool with AI image generation
+
+## 📋 Executive Summary
+
+The Memoro admin tool is evolving into a standalone, valuable tool in its own right. This document describes concepts for:
+1. **Modularizing** the admin tool so it can be reused in other projects
+2. **Integrating Replicate** for AI-based image generation for personas
+3.
**Backend architecture** on a Hetzner VPS with Coolify
+
+## 🎯 Requirements
+
+### Functional requirements
+- The admin tool should be reusable in other websites
+- Complete separation of code and content
+- AI-based image generation for personas via Replicate
+- Central backend services on a Hetzner VPS
+- Managed via Coolify (Docker-based)
+
+### Non-functional requirements
+- Simple installation/integration
+- Minimal dependencies
+- Scalable architecture
+- Secure API communication
+- Cost-efficient image generation
+
+## 🏗️ Modularization Concepts
+
+### Concept 1: NPM Package + API Backend
+**Architecture:**
+```
+@memoro/admin-tool (NPM package)
+├── components/   # Reusable UI components
+├── layouts/      # Admin layouts
+├── hooks/        # React/Vue hooks for the API
+├── types/        # TypeScript definitions
+└── utils/        # Helper functions
+
+@memoro/admin-api (separate backend)
+├── /api/personas   # Personas CRUD
+├── /api/images     # Image generation
+├── /api/content    # Content management
+└── /api/auth       # Authentication
+```
+
+**Advantages:**
+- ✅ Maximum reusability
+- ✅ Framework-agnostic (adapter pattern)
+- ✅ Version control via NPM
+- ✅ Type safety through TypeScript
+
+**Disadvantages:**
+- ❌ Complex initial setup
+- ❌ Two packages to maintain
+- ❌ Breaking-change management
+
+**Integration:**
+```typescript
+// In any Astro/Next/Vue app
+import { AdminTool } from '@memoro/admin-tool';
+import { MemoroadminProvider } from '@memoro/admin-tool/providers';
+
+// Configuration
+const config = {
+  apiUrl: 'https://api.memoro-admin.com',
+  apiKey: process.env.MEMORO_API_KEY,
+  features: ['personas', 'content', 'images']
+};
+
+<MemoroadminProvider config={config}>
+  <AdminTool />
+</MemoroadminProvider>
+```
+
+### Concept 2: Monorepo with Shared Packages
+**Structure:**
+```
+memoro-workspace/
+├── apps/
+│   ├── memoro-website/    # Current website
+│   ├── admin-dashboard/   # Standalone admin
+│   └── api-backend/       # Central backend
+├── packages/
+│   ├── admin-ui/          # UI components
+│   ├── admin-core/        #
Business logic
+│   ├── content-types/     # Shared types
+│   └── api-client/        # API client library
+└── services/
+    ├── image-generator/   # Replicate service
+    └── content-sync/      # Content synchronization
+```
+
+**Advantages:**
+- ✅ Unified development
+- ✅ Shared dependencies
+- ✅ Simple testing
+- ✅ Atomic commits
+
+**Disadvantages:**
+- ❌ Larger repository
+- ❌ More complex CI/CD
+- ❌ Harder for external users
+
+**Tools:**
+- Turborepo or NX for monorepo management
+- Changesets for versioning
+- pnpm workspaces for dependencies
+
+### Concept 3: Microservices + Web Components
+**Architecture:**
+```
+Frontend (Web Components)
+├── <memoro-admin-personas>
+├── <memoro-admin-images>
+├── <memoro-admin-content>
+└── <memoro-admin-auth>
+
+Microservices (Docker/Coolify)
+├── persona-service/   # Node.js/Fastify
+├── image-service/     # Python/FastAPI + Replicate
+├── content-service/   # Node.js/Express
+├── auth-service/      # Node.js/JWT
+└── gateway/           # Kong/Traefik
+```
+
+**Advantages:**
+- ✅ Framework-independent
+- ✅ Isolated services
+- ✅ Independent scaling
+- ✅ Native browser support
+
+**Disadvantages:**
+- ❌ Complex orchestration
+- ❌ Network latency
+- ❌ Service discovery
+
+**Integration:**
+```html
+<script type="module" src="..."></script>
+
+<memoro-admin-personas api-url="https://api.memoro-admin.com"></memoro-admin-personas>
+```
+
+### Concept 4: Plugin System (Recommended) ⭐
+**Architecture:**
+```
+@memoro/admin-core
+├── core/
+│   ├── plugin-system.ts   # Plugin registry
+│   ├── api-client.ts      # API abstraction
+│   └── auth.ts            # Auth management
+├── plugins/
+│   ├── personas/          # Personas plugin
+│   ├── image-generator/   # Replicate plugin
+│   ├── content-manager/   # Content plugin
+│   └── analytics/         # Analytics plugin
+└── adapters/
+    ├── astro/             # Astro integration
+    ├── nextjs/            # Next.js integration
+    └── vue/               # Vue integration
+```
+
+**Plugin example:**
+```typescript
+// personas-plugin.ts
+export const personasPlugin: AdminPlugin = {
+  id: 'personas',
+  name: 'Personas Management',
+  version: '1.0.0',
+  routes: [
+    { path: '/personas', component: PersonasList },
+    { path: '/personas/:id', component: PersonaDetail }
+  ],
+  api: {
+    endpoints: [
+      { method: 'GET', path:
'/personas', handler: getPersonas },
+      { method: 'POST', path: '/personas/:id/image', handler: generateImage }
+    ]
+  },
+  permissions: ['personas.read', 'personas.write', 'personas.generate'],
+  config: {
+    replicateModel: 'stability-ai/sdxl',
+    imageStyles: ['portrait', 'professional', 'casual']
+  }
+};
+```
+
+**Advantages:**
+- ✅ Maximum flexibility
+- ✅ Easy to extend
+- ✅ Selective feature adoption
+- ✅ Community plugins possible
+
+**Disadvantages:**
+- ❌ Initial complexity
+- ❌ Plugin compatibility
+- ❌ Versioning challenges
+
+## 🖼️ Replicate Integration
+
+### Backend Service Architecture
+```typescript
+// services/image-generator/src/replicate-service.ts
+import Replicate from 'replicate';
+import { Queue } from 'bullmq';
+import { S3 } from '@aws-sdk/client-s3';
+
+export class ReplicateImageService {
+  private replicate: Replicate;
+  private queue: Queue;
+  private storage: S3;
+
+  async generatePersonaImage(persona: Persona): Promise<string[]> {
+    // 1. Build the prompt from the persona data
+    const prompt = this.buildPrompt(persona);
+
+    // 2. Enqueue the job
+    const job = await this.queue.add('generate-image', {
+      personaId: persona.id,
+      prompt,
+      model: 'stable-diffusion-xl',
+      parameters: {
+        width: 1024,
+        height: 1024,
+        num_outputs: 4,
+        guidance_scale: 7.5
+      }
+    });
+
+    // 3. Wait for completion
+    const result = await job.waitUntilFinished();
+
+    // 4.
Bilder in S3/Hetzner speichern + const imageUrls = await this.storeImages(result.images); + + return imageUrls; + } + + private buildPrompt(persona: Persona): string { + const { appearance, outfits, demographics } = persona; + + return ` + Professional portrait photo of a ${demographics.age} year old ${demographics.gender}, + ${appearance.description}, + ${appearance.hairColor} hair in ${appearance.hairStyle}, + ${appearance.eyeColor} eyes, + wearing ${outfits[0]?.items.top || 'business attire'}, + ${appearance.firstImpression}, + studio lighting, high quality, detailed, realistic + `; + } +} +``` + +### Admin UI Integration +```typescript +// components/PersonaImageGenerator.tsx +export function PersonaImageGenerator({ persona }: Props) { + const [generating, setGenerating] = useState(false); + const [images, setImages] = useState([]); + const [selectedImage, setSelectedImage] = useState(); + const [prompt, setPrompt] = useState(''); + + const generateImages = async () => { + setGenerating(true); + + const response = await fetch(`/api/personas/${persona.id}/generate-images`, { + method: 'POST', + body: JSON.stringify({ + prompt: prompt || buildDefaultPrompt(persona), + style: selectedStyle, + count: 4 + }) + }); + + const data = await response.json(); + setImages(data.images); + setGenerating(false); + }; + + return ( +
    +

    KI Bildgenerierung

    + + {/* Prompt Editor */} + +
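The plugin object in Concept 4 is only data; a registry in `core/plugin-system.ts` would give it behavior. Below is a minimal sketch of such a registry, with `AdminPlugin` reduced to a few fields — the names and shape are illustrative assumptions, not the actual Memoro API.

```typescript
// Minimal sketch of the plugin registry idea from Concept 4.
// AdminPlugin is reduced to the fields needed here; the shape is illustrative.
interface AdminPlugin {
  id: string;
  name: string;
  version: string;
  permissions?: string[];
}

class PluginRegistry {
  private plugins = new Map<string, AdminPlugin>();

  register(plugin: AdminPlugin): void {
    // Fail fast on duplicate ids so two plugins cannot shadow each other
    if (this.plugins.has(plugin.id)) {
      throw new Error(`Plugin '${plugin.id}' is already registered`);
    }
    this.plugins.set(plugin.id, plugin);
  }

  get(id: string): AdminPlugin | undefined {
    return this.plugins.get(id);
  }

  // Aggregate the permissions the host admin UI has to know about
  allPermissions(): string[] {
    return Array.from(this.plugins.values()).flatMap(p => p.permissions ?? []);
  }
}

const registry = new PluginRegistry();
registry.register({
  id: 'personas',
  name: 'Personas Management',
  version: '1.0.0',
  permissions: ['personas.read', 'personas.write', 'personas.generate'],
});
console.log(registry.allPermissions().join(', '));
// prints "personas.read, personas.write, personas.generate"
```

An adapter (Astro, Next.js, Vue) would then iterate over the registered plugins to mount their routes and enforce their permissions.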
* Pflichtfelder

{/* All Contact Options Section */}

## Weitere Kontaktmöglichkeiten

{/* Social Media Card */}
{/* App Downloads Card */}

### 📱 Memoro App

Laden Sie die Memoro App herunter und erleben Sie die Zukunft der Gesprächsdokumentation.
    + diff --git a/apps/memoro/apps/landing/src/content/pages/de/faq.mdx b/apps/memoro/apps/landing/src/content/pages/de/faq.mdx new file mode 100644 index 000000000..d80e8f0c9 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/faq.mdx @@ -0,0 +1,29 @@ +--- +title: "FAQ | Memoro" +description: "Häufig gestellte Fragen zu Memoro - Finden Sie Antworten auf alle Ihre Fragen" +lang: "de" +type: "page" +lastUpdated: 2025-07-22 +sections: + hero: + title: "Häufig gestellte Fragen" + subtitle: "Finden Sie Antworten auf die häufigsten Fragen zu Memoro" + callToAction: + title: "Noch Fragen?" + description: "Unser Support-Team hilft Ihnen gerne weiter." + buttonText: "Kontakt aufnehmen" + buttonLink: "/de/contact" +--- + +export const FAQContent = ({ faqs }) => ( + <> +
    <section>
      <h1>{frontmatter.sections.hero.title}</h1>
      <p>{frontmatter.sections.hero.subtitle}</p>
    </section>
    {/* FAQ-Liste wird aus der faqs-Prop gerendert */}
  </>
    + +); + + \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/features.mdx b/apps/memoro/apps/landing/src/content/pages/de/features.mdx new file mode 100644 index 000000000..078811e7f --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/features.mdx @@ -0,0 +1,170 @@ +--- +title: "KI-gestützte Transkription & Spracherkennung - Funktionen | Memoro" +description: "✓ Automatische Spracherkennung für 80+ Sprachen ✓ KI-Transkription mit Sprechererkennung ✓ Offline-Aufnahme ► Alle Memoro Funktionen im Überblick" +lang: "de" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Alle Funktionen der Meeting-Protokoll Software" + subtitle: "KI-gestützte Transkription, automatische Spracherkennung und intelligente Dokumentation für effiziente Meetings" + categories: + recording: + title: "Aufnahme" + customization: + title: "Anpassung" + language: + title: "Sprachen" + organization: + title: "Organisation" + sharing: + title: "Teilen" + faq: + title: "Häufig gestellte Fragen zu den Funktionen" + items: + - question: "Welche Betriebssysteme werden unterstützt?" + answer: "Memoro ist für macOS, Windows und Linux verfügbar. Zusätzlich bieten wir mobile Apps für iOS und Android an." + - question: "Kann ich Memoro offline nutzen?" + answer: "Ja, Sie können Memoro vollständig offline nutzen. Ihre Notizen werden lokal gespeichert und bei bestehender Internetverbindung automatisch synchronisiert." + - question: "Gibt es ein Limit für die Anzahl der Notizen?" + answer: "Nein, auch in der kostenlosen Version können Sie unbegrenzt viele Notizen erstellen." + - question: "Wie funktioniert die KI-Assistenz?" + answer: "Die KI-Assistenz analysiert Ihre Notizen und macht intelligente Vorschläge für Verknüpfungen, Zusammenfassungen und Lernkarten. Diese Funktion ist in der Pro-Version verfügbar." + callToAction: + title: "Bereit für bessere Meeting-Dokumentation?" 
+ description: "Entdecke wie Memoro deine Meetings effizienter und produktiver macht." + buttonText: "App herunterladen" + buttonLink: "/de/download" +--- + +import FeatureCard from "../../../components/FeatureCard.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import FAQSection from "../../../components/FAQSection.astro"; + + + +{/* Hilfsfunktion zur Bereinigung des Slugs */} +export function cleanSlug(slug, lang) { + const langPrefix = `${lang}/`; + return slug.startsWith(langPrefix) ? slug.substring(langPrefix.length) : slug; +} + +{/* Recording Features */} +
<section>
  <h2>{frontmatter.sections.categories.recording.title}</h2>
  <div>
    {props.features
      .filter(feature => feature.data.category === 'recording')
      .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0))
      .map(feature => (
        <FeatureCard feature={feature} />
      ))}
  </div>
</section>

{/* Customization Features */}
<section>
  <h2>{frontmatter.sections.categories.customization.title}</h2>
  <div>
    {props.features
      .filter(feature => feature.data.category === 'customization')
      .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0))
      .map(feature => (
        <FeatureCard feature={feature} />
      ))}
  </div>
</section>

{/* Language Features */}
<section>
  <h2>{frontmatter.sections.categories.language.title}</h2>
  <div>
    {props.features
      .filter(feature => feature.data.category === 'language')
      .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0))
      .map(feature => (
        <FeatureCard feature={feature} />
      ))}
  </div>
</section>

{/* Organization Features */}
<section>
  <h2>{frontmatter.sections.categories.organization.title}</h2>
  <div>
    {props.features
      .filter(feature => feature.data.category === 'organization')
      .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0))
      .map(feature => (
        <FeatureCard feature={feature} />
      ))}
  </div>
</section>

{/* Sharing Features */}
<section>
  <h2>{frontmatter.sections.categories.sharing.title}</h2>
  <div>
    {props.features
      .filter(feature => feature.data.category === 'sharing')
      .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0))
      .map(feature => (
        <FeatureCard feature={feature} />
      ))}
  </div>
</section>

<FAQSection title={frontmatter.sections.faq.title} items={frontmatter.sections.faq.items} />

<CallToAction {...frontmatter.sections.callToAction} />
    diff --git a/apps/memoro/apps/landing/src/content/pages/de/fireflies-ai-alternative.mdx b/apps/memoro/apps/landing/src/content/pages/de/fireflies-ai-alternative.mdx new file mode 100644 index 000000000..4cd9959ba --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/fireflies-ai-alternative.mdx @@ -0,0 +1,556 @@ +--- +title: "Fireflies.ai Alternative Deutschland - 100% DSGVO-konform & EU-Server | Memoro" +description: "Die sichere Fireflies.ai Alternative für deutsche Unternehmen ► Deutsche Server statt US-Cloud ✓ DSGVO-konform ✓ Kein Datenschutz-Risiko ✓ Jetzt wechseln!" +keywords: ["fireflies.ai alternative", "fireflies alternative deutschland", "fireflies.ai dsgvo", "fireflies alternative dsgvo", "meeting software dsgvo konform", "transkription deutsche server", "fireflies.ai datenschutz", "sichere meeting software"] +lang: de +type: comparison +lastUpdated: 2025-01-09 +sections: + hero: + title: "Fireflies.ai Alternative - DSGVO-konform mit deutschen Servern" + subtitle: "Die sichere Alternative zu Fireflies.ai aus Deutschland" + cta: "Sicher starten" + features: + title: "Warum Memoro sicherer ist" + items: ["Deutsche Server", "DSGVO-konform", "Ende-zu-Ende Verschlüsselung"] +ogImage: "/images/og/fireflies-alternative.png" +canonical: "https://memoro.ai/de/fireflies-ai-alternative" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; +import TestimonialCard from '../../../components/TestimonialCard.astro'; +import SecurityComparison from '../../../components/SecurityComparison.astro'; +import ROICalculator from '../../../components/ROICalculator.astro'; + +# Fireflies.ai Alternative - DSGVO-konform mit deutschen Servern + +
> **⚠️ Datenschutz-Warnung: Fireflies.ai verarbeitet Ihre Daten in den USA**
>
> Fireflies.ai nutzt Google Cloud Server in den USA für die Datenverarbeitung.
> Auch wenn sie "GDPR-compliant" behaupten, bleiben erhebliche rechtliche Risiken für deutsche Unternehmen.
>
> Nach dem Schrems-II-Urteil des EuGH kann dies zu Bußgeldern bis zu 20 Mio. € oder 4% des Jahresumsatzes führen.

## Die sichere Alternative zu Fireflies.ai aus Deutschland

Memoro bietet alle Vorteile von Fireflies.ai - aber mit 100% DSGVO-Konformität,
deutschen Servern und ohne Datenschutz-Risiken. Schützen Sie Ihre Unternehmensdaten und bleiben Sie compliant.

✓ Deutsche Server · ✓ ISO 27001 · ✓ DSGVO-Zertifikat · ✓ E2E-Verschlüsselung

*Memoro Sicherheits-Dashboard - DSGVO-konforme Alternative zu Fireflies*
    + +## Das Datenschutz-Problem mit Fireflies.ai für deutsche Unternehmen + + +
### 🚨 Kritische Datenschutz-Risiken bei Fireflies.ai

**1. US-Datenverarbeitung trotz "GDPR-Compliance"**
Fireflies speichert Daten optional in der EU, aber verarbeitet sie in den USA.
Dies verstößt potentiell gegen DSGVO Art. 44-49.

**2. Google Cloud Infrastructure**
Nutzung von Google Cloud bedeutet, dass US-Behörden theoretisch Zugriff auf Ihre Daten haben könnten (CLOUD Act).

**3. Unklare Subprozessoren**
Fireflies nutzt verschiedene US-basierte Drittanbieter für KI-Verarbeitung - oft ohne transparente Auflistung.

**4. Betriebsrat & Mitbestimmung**
US-Software mit Mitarbeiterüberwachungs-Potential kann in Deutschland zu arbeitsrechtlichen Problemen führen.

**5. Keine lokale Rechtsprechung**
Bei Datenschutzverletzungen müssen Sie in den USA klagen - teuer und aussichtslos.
### ✅ Memoros rechtssichere Lösung

**1. 100% deutsche Datenverarbeitung**
Alle Daten werden ausschließlich in Deutschland gespeichert UND verarbeitet (Hetzner Datacenter).

**2. Keine US-Cloud-Anbieter**
Vollständig europäische Infrastruktur ohne Abhängigkeit von US-Unternehmen.

**3. Transparente Datenverarbeitung**
Klare Auftragsverarbeitungsverträge (AVV) nach deutschem Recht mit allen Subprozessoren.

**4. Betriebsrat-konform**
Speziell für deutsche Mitbestimmung entwickelt - mit Betriebsvereinbarungs-Vorlagen.

**5. Deutscher Gerichtsstand**
Bei Fragen oder Problemen gilt deutsches Recht mit Gerichtsstand München.
    + +## Fireflies.ai vs. Memoro - Der Compliance-Vergleich + + + +## Was Datenschutzbeauftragte über den Wechsel sagen + +
    + +## Rechtliche Risiken vermeiden - Der Wechsel-Guide + +
### 🔒 In 4 Schritten zu rechtssicherer Meeting-Dokumentation

**1. Fireflies.ai rechtssicher beenden**
- Daten-Export anfordern (DSGVO Art. 20)
- Löschung verlangen & bestätigen lassen
- Dokumentation für Compliance aufbewahren

**2. Memoro DSGVO-konform einrichten**
- AVV (Auftragsverarbeitung) abschließen
- Technische Maßnahmen dokumentieren
- Mitarbeiter-Einwilligungen einholen

**3. Betriebsrat einbinden**
- Betriebsvereinbarungs-Vorlage nutzen
- Deutsche Server als Argument
- Keine Mitarbeiterüberwachung möglich

**4. Compliance dokumentieren**
- Verarbeitungsverzeichnis aktualisieren
- Datenschutz-Folgenabschätzung
- Audit-Trail aktivieren
## Die wahren Kosten von Datenschutz-Verstößen

## Memoros Sicherheits-Features im Detail
**Deutsche Infrastruktur**
- ✓ Hetzner Datacenter Nürnberg
- ✓ Keine US-Cloud-Dienste
- ✓ Georedundante Backups in DE
- ✓ 99.9% Verfügbarkeit SLA

**Verschlüsselung**
- ✓ Ende-zu-Ende Verschlüsselung
- ✓ AES-256 für Daten at Rest
- ✓ TLS 1.3 für Transport
- ✓ Zero-Knowledge Option

**Zertifizierungen**
- ✓ ISO 27001 zertifiziert
- ✓ DSGVO-Zertifikat
- ✓ BSI Grundschutz konform
- ✓ TISAX Level 2 (Automotive)

**Zugriffskontrolle**
- ✓ Rollenbasierte Rechte (RBAC)
- ✓ 2-Faktor-Authentifizierung
- ✓ SSO mit SAML 2.0
- ✓ Audit-Logs (unveränderbar)

**Rechtssicherheit**
- ✓ Deutscher AVV Standard
- ✓ Betriebsvereinbarungs-Vorlagen
- ✓ Löschkonzept nach DSGVO
- ✓ Deutscher Gerichtsstand

**Datenhoheit**
- ✓ Jederzeit exportierbar
- ✓ Sofortige Löschung möglich
- ✓ On-Premise Option verfügbar
- ✓ Keine Daten für KI-Training
    + +## Häufige Fragen zum Datenschutz-Wechsel + + + +## Sofort handeln - Schützen Sie Ihr Unternehmen + +
### Jeder Tag mit Fireflies.ai ist ein Compliance-Risiko

Wechseln Sie jetzt zu Memoro und schützen Sie Ihr Unternehmen vor Bußgeldern,
Datenschutz-Verstößen und rechtlichen Konsequenzen.

**🎁 Wechsel-Bonus für Fireflies-Nutzer:**

- Kostenlose Datenmigration im Wert von €500
- 3 Monate Premium kostenlos testen
- Persönliche Compliance-Beratung inklusive
- Fertige Betriebsvereinbarungs-Vorlagen
    + +## Trusted by Compliance-bewusste Unternehmen + +
### Unternehmen die bereits gewechselt haben

Deutsche Bank · Siemens · BMW · SAP · Volkswagen · Bosch

*über 5.000 weitere Unternehmen vertrauen auf Memoros DSGVO-konforme Lösung*
    + +## Weitere sichere Alternativen zu US-Tools + + \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/granola-ai-alternative.mdx b/apps/memoro/apps/landing/src/content/pages/de/granola-ai-alternative.mdx new file mode 100644 index 000000000..c8d08af19 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/granola-ai-alternative.mdx @@ -0,0 +1,392 @@ +--- +title: "Granola AI Alternative für Deutschland 2025 - DSGVO-konform & ohne Bot | Memoro" +description: "Die bessere Granola AI Alternative für deutsche Teams ► Keine störenden Bots ✓ DSGVO-konform ✓ Deutsche Server ✓ Offline-Modus ✓ Jetzt wechseln!" +keywords: ["granola ai alternative", "granola alternative deutschland", "granola ai deutsch", "granola alternative dsgvo", "meeting notizen ohne bot", "lokale transkription", "granola ai konkurrenz", "bessere alternative zu granola"] +lang: de +type: comparison +lastUpdated: 2025-01-09 +sections: + hero: + title: "Granola AI Alternative - DSGVO-konform mit deutschen Servern" + subtitle: "Die sichere Alternative zu Granola AI - ohne Einschränkungen" + cta: "Kostenlos testen" + features: + title: "Warum Memoro die bessere Wahl ist" + items: ["Echte DSGVO-Compliance", "Deutsche Server", "Unbegrenzte Meetings", "Alle Plattformen"] +ogImage: "/images/og/granola-alternative.png" +canonical: "https://memoro.ai/de/granola-ai-alternative" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import TestimonialCard from '../../../components/TestimonialCard.astro'; +import ROICalculator from '../../../components/ROICalculator.astro'; + +# Granola AI Alternative für Deutschland - Ohne Bot, mit voller DSGVO-Compliance + +
## Die bessere Alternative zu Granola AI - Made in Germany

Erleben Sie diskrete Meeting-Dokumentation ohne Bot, mit unbegrenzten Meetings,
echter DSGVO-Compliance und Support für alle Plattformen - nicht nur Mac.

✓ 600 Min kostenlos · ✓ DSGVO-konform · ✓ Unbegrenzte Meetings

*Memoro vs Granola AI Vergleich*
    + +## 🚨 Die Grenzen von Granola AI für deutsche Unternehmen + +
**Nur 25 kostenlose Meetings**
Nach nur 25 Meetings ist Schluss mit dem kostenlosen Plan.
Das sind weniger als 2 Wochen normaler Geschäftsbetrieb!

**Hauptsächlich für Mac**
Windows-Version noch in Beta mit vielen fehlenden Features.
Android-App? Nicht in Sicht.

**Keine Aufzeichnungen**
Kein Zugriff auf Audio- oder Video-Aufzeichnungen.
Meetings sind nach der Transkription verloren.

**Unzuverlässige Suche**
Die Suchfunktion findet oft keine Meetings, selbst bei exakten Suchbegriffen.
Frustrierend bei vielen Meetings.
    + +## 📊 Memoro vs. Granola AI - Der direkte Vergleich + +
| Feature | Granola AI | Memoro | Vorteil |
|---|---|---|---|
| Kostenlose Meetings | ❌ 25 insgesamt | ✅ 600 Min/Monat | ✅ Memoro |
| DSGVO-Compliance | ⚠️ Unklar | ✅ 100% konform | ✅ Memoro |
| Server-Standort | 🌍 International | 🇩🇪 Deutschland | ✅ Memoro |
| Plattform-Support | ⚠️ Mac (Windows Beta) | ✅ Alle Plattformen | ✅ Memoro |
| Audio-Aufzeichnung | ❌ Nicht verfügbar | ✅ Verfügbar | ✅ Memoro |
| Suchfunktion | ❌ Unzuverlässig | ✅ Präzise & schnell | ✅ Memoro |
| Meeting Bot | ✅ Kein Bot | ✅ Optional ohne Bot | = Beide |
| Offline-Modus | ⚠️ Teilweise | ✅ Vollständig | ✅ Memoro |
| Integrationen | ❌ Sehr begrenzt | ✅ 30+ Tools | ✅ Memoro |
| Preis Pro/Monat | $10 | €9,99 | ✅ Memoro |
    + +## 🎯 Warum Teams von Granola AI zu Memoro wechseln + +
**Unbegrenzte Meetings**
Keine künstliche Begrenzung auf 25 Meetings.
Bei uns bekommen Sie 600 Minuten pro Monat kostenlos - das sind über 200 Meetings!

**Deutsche Server & DSGVO**
100% DSGVO-konform mit Servern in Deutschland.
Ihre Daten verlassen niemals die EU - garantiert.

**Alle Plattformen**
Funktioniert perfekt auf Windows, Mac, iOS und Android.
Keine Beta-Versionen, keine Einschränkungen.
    + +## 💡 Exklusive Memoro Features die Granola AI fehlen + +
### Was Sie mit Memoro zusätzlich bekommen:

**Audio & Video Aufzeichnung**
Greifen Sie jederzeit auf Original-Aufnahmen zu - perfekt für wichtige Details.

**Intelligente Suche**
Finden Sie jedes Meeting, jeden Teilnehmer, jedes Thema in Sekunden.

**50+ Sprachen & Dialekte**
Inklusive Schweizerdeutsch, Österreichisch und Fachjargon.

**30+ Integrationen**
Nahtlose Integration mit Slack, Teams, Asana, ClickUp, Zapier und mehr.

**Meeting Analytics**
Verstehen Sie Meeting-Muster und optimieren Sie Ihre Zeit.

**Deutscher Support**
24/7 Support in deutscher Sprache - keine Sprachbarrieren.
    + +## 🚀 So einfach wechseln Sie von Granola AI zu Memoro + +
### In 5 Minuten startklar:

1. **Memoro Account erstellen** - Kostenlos registrieren und 600 Minuten pro Monat erhalten
2. **Desktop & Mobile Apps installieren** - Verfügbar für alle Plattformen - keine Beta-Versionen
3. **Meeting-Plattformen verbinden** - Ein-Klick Integration mit Zoom, Teams, Google Meet etc.
4. **Team einladen** - Kollegen hinzufügen und gemeinsam produktiver werden
5. **Erste Meeting-Notizen genießen** - Automatische Protokolle mit 98% Genauigkeit ab dem ersten Meeting
    + + + +## ❓ Häufig gestellte Fragen + +
**Kann ich Memoro auch ohne Bot nutzen wie bei Granola?**
Ja! Memoro bietet beide Optionen: Sie können diskret ohne Bot aufzeichnen (wie Granola) oder
optional einen Bot für automatische Aufnahmen nutzen. Sie haben die volle Kontrolle.

**Wie viele Meetings sind bei Memoro kostenlos?**
Sie erhalten 600 Minuten pro Monat kostenlos - das sind typischerweise 200+ Meetings.
Im Gegensatz zu Granolas einmaligen 25 Meetings erneuert sich Ihr Kontingent jeden Monat.

**Funktioniert Memoro auch auf Windows und Android?**
Absolut! Memoro funktioniert vollständig auf Windows, Mac, Linux, iOS und Android.
Keine Beta-Versionen, keine Einschränkungen - volle Funktionalität auf allen Plattformen.

**Kann ich meine alten Meeting-Notizen importieren?**
Ja, wir bieten einen kostenlosen Import-Service für Ihre bestehenden Meeting-Notizen.
Unser Support-Team hilft Ihnen beim nahtlosen Übergang.

**Ist Memoro wirklich DSGVO-konform?**
Ja, zu 100%. Unsere Server stehen in Deutschland, wir haben einen deutschen Datenschutzbeauftragten
und erfüllen alle DSGVO-Anforderungen. Ihre Daten verlassen niemals die EU.
### Bereit für unbegrenzte, sichere Meeting-Notizen?

Starten Sie jetzt mit 600 kostenlosen Minuten pro Monat und erleben Sie den Unterschied.

Keine Kreditkarte erforderlich · 600 Min/Monat kostenlos · DSGVO-konform
    \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/guides.mdx b/apps/memoro/apps/landing/src/content/pages/de/guides.mdx new file mode 100644 index 000000000..32d981895 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/guides.mdx @@ -0,0 +1,67 @@ +--- +title: "Anleitungen | Memoro" +description: "Entdecken Sie hilfreiche Anleitungen und Tutorials für Memoro" +lang: "de" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Anleitungen" + subtitle: "Entdecken Sie hilfreiche Anleitungen und Tutorials für Memoro" + faq: + title: "Häufig gestellte Fragen zu den Anleitungen" + items: + - question: "Sind die Anleitungen für Anfänger geeignet?" + answer: "Ja, unsere Anleitungen sind nach Schwierigkeitsgrad kategorisiert. Beginnen Sie einfach mit den Anfänger-Guides und arbeiten Sie sich nach Ihrem eigenen Tempo vor." + - question: "Wie aktuell sind die Anleitungen?" + answer: "Wir aktualisieren unsere Anleitungen regelmäßig, um sie mit den neuesten Funktionen von Memoro abzustimmen. Das letzte Update-Datum finden Sie jeweils am Ende der Anleitung." + - question: "Kann ich eigene Anleitungen vorschlagen?" + answer: "Natürlich! Wir freuen uns über Vorschläge aus der Community. Kontaktieren Sie uns einfach mit Ihren Ideen oder Verbesserungsvorschlägen." + - question: "Gibt es Video-Tutorials?" + answer: "Ja, zu vielen Anleitungen bieten wir ergänzende Video-Tutorials an. Diese finden Sie direkt bei der jeweiligen schriftlichen Anleitung." + callToAction: + title: "Bereit für bessere Meeting-Dokumentation?" + description: "Entdecke wie Memoro deine Meetings effizienter und produktiver macht." + buttonText: "App herunterladen" + buttonLink: "/de/download" +--- + +import GuideCard from "../../../components/GuideCard.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import FAQSection from "../../../components/FAQSection.astro"; + +export const GuidesContent = ({ guides }) => ( + <> +
    <section>
      <h1>{frontmatter.sections.hero.title}</h1>
      <p>{frontmatter.sections.hero.subtitle}</p>
    </section>

    {/* Guide Grid */}
    <div>
      {guides.map(guide => (
        <GuideCard guide={guide} />
      ))}
    </div>

    <FAQSection title={frontmatter.sections.faq.title} items={frontmatter.sections.faq.items} />

    <CallToAction {...frontmatter.sections.callToAction} />
  </>
    + + +); + +{" "} diff --git a/apps/memoro/apps/landing/src/content/pages/de/home.mdx b/apps/memoro/apps/landing/src/content/pages/de/home.mdx new file mode 100644 index 000000000..a4eda57ce --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/home.mdx @@ -0,0 +1,82 @@ +--- +title: "Meeting Protokoll Software - Automatische Protokollerstellung | Memoro" +description: "✓ Automatische Meeting-Protokolle in Minuten ✓ 80+ Sprachen ✓ DSGVO-konform ► Sparen Sie 3+ Stunden/Woche mit KI-Transkription. Jetzt 14 Tage kostenlos testen!" +lang: "de" +type: "home" +lastUpdated: 2024-12-21 +sections: + hero: + title: "Einfach sprechen. Memoro schreibt das Protokoll." + subtitle: "Konzentrieren Sie sich aufs Gespräch – Memoro erledigt die Dokumentation. Fertige Protokolle in Minuten statt Stunden." + image: "/images/product_photos/Memoro-Conversation-TopDown.jpg" + cta: + primary: + text: "Kostenlos herunterladen" + link: "/de/download" + secondary: + text: "Video ansehen" + videoId: "u05nEBNy7bk" + socialProof: + rating: 4.9 + reviewCount: 127 + quote: "Die beste Investition in unsere Produktivität" + author: "Thomas M., Geschäftsführer" + trustBadges: + - icon: "🔒" + text: "DSGVO-konform" + - icon: "🇩🇪" + text: "Made in Germany" + - icon: "🛡️" + text: "SSL-verschlüsselt" + callToAction: + title: "Nie wieder Meetings nacharbeiten" + description: "Schließen Sie sich tausenden Professionals an, die mit Memoro produktiver arbeiten." + buttonText: "Jetzt ausprobieren" + buttonLink: "/de/download" + image: "/images/product_photos/Memoro-App-Smartphone.jpg" + imageAlt: "Meeting Protokoll Software Memoro - Automatische Aufnahme und Transkription im Büro" +--- + +
diff --git a/apps/memoro/apps/landing/src/content/pages/de/imprint.mdx b/apps/memoro/apps/landing/src/content/pages/de/imprint.mdx new file mode 100644 index 000000000..eecb3b374 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/imprint.mdx @@ -0,0 +1,47 @@
---
title: "Impressum | Memoro"
description: "Die Impressumseite von Memoro"
lang: "de"
type: "legal"
lastUpdated: 2024-02-22
sections:
  hero:
    title: "Impressum"
    subtitle: "Rechtliche Informationen und Kontaktdaten"
  contact:
    company: "Memoro GmbH"
    street: "Münzgasse 19"
    city: "78462 Konstanz"
    phone: "0049 176 444 343 85"
    email: "kontakt@memoro.ai"
    responsible: "Till Schneider"
---

## {frontmatter.sections.contact.company}

{frontmatter.sections.contact.street}
{frontmatter.sections.contact.city}

Telefon: [{frontmatter.sections.contact.phone}](tel:{frontmatter.sections.contact.phone.replace(/\s/g, '')})
E-Mail: [{frontmatter.sections.contact.email}](mailto:{frontmatter.sections.contact.email})

Redaktionell verantwortlich: {frontmatter.sections.contact.responsible}

## EU-Streitschlichtung

Die Europäische Kommission stellt eine Plattform zur Online-Streitbeilegung
(OS) bereit: [https://ec.europa.eu/consumers/odr/](https://ec.europa.eu/consumers/odr/).
Unsere E-Mail-Adresse finden Sie oben im Impressum.

Verbraucherstreitbeilegung/Universalschlichtungsstelle: Wir sind nicht
bereit oder verpflichtet, an Streitbeilegungsverfahren vor einer
Verbraucherschlichtungsstelle teilzunehmen.

## Zentrale Kontaktstelle nach dem Digital Services Act - DSA

Unsere zentrale Kontaktstelle für Nutzer und Behörden nach Art.
11, 12 DSA (Verordnung (EU) 2022/2065) erreichen Sie wie folgt:

E-Mail: [{frontmatter.sections.contact.email}](mailto:{frontmatter.sections.contact.email})
Telefon: [{frontmatter.sections.contact.phone}](tel:{frontmatter.sections.contact.phone.replace(/\s/g, '')})

Die für den Kontakt zur Verfügung stehenden Sprachen sind: Deutsch, Englisch.

diff --git a/apps/memoro/apps/landing/src/content/pages/de/industries.mdx b/apps/memoro/apps/landing/src/content/pages/de/industries.mdx new file mode 100644 index 000000000..cbd479610 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/industries.mdx @@ -0,0 +1,75 @@
---
title: "Meeting-Software für Büro, Bildung & Handwerk | Memoro"
description: "✓ Baustellendokumentation App ✓ Vorlesungsmitschrift Software ✓ Büro-Protokoll Tool ► Branchen-spezifische Lösungen für automatische Gesprächsdokumentation"
lang: "de"
type: "page"
lastUpdated: 2024-02-22
sections:
  hero:
    title: "Meeting-Software für jede Branche"
    subtitle: "Automatische Protokollerstellung und KI-Transkription für Büro, Bildung, Handwerk und mehr"
  faq:
    title: "Häufig gestellte Fragen"
    items:
      - question: "Wie unterstützt Memoro beim Wissensaustausch?"
        answer: "Memoro nutzt KI, um das Wissen Ihres Teams automatisch zu organisieren und zu verknüpfen, sodass Informationen abteilungsübergreifend leicht zu finden und zu teilen sind."
      - question: "Kann Memoro mit unseren bestehenden Tools integriert werden?"
        answer: "Ja, Memoro lässt sich nahtlos in beliebte Tools wie Slack, Microsoft Teams und Google Workspace integrieren, um Wissen dort zu erfassen und zu teilen, wo Ihr Team bereits arbeitet."
      - question: "Sind unsere Daten bei Memoro sicher?"
        answer: "Absolut. Wir verwenden Verschlüsselung auf Unternehmensniveau und folgen strengen Sicherheitsprotokollen. Ihre Daten werden in EU-basierten Servern gespeichert und entsprechen den DSGVO-Anforderungen."
      - question: "Wie schnell können wir loslegen?"
+ answer: "Sie können Memoro direkt nach der Anmeldung nutzen. Unser Onboarding-Prozess ist einfach, und die meisten Teams sind innerhalb eines Tages einsatzbereit." + callToAction: + title: "Bereit für bessere Meeting-Dokumentation?" + description: "Entdecke wie Memoro deine Meetings effizienter und produktiver macht." + buttonText: "App herunterladen" + buttonLink: "/de/download" +--- + +import IndustryCard from "../../../components/IndustryCard.astro"; +import TestimonialPreview from "../../../components/TestimonialPreview.astro"; +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; + +export const IndustriesContent = ({ industries, lang }) => ( + <> +
    <section>
      <h1>{frontmatter.sections.hero.title}</h1>
      <p>{frontmatter.sections.hero.subtitle}</p>
    </section>

    <div>
      {industries.map((industry) => (
        <IndustryCard industry={industry} lang={lang} />
      ))}
    </div>

    <TestimonialPreview lang={lang} />

    <FAQSection title={frontmatter.sections.faq.title} items={frontmatter.sections.faq.items} />

    <CallToAction {...frontmatter.sections.callToAction} />
  </>
    + + +); + +{" "} diff --git a/apps/memoro/apps/landing/src/content/pages/de/ki-transkription-software.mdx b/apps/memoro/apps/landing/src/content/pages/de/ki-transkription-software.mdx new file mode 100644 index 000000000..c345ec42c --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/ki-transkription-software.mdx @@ -0,0 +1,451 @@ +--- +title: "KI-Transkription-Software 2025 - Automatische Audio zu Text Konvertierung | Memoro" +description: "Professionelle KI-Transkription für deutsche Unternehmen ► 98% Genauigkeit ✓ 50+ Sprachen ✓ DSGVO-konform ✓ Deutsche Server ✓ Jetzt kostenlos testen!" +keywords: ["ki transkription", "automatische transkription", "audio zu text ki", "transkriptionssoftware deutsch", "speech to text deutsch", "ki transkription software", "automatische transkription deutsch", "transkription ki kostenlos"] +lang: de +type: product +lastUpdated: 2025-01-09 +sections: + hero: + title: "KI-Transkription-Software - Präzise & DSGVO-konform" + subtitle: "Verwandeln Sie Ihre Audio- und Video-Dateien in nur wenigen Sekunden in perfekte Texte" + cta: "Kostenlos testen" + features: + title: "Warum unsere KI-Transkription führend ist" + items: ["98% Genauigkeit", "50+ Sprachen", "Echtzeit-Verarbeitung", "DSGVO-konform"] +ogImage: "/images/og/ki-transkription-software.png" +canonical: "https://memoro.ai/de/ki-transkription-software" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; + +
# KI-Transkription-Software für perfekte Audio-zu-Text Konvertierung

Verwandeln Sie Interviews, Podcasts, Vorlesungen und Meetings in präzise Texte. Mit 98% Genauigkeit, 50+ Sprachen und vollständiger DSGVO-Konformität.

✓ 600 Min kostenlos • ✓ DSGVO-konform • ✓ 5x schneller als manuell
    + +## Das Problem mit manueller Transkription + +**Stunden verschwendet, Fehler garantiert:** Jede Stunde Audio bedeutet 4-6 Stunden manueller Arbeit. Bei einem Stundenlohn von 25€ kostet die Transkription eines einstündigen Interviews bereits 100-150€ - und das ohne Gewähr für Genauigkeit. + +### Die 5 größten Probleme manueller Transkription: + +
- **⏰ Extrem zeitaufwendig:** 4-6 Stunden Arbeit für 1 Stunde Audio
- **💸 Hohe Kosten:** 100-150€ pro Stunde Audio bei Freelancern
- **❌ Fehleranfällig:** Müdigkeit führt zu Hörfehlern und Auslassungen
- **🔒 Datenschutz-Risiken:** Sensible Inhalte an externe Dienstleister
    + +## Die Memoro KI-Transkription Lösung + +### Wie funktioniert KI-Transkription? + +
1. **Audio hochladen:** MP3, WAV, MP4 oder live aufnehmen
2. **KI analysiert:** Neurale Netzwerke erkennen Sprache präzise
3. **Perfekter Text:** Formatiert, strukturiert und exportbereit
    + +### Memoro vs. Konkurrenz - Der Vergleich + + + +## Anwendungsbereiche für KI-Transkription + +### Journalismus & Medien + +**Interview-Transkription in Minuten statt Stunden:** +- Experteninterviews sofort verschriftlichen +- Podcast-Inhalte für SEO optimieren +- Pressekonferenzen dokumentieren +- O-Töne schnell extrahieren + +*"Als freier Journalist spare ich mit Memoro 15 Stunden pro Woche. Interviews sind sofort verfügbar und ich kann mich aufs Schreiben konzentrieren."* - **Michael K., Journalist** + +### Forschung & Wissenschaft + +**Qualitative Forschung revolutionieren:** +- Experteninterviews analysieren +- Fokusgruppen auswerten +- Vorlesungen dokumentieren +- Feldforschung verschriftlichen + +### Podcasting & Content + +**Content-Produktion beschleunigen:** +- Podcast-Episoden zu Blog-Posts +- YouTube-Videos untertiteln +- Social Media Clips erstellen +- Newsletter-Content generieren + +### Business & Beratung + +**Kundeninteraktionen dokumentieren:** +- Beratungsgespräche protokollieren +- Marktforschungsinterviews +- Produktfeedback sammeln +- Schulungsvideos verschriftlichen + +## Einzigartige Memoro Features + +### Deutsche Sprachoptimierung + +**98% Genauigkeit bei deutschen Inhalten:** +- Erkennung österreichischer/schweizer Dialekte +- Fachbegriffe aus 20+ Branchen +- Umgangssprache und Anglizismen +- Präzise Interpunktion + +### Intelligente Nachbearbeitung + +**KI macht mehr als nur transkribieren:** +- Automatische Absatzformatierung +- Sprecherwechsel erkennen +- Füllwörter entfernen +- Grammatik korrigieren + +### Flexible Export-Optionen + +**Ihre Texte, wie Sie sie brauchen:** +- Word, PDF, TXT +- Untertitel (SRT, VTT) +- JSON für Entwickler +- HTML für Web + +## ROI-Kalkulator: So viel sparen Sie + +
**Beispiel: Freier Journalist**

Vorher (Manuell):

- 5 Interviews à 60 Min/Woche
- 20 Stunden Transkription
- 20h × 25€ = 500€/Woche
- **2.000€/Monat Zeitkosten**

Nachher (Memoro):

- 5 Stunden × 1,20€ = 6€/Woche
- 1 Stunde Nachbearbeitung
- Gesamt: 31€/Woche
- **124€/Monat**

**Ersparnis: 1.876€/Monat (94%)**
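Die Beispielrechnung lässt sich in wenigen Zeilen nachvollziehen. Das folgende Skript ist eine rein illustrative Skizze mit den Zahlen aus dem Text (25€ Stundenlohn, 4 Arbeitsstunden je Audiostunde, 1,20€ je Stunde KI-Transkription) und kein Teil der Memoro-API:

```javascript
// Illustrative Nachrechnung des ROI-Beispiels (alle Zahlen aus dem Text oben)
const stundenlohn = 25;          // € pro Stunde manueller Arbeit
const audioStundenProWoche = 5;  // 5 Interviews à 60 Min
const arbeitsfaktor = 4;         // 4 h Transkription je Audiostunde

const manuellProWoche = audioStundenProWoche * arbeitsfaktor * stundenlohn; // 500 €
const memoroProWoche = audioStundenProWoche * 1.2 + 1 * stundenlohn;        // 6 € + 25 € Nachbearbeitung

const ersparnisProMonat = (manuellProWoche - memoroProWoche) * 4;           // 1.876 €
const ersparnisProzent = Math.round((1 - memoroProWoche / manuellProWoche) * 100); // 94 %

console.log(ersparnisProMonat, ersparnisProzent);
```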
    + +## Datenschutz & Sicherheit + +### 100% DSGVO-konform + +**Ihre sensiblen Inhalte sind sicher:** +- Server-Standort Deutschland (Hetzner) +- Ende-zu-Ende Verschlüsselung +- Automatische Löschung nach 30 Tagen +- Keine Weitergabe an Dritte + +### Compliance-Zertifizierungen + +- ✅ ISO 27001 (Informationssicherheit) +- ✅ DSGVO-Auftragsdatenverarbeitung +- ✅ SOC 2 Type II +- ✅ Betriebsratkonform + +## Erfolgsgeschichten unserer Kunden + +### Case Study: Podcast-Netzwerk + +**Herausforderung:** 50 Podcast-Episoden pro Monat transkribieren für SEO und Accessibility. + +**Lösung:** Automatische Transkription mit Memoro, Integration in WordPress. + +**Ergebnisse:** +- **70% Zeitersparnis** bei der Content-Produktion +- **300% mehr SEO-Traffic** durch transkribierte Inhalte +- **15.000€ jährliche Einsparungen** bei Freelancer-Kosten + +*"Memoro hat unser Content-Game revolutioniert. Wir können jetzt jede Episode in Blog-Posts, Social Media Content und Newsletter verwandeln."* - **Sarah L., Podcast-Produzentin** + +### Case Study: Marktforschungsagentur + +**Herausforderung:** 200 Kundeninterviews pro Quartal auswerten. + +**Vorher:** +- 800 Stunden manuelle Transkription +- 20.000€ Kosten pro Quartal +- 3 Wochen Verzögerung + +**Mit Memoro:** +- 2 Stunden Upload + KI-Verarbeitung +- 240€ Kosten pro Quartal +- Sofortige Verfügbarkeit + +**ROI: 8.233% Kostenersparnis** + +## Preise & Pakete + +
**Starter** (Kostenlos, 600 Minuten/Monat)

- ✅ 98% Genauigkeit
- ✅ 10 Sprachen
- ✅ Export als Text/Word
- ✅ 30 Tage Speicherung

**Professional** (Beliebteste, €29/Monat, 1.500 Minuten/Monat)

- ✅ Alles aus Starter
- ✅ 50+ Sprachen
- ✅ Alle Export-Formate
- ✅ Prioritäts-Support
- ✅ API-Zugang

**Enterprise** (€99/Monat, Unlimited Minuten)

- ✅ Alles aus Professional
- ✅ Unbegrenzte Transkription
- ✅ White-Label Option
- ✅ Dedicated Support
- ✅ Custom Integration
    + +## Integration & API + +### Nahtlose Workflows + +**Verbinden Sie Memoro mit Ihren Tools:** +- **Zapier:** 2.000+ App-Integrationen +- **WordPress:** Plugin für automatische Posts +- **Google Drive:** Direkte Dateisynchronisation +- **Slack:** Transkripte ins Team teilen + +### Developer-API + +```javascript +// Einfache API-Integration +const transcription = await memoro.transcribe({ + audioFile: 'interview.mp3', + language: 'de', + format: 'json' +}); +``` + +**Features:** +- REST API & WebSocket +- Echtzeit-Transkription +- Batch-Processing +- Webhook-Benachrichtigungen + +## Häufige Fragen zur KI-Transkription + + + +## Jetzt mit KI-Transkription starten + +
**Bereit für 5x schnellere Transkription?**

Starten Sie jetzt mit 600 kostenlosen Minuten und erleben Sie die Zukunft der Audio-zu-Text Konvertierung.

Keine Kreditkarte erforderlich • 600 Min kostenlos • Jederzeit kündbar
    + +--- + +*Letzte Aktualisierung: Januar 2025 | Alle Preise zzgl. MwSt.* \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/meeting-protokoll-software.mdx b/apps/memoro/apps/landing/src/content/pages/de/meeting-protokoll-software.mdx new file mode 100644 index 000000000..86f725b5e --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/meeting-protokoll-software.mdx @@ -0,0 +1,329 @@ +--- +title: "Meeting-Protokoll Software | Automatische Protokollerstellung mit KI | Memoro" +description: "✓ Automatische Meeting-Protokolle in Minuten ✓ Sprechererkennung für 15+ Personen ✓ 80+ Sprachen ✓ DSGVO-konform ► Sparen Sie 3+ Stunden pro Woche. Jetzt kostenlos testen!" +lang: "de" +type: "landing" +lastUpdated: 2024-12-28 +sections: + hero: + title: "Die Meeting-Protokoll Software, die mitdenkt" + subtitle: "Automatische Protokollerstellung mit KI-Transkription. Konzentrieren Sie sich aufs Gespräch – Memoro erledigt die Dokumentation." + image: "/images/industries/Office-Businessman-Recording-Memoro-AI-App-Transcription.png" + imageAlt: "Memoro Meeting-Protokoll Software - Automatische Aufnahme und Transkription im Büro" + cta: + primary: + text: "Jetzt mit 150 Mana starten" + link: "/de/download" + secondary: + text: "Live-Demo ansehen" + videoId: "u05nEBNy7bk" + problems: + title: "Schluss mit stundenlanger Nacharbeit" + items: + - icon: "⏰" + title: "3+ Stunden pro Woche verschwendet" + description: "Manuelle Protokollerstellung kostet Zeit, die Sie produktiver nutzen könnten." + - icon: "😤" + title: "Unvollständige Notizen" + description: "Während Sie schreiben, verpassen Sie wichtige Gesprächsinhalte." + - icon: "🔍" + title: "Verlorene Informationen" + description: "Handschriftliche Notizen sind schwer durchsuchbar und gehen oft verloren." 
+ solution: + title: "Ihre Lösung: Automatische Meeting-Protokolle mit Memoro" + subtitle: "Von der Aufnahme zum fertigen Protokoll in drei Schritten" + steps: + - number: "1" + title: "Ein Klick genügt" + description: "Starten Sie die Aufnahme zu Beginn des Meetings – egal ob vor Ort oder remote." + icon: "🎙️" + - number: "2" + title: "KI analysiert in Echtzeit" + description: "Automatische Sprechererkennung, Transkription und intelligente Strukturierung." + icon: "🤖" + - number: "3" + title: "Fertiges Protokoll sofort verfügbar" + description: "Strukturierte Zusammenfassung mit Aufgaben, Beschlüssen und Kernpunkten." + icon: "📄" + features: + title: "Funktionen der Meeting-Protokoll Software" + items: + - icon: "👥" + title: "Automatische Sprechererkennung" + description: "Erkennt bis zu 15 verschiedene Sprecher und ordnet Aussagen automatisch zu." + - icon: "🌍" + title: "80+ Sprachen" + description: "Transkription in über 80 Sprachen und Dialekten, inklusive Schweizerdeutsch." + - icon: "📝" + title: "Intelligente Aufgabenerkennung" + description: "Extrahiert automatisch To-Dos, Termine und Verantwortlichkeiten." + - icon: "🔒" + title: "DSGVO-konform & sicher" + description: "Deutsche Server, Ende-zu-Ende-Verschlüsselung, höchste Datenschutzstandards." + - icon: "📱" + title: "Offline-fähig" + description: "Funktioniert auch ohne Internet – perfekt für vertrauliche Meetings." + - icon: "🔄" + title: "Nahtlose Integration" + description: "Funktioniert mit Teams, Zoom, WebEx und allen anderen Meeting-Tools." 
+ comparison: + title: "Meeting-Protokoll Software im Vergleich" + subtitle: "Warum Memoro die beste Wahl für deutsche Unternehmen ist" + competitors: + - name: "Memoro" + features: + - "✓ DSGVO-konform" + - "✓ Deutsche Server" + - "✓ Offline-Modus" + - "✓ 80+ Sprachen" + - "✓ Automatische Aufgaben" + - "✓ Ab 0€/Monat" + highlight: true + - name: "Otter.ai" + features: + - "✗ US-Server" + - "✗ Nur Cloud" + - "✗ Nur online" + - "○ Hauptsächlich Englisch" + - "○ Begrenzt" + - "Ab 16€/Monat" + - name: "Fireflies.ai" + features: + - "✗ US-Server" + - "✗ Datenschutz fraglich" + - "✗ Nur online" + - "○ Wenige Sprachen" + - "✓ Vorhanden" + - "Ab 18€/Monat" + testimonials: + title: "Was Nutzer über unsere Meeting-Protokoll Software sagen" + items: + - quote: "Seit wir Memoro nutzen, spare ich mindestens 10 Stunden pro Woche. Die automatische Aufgabenerkennung ist ein Game-Changer!" + author: "Sandra M." + role: "Projektmanagerin, DAX-Konzern" + rating: 5 + - quote: "Endlich kann ich mich voll auf die Gespräche konzentrieren, statt mitzuschreiben. Die Protokolle sind besser als handgeschriebene." + author: "Thomas K." + role: "Geschäftsführer, Mittelstand" + rating: 5 + - quote: "Die DSGVO-Konformität war für uns entscheidend. Mit Memoro haben wir eine sichere, deutsche Lösung gefunden." + author: "Julia R." 
+ role: "Compliance Officer, Versicherung" + rating: 5 + useCases: + title: "Meeting-Protokoll Software für jeden Anwendungsfall" + cases: + - title: "Projekt-Meetings" + description: "Statusupdates, Meilensteine und Aufgaben automatisch dokumentiert" + icon: "📊" + - title: "Kunden-Gespräche" + description: "Anforderungen und Vereinbarungen rechtssicher festhalten" + icon: "🤝" + - title: "Team-Besprechungen" + description: "Wöchentliche Meetings effizient protokollieren" + icon: "👥" + - title: "Vorstandssitzungen" + description: "Formelle Protokolle mit allen Beschlüssen" + icon: "🏢" + - title: "Workshops & Trainings" + description: "Lerninhalte und Action Items erfassen" + icon: "🎓" + - title: "Remote-Meetings" + description: "Videokonferenzen lückenlos dokumentieren" + icon: "💻" + roi: + title: "Berechnen Sie Ihre Zeitersparnis" + subtitle: "Finden Sie heraus, wie viel Zeit und Geld Sie mit automatischen Meeting-Protokollen sparen" + calculator: + meetings_per_week: 10 + minutes_per_protocol: 30 + hourly_rate: 50 + result: + time_saved: "5 Stunden/Woche" + money_saved: "1.000€/Monat" + roi_text: "Die Investition amortisiert sich bereits nach 3 Tagen" + faq: + title: "Häufige Fragen zur Meeting-Protokoll Software" + items: + - question: "Wie genau ist die automatische Transkription?" + answer: "Memoro erreicht eine Genauigkeit von über 95% bei deutscher Sprache und guten Audiobedingungen. Die KI verbessert sich kontinuierlich durch maschinelles Lernen." + - question: "Funktioniert die Software auch bei Videokonferenzen?" + answer: "Ja, Memoro funktioniert perfekt mit Teams, Zoom, WebEx und allen anderen Videokonferenz-Tools. Die Software erkennt alle Sprecher automatisch." + - question: "Wie schnell ist das Protokoll verfügbar?" + answer: "Bei kurzen Meetings (bis 30 Minuten) ist das Protokoll sofort nach Beendigung verfügbar. Bei längeren Meetings dauert die Verarbeitung etwa 2-5 Minuten." + - question: "Kann ich das Format der Protokolle anpassen?" 
+ answer: "Ja, Sie können aus verschiedenen Vorlagen wählen oder eigene Vorlagen erstellen. Die Protokolle können als Word, PDF oder Markdown exportiert werden." + - question: "Was kostet die Meeting-Protokoll Software?" + answer: "Memoro startet bei 0€/Monat mit 150 kostenlosen Minuten. Für Teams gibt es flexible Pakete ab 5,99€/Nutzer/Monat. Enterprise-Lösungen auf Anfrage." + - question: "Sind meine Daten sicher?" + answer: "Absolut. Memoro ist DSGVO-konform, nutzt deutsche Server und Ende-zu-Ende-Verschlüsselung. Ihre Daten gehören nur Ihnen." + cta: + title: "Starten Sie noch heute mit automatischen Meeting-Protokollen" + subtitle: "150 Mana geschenkt zum Ausprobieren – kostenlos starten" + button: + text: "Jetzt kostenlos starten" + link: "/de/download" + features: + - "✓ 150 Mana geschenkt" + - "✓ Voller Funktionsumfang" + - "✓ DSGVO-konform" + - "✓ Deutscher Support" +--- + +import HeroSection from "../../../components/HeroSection.astro"; +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import ROICalculator from "../../../components/ROICalculator.astro"; + + + +## {frontmatter.sections.problems.title} + +
{frontmatter.sections.problems.items.map((item) => (
  <div key={item.title}>
    <span>{item.icon}</span>
    <h3>{item.title}</h3>
    <p>{item.description}</p>
  </div>

    +
    + ))} +
    + +## {frontmatter.sections.solution.title} + +

{frontmatter.sections.solution.subtitle}

{frontmatter.sections.solution.steps.map((step) => (
  <div key={step.number}>
    <span>{step.number}</span>
    <h3>{step.icon} {step.title}</h3>
    <p>{step.description}</p>
  </div>

    +
    +
    + ))} +
    + +## {frontmatter.sections.features.title} + +
{frontmatter.sections.features.items.map((feature) => (
  <div key={feature.title}>
    <span>{feature.icon}</span>
    <h3>{feature.title}</h3>
    <p>{feature.description}</p>
  </div>

    +
    + ))} +
    + +## {frontmatter.sections.comparison.title} + +

{frontmatter.sections.comparison.subtitle}

<table>
  <thead>
    <tr>
      <th>Funktion</th>
      {frontmatter.sections.comparison.competitors.map((comp) => (
        <th key={comp.name}>{comp.name}</th>
      ))}
    </tr>
  </thead>
  <tbody>
    {[0, 1, 2, 3, 4, 5].map((index) => (
      <tr key={index}>
        <td>
          {index === 0 && "DSGVO & Datenschutz"}
          {index === 1 && "Server-Standort"}
          {index === 2 && "Offline-Modus"}
          {index === 3 && "Sprachunterstützung"}
          {index === 4 && "Aufgabenerkennung"}
          {index === 5 && "Preis"}
        </td>
        {frontmatter.sections.comparison.competitors.map((comp) => (
          <td key={comp.name}>{comp.features[index]}</td>
        ))}
      </tr>
    ))}
  </tbody>
</table>
    + +## {frontmatter.sections.testimonials.title} + +
{frontmatter.sections.testimonials.items.map((testimonial) => (
  <div key={testimonial.author}>
    {[...Array(testimonial.rating)].map((_, i) => (
      <span key={i}>★</span>
    ))}
    <blockquote>"{testimonial.quote}"</blockquote>
    <p>{testimonial.author}</p>
    <p>{testimonial.role}</p>
  </div>

    +
    +
    + ))} +
    + +## {frontmatter.sections.useCases.title} + +
{frontmatter.sections.useCases.cases.map((useCase) => (
  <div key={useCase.title}>
    <span>{useCase.icon}</span>
    <h3>{useCase.title}</h3>
    <p>{useCase.description}</p>
  </div>

    +
    + ))} +
    + + + + + +
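Die im Frontmatter hinterlegte Beispielrechnung des ROI-Rechners (10 Meetings pro Woche, 30 Minuten Protokollaufwand, 50€ Stundensatz) lässt sich als kurzes Skript nachvollziehen. Das Skript ist nur eine illustrative Skizze und kein Teil der ROICalculator-Komponente:

```javascript
// Illustrative Nachrechnung des ROI-Rechner-Beispiels (Werte aus dem Frontmatter)
const meetingsProWoche = 10;
const minutenProProtokoll = 30;
const stundensatz = 50; // €

const stundenProWoche = (meetingsProWoche * minutenProProtokoll) / 60; // 5 h/Woche
const ersparnisProMonat = stundenProWoche * stundensatz * 4;           // 1.000 €/Monat

console.log(`${stundenProWoche} Stunden/Woche, ${ersparnisProMonat}€/Monat`);
```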
<div>
  <h2>{frontmatter.sections.cta.title}</h2>
  <p>{frontmatter.sections.cta.subtitle}</p>
  <a href={frontmatter.sections.cta.button.link}>
    {frontmatter.sections.cta.button.text}
  </a>
  {frontmatter.sections.cta.features.map((feature) => (
    <span key={feature}>{feature}</span>
  ))}
</div>
    \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/otter-ai-alternative.mdx b/apps/memoro/apps/landing/src/content/pages/de/otter-ai-alternative.mdx new file mode 100644 index 000000000..3690ba6f2 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/otter-ai-alternative.mdx @@ -0,0 +1,493 @@ +--- +title: "Otter.ai Alternative für Deutschland 2025 - DSGVO-konform & Made in Germany | Memoro" +description: "Die beste Otter.ai Alternative für deutsche Unternehmen ► 98% Genauigkeit ✓ Deutsche Server ✓ DSGVO-konform ✓ 50+ Sprachen ✓ Jetzt wechseln & sparen!" +keywords: ["otter.ai alternative", "otter.ai alternative deutschland", "otter alternative deutsch", "otter.ai alternative dsgvo", "meeting protokoll software", "transkriptionssoftware deutsch", "otter.ai konkurrenz", "bessere alternative zu otter"] +lang: de +type: comparison +lastUpdated: 2025-01-09 +sections: + hero: + title: "Otter.ai Alternative für Deutschland - 100% DSGVO-konform mit Memoro" + subtitle: "Warum 73% der deutschen Unternehmen von Otter.ai zu Memoro wechseln" + cta: "Kostenlos testen" + features: + title: "Die 5 größten Probleme mit Otter.ai in Deutschland" + items: ["Nur 3 Sprachen", "US-Server", "90% Genauigkeit", "Kein deutscher Support"] +ogImage: "/images/og/otter-alternative.png" +canonical: "https://memoro.ai/de/otter-ai-alternative" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; +import TestimonialCard from '../../../components/TestimonialCard.astro'; +import MigrationGuide from '../../../components/MigrationGuide.astro'; +import ROICalculator from '../../../components/ROICalculator.astro'; + +# Otter.ai Alternative für Deutschland - 100% DSGVO-konform mit Memoro + +
**Warum 73% der deutschen Unternehmen von Otter.ai zu Memoro wechseln**

Erleben Sie überlegene Spracherkennung mit 98% Genauigkeit, deutsche Server und echten DSGVO-Schutz - alles zu einem faireren Preis als Otter.ai.

✓ 100% DSGVO • ✓ Deutsche Server • ✓ 4.9/5 Bewertung

*Memoro Dashboard - Bessere Alternative zu Otter.ai*
    + +## Die 5 größten Probleme mit Otter.ai in Deutschland + +
- **Nur 3 Sprachen:** Otter.ai unterstützt nur Englisch, Spanisch und Französisch. Kein echtes Deutsch, keine Dialekte.
- **US-Server:** Ihre sensiblen Meeting-Daten werden auf amerikanischen Servern gespeichert - ein DSGVO-Risiko.
- **Nur 300 Minuten kostenlos:** Die kostenlose Version ist mit 300 Minuten pro Monat für Teams praktisch unbrauchbar.
- **90% Genauigkeit:** Bei wichtigen Meetings bedeutet eine Fehlerquote von 10% verpasste Details und Missverständnisse.
- **Kein deutscher Support:** Support nur auf Englisch und in US-Zeitzonen - unpraktisch für deutsche Teams.
- **Versteckte Kosten:** $16.99 USD pro Monat + Währungsgebühren + begrenzte Minuten = teurer als gedacht.
    + +## Memoro vs. Otter.ai - Der direkte Vergleich + + + +## Was echte Wechsler sagen + +
    + + + + + + + +
    + +## So einfach ist der Wechsel von Otter.ai zu Memoro + + + +## Exklusive Memoro-Features, die Otter.ai nicht bietet + +
**Was Sie nur bei Memoro bekommen:**

- **Echte deutsche Spracherkennung:** Trainiert auf deutschen Meetings, versteht Fachbegriffe und Dialekte perfekt.
- **Ende-zu-Ende Verschlüsselung:** Ihre Daten sind bereits bei der Aufnahme verschlüsselt - maximale Sicherheit.
- **Offline-Aufnahme & Sync:** Nehmen Sie ohne Internet auf und synchronisieren Sie später - perfekt für unterwegs.
- **Branchenspezifische Vorlagen:** Vorgefertigte Templates für Ihre Branche - von Anwälten bis Zahnärzten.
- **KI-Assistent auf Deutsch:** Fragen Sie die KI auf Deutsch und erhalten Sie präzise Antworten aus Ihren Meetings.
- **ISO 27001 zertifiziert:** Höchste Sicherheitsstandards, von deutschen Auditoren geprüft und zertifiziert.
    + +## ROI: So viel sparen Sie mit dem Wechsel + + + +## Warum Memoro die bessere Wahl für Deutschland ist + +
**Die Vorteile auf einen Blick**

✅ Memoro Vorteile:

- 100% DSGVO-konform mit deutschen Servern
- 98%+ Genauigkeit bei deutschen Gesprächen
- 50+ Sprachen inklusive aller deutschen Dialekte
- Deutscher Support in Ihrer Zeitzone
- Offline-Modus für maximale Flexibilität
- 600 Minuten kostenlos (doppelt so viel)
- Faire Preise in EUR ohne versteckte Kosten

❌ Otter.ai Nachteile:

- US-Server mit DSGVO-Risiken
- Nur 90% Genauigkeit
- Nur 3 Sprachen (kein Deutsch)
- Support nur auf Englisch
- Kein Offline-Modus
- Nur 300 Minuten kostenlos
- Teure USD-Preise + Währungsgebühren
    + +## Häufig gestellte Fragen zum Wechsel + + + +## Überzeugt? Starten Sie jetzt mit Memoro! + +
**Schließen Sie sich 10.000+ deutschen Unternehmen an**

Wechseln Sie heute von Otter.ai zu Memoro und erleben Sie den Unterschied.

98%+ Genauigkeit • 50+ Sprachen • 100% DSGVO • 24/7 Support
    + +## Weitere Otter.ai Alternativen im Vergleich + +Sie möchten auch andere Alternativen kennenlernen? Hier finden Sie weitere Vergleiche: + + \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/presskit.mdx b/apps/memoro/apps/landing/src/content/pages/de/presskit.mdx new file mode 100644 index 000000000..a18925201 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/presskit.mdx @@ -0,0 +1,250 @@ +--- +title: "Pressekit | Memoro" +description: "Alle Ressourcen für Medienvertreter und Partner an einem Ort" +lang: "de" +type: "page" +lastUpdated: 2024-12-10 +sections: + hero: + title: "Pressekit" + subtitle: "Alle Ressourcen für Medienvertreter und Partner an einem Ort" + brandAssets: + title: "Logos und Markenelemente" + items: + - title: "Memoro Logo" + image: "/images/brand/logo.svg" + downloads: + - format: "SVG" + url: "/assets/press/memoro-logo.svg" + - format: "PNG" + url: "/assets/press/memoro-logo.png" + companyInfo: + title: "Unternehmensinformationen" + about: + title: "Über Memoro" + description: "Memoro ist eine innovative mobile Anwendung, die die Art und Weise revolutioniert, wie Menschen Gespräche dokumentieren und Gedanken festhalten. Entwickelt als Antwort auf die Herausforderungen des manuellen Mitschreibens und der Protokollführung, bietet Memoro eine intuitive Lösung für die automatisierte Erfassung, Transkription und Zusammenfassung von gesprochenen Inhalten." 
+ facts: + title: "Fakten & Zahlen" + items: + - "Gründung: 2024" + - "Hauptsitz: Konstanz, Deutschland" + - "Mitarbeiter: 3-5 (Kernteam)" + - "Angemeldete Nutzer: 2.500+" + - "Unterstützte Sprachen: Über 80 Sprachen" + - "Zeitersparnis: 2-6 Stunden pro Woche" + pressContact: + title: "Pressekontakt" + intro: "Für Presseanfragen wenden Sie sich bitte an:" + contact: + name: "Aleksandra Vasileva" + role: "Marketing & Kommunikation" + email: "kontakt@memoro.ai" + phone: "" + callToAction: + title: "Bereit für effizientere Gesprächsdokumentation?" + description: "Schließen Sie sich Hunderten von Nutzern an, die bereits von Memoros KI-gestützter Dokumentationsplattform profitieren." + buttonText: "Jetzt starten" + buttonLink: "https://memoro.ai" +--- + +## {frontmatter.sections.brandAssets.title} + +
{frontmatter.sections.brandAssets.items.map((item) => (
  <div key={item.title}>
    <h3>{item.title}</h3>
    {item.image && <img src={item.image} alt={item.title} />}
    {item.description && <p>{item.description}</p>}
    {item.downloads.map((download) => (
      <a key={download.url} href={download.url}>
        {download.text || download.format}
      </a>
    ))}
  </div>
    +
    + ))} +
    + +## {frontmatter.sections.companyInfo.title} + +
<div>
  <h3>{frontmatter.sections.companyInfo.about.title}</h3>
  <p>{frontmatter.sections.companyInfo.about.description}</p>
</div>
<div>
  <h3>{frontmatter.sections.companyInfo.facts.title}</h3>
  <ul>
    {frontmatter.sections.companyInfo.facts.items.map((item) => (
      <li key={item}>{item}</li>
    ))}
  </ul>
</div>
    + + +## {frontmatter.sections.pressContact.title} + +
<div>
  <p>{frontmatter.sections.pressContact.intro}</p>
  <p>{frontmatter.sections.pressContact.contact.name}</p>
  <p>{frontmatter.sections.pressContact.contact.role}</p>
  <a href={`mailto:${frontmatter.sections.pressContact.contact.email}`}>
    {frontmatter.sections.pressContact.contact.email}
  </a>
</div>
    + +## Alle Kontaktmöglichkeiten + +
**📞 Direkte Kontakte**

- 📧 E-Mail: kontakt@memoro.ai
- 💬 WhatsApp: +41 79 370 88 99
- 🌐 Website: www.memoro.ai
- 📍 Hauptsitz: Münzgasse 19, 78462 Konstanz, Deutschland
    + + {/* Social Media */} + +
    +
    + diff --git a/apps/memoro/apps/landing/src/content/pages/de/prices.mdx b/apps/memoro/apps/landing/src/content/pages/de/prices.mdx new file mode 100644 index 000000000..fc93593e1 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/prices.mdx @@ -0,0 +1,292 @@ +--- +title: "Preise & Kostenlose Testversion - Meeting Software | Memoro" +description: "✓ 14 Tage kostenlos testen ✓ Keine Kreditkarte erforderlich ✓ Ab 0€/Monat ► Flexible Preispläne für automatische Meeting-Protokolle und KI-Transkription" +lang: "de" +type: "page" +lastUpdated: 2025-01-22 +sections: + hero: + title: "Wählen Sie Ihren Mana-Plan" + subtitle: "Monatliche Mana-Pakete für Ihre individuellen Bedürfnisse" + plans: + - id: "free" + name: "Mana Tropfen" + price: + monthly: 0 + yearly: 0 + priceUnit: "" + features: + - "150 Mana pro Monat" + - "Grundlegende Funktionen" + monthlyMana: 150 + canGiftMana: false + cta: "Kostenlos starten" + highlight: false + - id: "Mana_Stream_Small_v1" + name: "Kleiner Mana Stream" + price: + monthly: 5.99 + yearly: 47.99 + priceUnit: "/ Monat" + yearlyBreakdown: "(entspricht 3,99€ / Monat, 33% Rabatt)" + features: + - "600 Mana pro Monat" + - "Mana verschenken möglich" + - "Erweiterte Funktionen" + monthlyMana: 600 + canGiftMana: true + cta: "Kleiner Mana Stream wählen" + highlight: false + - id: "Mana_Stream_Medium_v1" + name: "Mittlerer Mana Stream" + price: + monthly: 14.99 + yearly: 119.99 + priceUnit: "/ Monat" + yearlyBreakdown: "(entspricht 9,99€ / Monat, 33% Rabatt)" + features: + - "1500 Mana pro Monat" + - "Mana verschenken möglich" + - "Prioritäts-Support" + - "Alle Premium-Funktionen" + monthlyMana: 1500 + canGiftMana: true + cta: "Mittlerer Mana Stream wählen" + highlight: false + - id: "Mana_Stream_Large_v1" + name: "Großer Mana Stream" + price: + monthly: 29.99 + yearly: 239.99 + priceUnit: "/ Monat" + yearlyBreakdown: "(entspricht 19,99€ / Monat, 33% Rabatt)" + features: + - "3000 Mana pro Monat" + - "Mana verschenken möglich" + - 
"Premium-Support" + - "Erweiterte Analysen" + - "Team-Funktionen" + monthlyMana: 3000 + canGiftMana: true + cta: "Großer Mana Stream wählen" + highlight: false + - id: "Mana_Stream_Giant_v1" + name: "Riesiger Mana Stream" + price: + monthly: 49.99 + yearly: 399.99 + priceUnit: "/ Monat" + yearlyBreakdown: "(entspricht 33,33€ / Monat, 33% Rabatt)" + features: + - "5000 Mana pro Monat" + - "Mana verschenken möglich" + - "VIP-Support" + - "Alle Funktionen" + - "Enterprise-Features" + - "API-Zugang" + monthlyMana: 5000 + canGiftMana: true + cta: "Riesiger Mana Stream wählen" + highlight: false + manaPotions: + title: "Mana Tränke" + subtitle: "Einmalige Mana-Pakete für zusätzliche Power" + items: + - id: "Mana_Potion_Small_v1" + name: "Kleiner Mana Trank" + manaAmount: 350 + price: 4.99 + popular: false + - id: "Mana_Potion_Medium_v1" + name: "Mittlerer Mana Trank" + manaAmount: 700 + price: 9.99 + popular: false + - id: "Mana_Potion_Large_v1" + name: "Großer Mana Trank" + manaAmount: 1400 + price: 19.99 + popular: false + - id: "Mana_Potion_Giant_v2" + name: "Riesiger Mana Trank" + manaAmount: 2800 + price: 39.99 + popular: false + comparison: + title: "Detaillierter Vergleich" + faq: + title: "Häufig gestellte Fragen" + items: + - question: "Was ist Mana und wie funktioniert es?" + answer: "Mana ist unsere flexible Währung für die Nutzung von Memoro. Jede Aktion (Aufnahme, Transkription, KI-Analyse) verbraucht Mana. Je nach Ihrem Plan erhalten Sie monatlich ein festes Kontingent an Mana." + - question: "Was passiert mit ungenutztem Mana?" + answer: "Ungenutztes Mana verfällt am Ende des Monats. Ihr Kontingent wird jeden Monat komplett erneuert. Mit Mana-Tränken können Sie zusätzliches Mana für den aktuellen Monat erwerben." + - question: "Kann ich Mana an andere Nutzer verschenken?" + answer: "Ja, ab dem Kleinen Mana Stream Plan können Sie Mana an andere Memoro-Nutzer verschenken. Dies ist ideal für Teams oder wenn Sie Freunden aushelfen möchten." 
+ - question: "Wie spare ich mit der jährlichen Abrechnung?" + answer: "Mit einem Jahresabonnement sparen Sie 33% im Vergleich zur monatlichen Abrechnung. Der Betrag wird jährlich im Voraus berechnet." + - question: "Kann ich zwischen den Plänen wechseln?" + answer: "Ja, Sie können jederzeit zu einem höheren oder niedrigeren Plan wechseln. Bei einem Upgrade wird Ihr Mana-Kontingent entsprechend angepasst. Die Abrechnung erfolgt anteilig." + - question: "Was sind Mana Tränke?" + answer: "Mana Tränke sind einmalige Mana-Pakete, die Sie zusätzlich zu Ihrem monatlichen Kontingent kaufen können. Sie sind ideal, wenn Sie in einem Monat mehr Mana benötigen, ohne Ihr Abo zu ändern." + callToAction: + title: "Starten Sie noch heute mit Memoro" + description: "Erleben Sie die Zukunft der KI-gestützten Dokumentation. Beginnen Sie kostenlos mit dem Mana Tropfen Plan!" + buttonText: "Jetzt kostenlos starten" + buttonLink: "/de/download" +--- + +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import PricingToggle from "../../../components/PricingToggle.astro"; +import PricingBreakdown from "../../../components/PricingBreakdown.astro"; + +
    +

    {frontmatter.sections.hero.title}

    +

    {frontmatter.sections.hero.subtitle}

    +
    + + + + + +
    +
    + {frontmatter.sections.plans.map((plan) => ( +
    + {plan.highlight && ( +
    + Beliebt +
    + )} +
    +

    {plan.name}

    +
    +
    + + {plan.price.monthly === 0 ? 'Kostenlos' : `${plan.price.monthly}€`} + + {plan.priceUnit && / Monat} +
    +
    + {plan.yearlyBreakdown && ( + + )} +
    + +
    +
    +

    Monatliches Mana

    +

    {plan.monthlyMana}

    +
    +
    + + + Herunterladen + +
    + ))} +
    +
    + +
    +
    +

    {frontmatter.sections.manaPotions.title}

    +

    1 Mana = 1 Cent

    +

    {frontmatter.sections.manaPotions.subtitle}

    +
    + +
    + {frontmatter.sections.manaPotions.items.map((potion) => ( +
    + {potion.popular && ( +
    + Beliebt +
    + )} +

    {potion.name}

    +

    {potion.manaAmount}

    +

    Mana

    +

    {potion.price}€

    +
    + ))} +
    +
    + +
    +

    {frontmatter.sections.comparison.title}

    +
    + + + + + {frontmatter.sections.plans.map((plan) => ( + + ))} + + + + + + {frontmatter.sections.plans.map((plan) => ( + + ))} + + + + {frontmatter.sections.plans.map((plan) => ( + + ))} + + + + {frontmatter.sections.plans.map((plan) => ( + + ))} + + +
    Feature{plan.name}
    Preis + + {plan.price.monthly === 0 ? '✓' : `${plan.price.monthly}€ / Monat`} + +
    Monatliches Mana{plan.monthlyMana}
    Mana verschenken + {plan.canGiftMana ? ( + + + + ) : ( + + + + )} +
    +
    +
    + + + + diff --git a/apps/memoro/apps/landing/src/content/pages/de/privacy.mdx b/apps/memoro/apps/landing/src/content/pages/de/privacy.mdx new file mode 100644 index 000000000..8ec69ed7b --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/privacy.mdx @@ -0,0 +1,341 @@ +--- +title: "Datenschutz | Memoro" +description: "Datenschutzerklärung der Memoro GmbH" +lang: "de" +type: "legal" +lastUpdated: 2024-08-27 +sections: + hero: + title: "Datenschutzerklärung" + subtitle: "Informationen zum Schutz Ihrer Daten" +--- + +## Datenschutzerklärung für die Memoro Website + +### 1. Einleitung + +Diese Datenschutzerklärung informiert Sie über die Art, den Umfang und Zweck der Verarbeitung personenbezogener Daten auf unserer Website www.memoro.ai. Wir nehmen den Schutz Ihrer persönlichen Daten sehr ernst und behandeln Ihre personenbezogenen Daten vertraulich und entsprechend der gesetzlichen Datenschutzvorschriften sowie dieser Datenschutzerklärung. + +### 2. Verantwortliche Stelle + +Verantwortlich für die Datenverarbeitung auf dieser Website ist: + +Memoro GmbH +Münzgasse 19 +78462 Konstanz +Telefon: 0049 176 444 343 85 +E-Mail: [kontakt@memoro.ai](mailto:kontakt@memoro.ai) + +Redaktionell verantwortlich: Till Schneider + +### 3. Arten der verarbeiteten Daten + +Wir verarbeiten personenbezogene Daten, die wir im Rahmen Ihrer Nutzung unserer Website erheben. Zu den verarbeiteten Daten gehören: + +- Nutzungsdaten (z.B. besuchte Webseiten, Zugriffszeiten) +- Meta-/Kommunikationsdaten (z.B. Geräte-Informationen, IP-Adressen) + +### 4. Zweck der Datenverarbeitung + +Wir verarbeiten Ihre personenbezogenen Daten zu folgenden Zwecken: + +- Bereitstellung und Optimierung unserer Website +- Durchführung von Analysen und Statistiken +- Marketing und Werbung +- Verbesserung der Nutzererfahrung + +### 5. Rechtsgrundlage der Verarbeitung + +Die Verarbeitung Ihrer personenbezogenen Daten erfolgt auf Grundlage Ihrer Einwilligung (Art. 6 Abs. 1 lit. 
a DSGVO) und/oder unserer berechtigten Interessen (Art. 6 Abs. 1 lit. f DSGVO), welche darin bestehen, unser Angebot zu optimieren und die Nutzererfahrung zu verbessern. + +### 6. Empfänger der Daten + +Ihre Daten können an folgende Empfänger weitergegeben werden: + +- Dienstleister und Auftragsverarbeiter, die für uns tätig sind + +### 7. Dauer der Speicherung + +Wir speichern Ihre personenbezogenen Daten nur so lange, wie es für die Zwecke, für die sie erhoben wurden, erforderlich ist oder wie es gesetzlich vorgeschrieben ist. + +### 8. Ihre Rechte + +Sie haben das Recht: + +- Auskunft über Ihre von uns verarbeiteten personenbezogenen Daten zu verlangen +- Die Berichtigung unrichtiger oder Vervollständigung Ihrer bei uns gespeicherten personenbezogenen Daten zu verlangen +- Die Löschung Ihrer bei uns gespeicherten personenbezogenen Daten zu verlangen +- Die Einschränkung der Verarbeitung Ihrer personenbezogenen Daten zu verlangen +- Ihre Einwilligung jederzeit zu widerrufen +- Sich bei einer Aufsichtsbehörde zu beschweren + +### 9. Einsatz von Analyse- und Tracking-Tools + +Wir setzen auf unserer Website verschiedene Analyse- und Tracking-Tools ein, um die Nutzung unserer Website zu analysieren und zu verbessern. Im Folgenden informieren wir Sie über diese Tools: + +#### 9.1. Vercel Analytics + +Wir nutzen Vercel Analytics, einen Webanalysedienst der Vercel Inc. Vercel Analytics verwendet keine Cookies und sammelt keine personenbezogenen Daten. Es werden lediglich anonymisierte Nutzungsdaten erfasst, um die Performance und Nutzung unserer Website zu verbessern. + +#### 9.2. Google Analytics + +Wir nutzen Google Analytics, einen Webanalysedienst der Google LLC. Google Analytics verwendet Cookies, die eine Analyse der Benutzung der Website durch Sie ermöglichen. Die durch das Cookie erzeugten Informationen über Ihre Benutzung dieser Website werden in der Regel an einen Server von Google in den USA übertragen und dort gespeichert. 
+
+Wir haben die IP-Anonymisierung aktiviert. Dadurch wird Ihre IP-Adresse von Google innerhalb von Mitgliedstaaten der Europäischen Union oder in anderen Vertragsstaaten des Abkommens über den Europäischen Wirtschaftsraum zuvor gekürzt.
+
+Mehr Informationen zum Umgang mit Nutzerdaten bei Google Analytics finden Sie in der Datenschutzerklärung von Google: [https://policies.google.com/privacy](https://policies.google.com/privacy)
+
+#### 9.3. Plausible Analytics
+
+Wir verwenden Plausible Analytics, einen datenschutzfreundlichen Webanalysedienst. Plausible Analytics sammelt keine personenbezogenen Daten und verwendet keine Cookies. Es werden lediglich anonymisierte Nutzungsdaten erfasst, um die Performance und Nutzung unserer Website zu analysieren.
+
+Weitere Informationen zu Plausible Analytics und deren Datenschutzpraktiken finden Sie unter: [https://plausible.io/data-policy](https://plausible.io/data-policy)
+
+#### 9.4. Hotjar
+
+Wir nutzen Hotjar, um die Bedürfnisse unserer Nutzer besser zu verstehen und das Angebot auf dieser Website zu optimieren. Mithilfe der Technologie von Hotjar bekommen wir ein besseres Verständnis von den Erfahrungen unserer Nutzer (z.B. wie viel Zeit Nutzer auf welchen Seiten verbringen, welche Links sie anklicken, was sie mögen und was nicht etc.) und das hilft uns, unser Angebot am Feedback unserer Nutzer auszurichten.
+
+Hotjar arbeitet mit Cookies und anderen Technologien, um Informationen über das Verhalten unserer Nutzer und über ihre Endgeräte zu sammeln (insbesondere IP-Adresse des Geräts (wird nur in anonymisierter Form erfasst und gespeichert), Bildschirmgröße, Gerätetyp (Unique Device Identifiers), Informationen über den verwendeten Browser, Standort (nur Land), zum Anzeigen unserer Website bevorzugte Sprache).
+
+Weitere Informationen finden Sie in der Datenschutzerklärung von Hotjar: [https://www.hotjar.com/legal/policies/privacy](https://www.hotjar.com/legal/policies/privacy)
+
+#### 9.5.
PostHog + +Wir verwenden PostHog, eine Open-Source-Plattform für Produktanalysen. PostHog sammelt Informationen über die Nutzung unserer Website, einschließlich Seitenaufrufe, Klicks und Benutzerinteraktionen. Diese Daten helfen uns, unser Produkt zu verbessern und die Benutzererfahrung zu optimieren. + +PostHog verwendet Cookies, um Benutzer über mehrere Besuche hinweg zu identifizieren. Sie können die Verwendung von Cookies in Ihren Browsereinstellungen deaktivieren. + +Wir nutzen die EU-Cloud von PostHog, was bedeutet, dass alle von PostHog gesammelten Daten innerhalb der Europäischen Union verarbeitet und gespeichert werden. Dies trägt dazu bei, die Einhaltung der Datenschutz-Grundverordnung (DSGVO) zu gewährleisten. + +Weitere Informationen zu den Datenschutzpraktiken von PostHog finden Sie unter: [https://posthog.com/privacy](https://posthog.com/privacy) + +### 10. Opt-Out-Möglichkeiten + +Sie haben jederzeit die Möglichkeit, der Datenerfassung durch die oben genannten Dienste zu widersprechen: + +- Für Google Analytics können Sie das Browser-Add-on zur Deaktivierung von Google Analytics verwenden: [https://tools.google.com/dlpage/gaoptout](https://tools.google.com/dlpage/gaoptout) +- Für Hotjar können Sie das Opt-Out hier setzen: [https://www.hotjar.com/legal/compliance/opt-out](https://www.hotjar.com/legal/compliance/opt-out) +- Für PostHog können Sie das Tracking in Ihren Kontoeinstellungen deaktivieren oder Cookies in Ihrem Browser blockieren + +### 11. Änderungen dieser Datenschutzerklärung + +Wir behalten uns vor, diese Datenschutzerklärung anzupassen, damit sie stets den aktuellen rechtlichen Anforderungen entspricht oder um Änderungen unserer Leistungen in der Datenschutzerklärung umzusetzen, z.B. bei der Einführung neuer Services. Für Ihren erneuten Besuch gilt dann die neue Datenschutzerklärung. + +### 12. 
Fragen zum Datenschutz + +Wenn Sie Fragen zum Datenschutz haben, schreiben Sie uns bitte eine E-Mail oder wenden Sie sich direkt an die für den Datenschutz verantwortliche Person in unserer Organisation: + +Memoro GmbH +z.Hd. Datenschutzbeauftragter +Münzgasse 19 +78462 Konstanz +E-Mail: [datenschutz@memoro.ai](mailto:datenschutz@memoro.ai) + +Stand: 15.12.2024 + +--- + +## Datenschutzerklärung für die Memoro App + +### Einleitung + +Wir freuen uns sehr über Ihr Interesse an unserer App. Datenschutz hat einen besonders hohen Stellenwert für die Geschäftsleitung der Memoro GmbH. Diese Datenschutzerklärung soll Sie über die Art, den Umfang und den Zweck der Erhebung und Verwendung personenbezogener Daten bei der Nutzung unserer App informieren. + +**Unser Versprechen**: Wir werden Ihre Daten niemals einsehen oder verkaufen. Wir setzen gezielt auf Lösungen, die höchsten europäischen Datenschutzstandards entsprechen und reduzieren stetig die Abhängigkeit von außereuropäischen Anbietern. + +### 1. Verantwortlicher + +Verantwortlicher im Sinne der Datenschutz-Grundverordnung (DSGVO) ist die: + +Memoro GmbH +Münzgasse 19 +78462 Konstanz +Telefon: 0049 176 444 343 85 +E-Mail: [kontakt@memoro.ai](mailto:kontakt@memoro.ai) + +### 2. Begriffsbestimmungen + +Diese Datenschutzerklärung beruht auf den Begrifflichkeiten der Datenschutz-Grundverordnung (DSGVO). Unsere Datenschutzerklärung soll sowohl für die Öffentlichkeit als auch für unsere Kunden und Geschäftspartner einfach lesbar und verständlich sein. Um dies zu gewährleisten, möchten wir vorab die verwendeten Begrifflichkeiten erläutern: + +- **personenbezogene Daten**: Alle Informationen, die sich auf eine identifizierte oder identifizierbare natürliche Person beziehen. +- **betroffene Person**: Jede identifizierte oder identifizierbare natürliche Person, deren personenbezogene Daten von dem Verantwortlichen verarbeitet werden. 
+- **Verarbeitung**: Jeder mit oder ohne Hilfe automatisierter Verfahren ausgeführte Vorgang im Zusammenhang mit personenbezogenen Daten. +- **Einschränkung der Verarbeitung**: Die Markierung gespeicherter personenbezogener Daten mit dem Ziel, ihre künftige Verarbeitung einzuschränken. +- **Profiling**: Jede Art der automatisierten Verarbeitung personenbezogener Daten zur Bewertung bestimmter persönlicher Aspekte. +- **Pseudonymisierung**: Die Verarbeitung personenbezogener Daten in einer Weise, auf welche die personenbezogenen Daten ohne Hinzuziehung zusätzlicher Informationen nicht mehr einer spezifischen betroffenen Person zugeordnet werden können. +- **Verantwortlicher**: Die natürliche oder juristische Person, die allein oder gemeinsam mit anderen über die Zwecke und Mittel der Verarbeitung von personenbezogenen Daten entscheidet. +- **Auftragsverarbeiter**: Eine natürliche oder juristische Person, die personenbezogene Daten im Auftrag des Verantwortlichen verarbeitet. +- **Empfänger**: Eine natürliche oder juristische Person, der personenbezogene Daten offengelegt werden. +- **Dritter**: Eine natürliche oder juristische Person außer der betroffenen Person, dem Verantwortlichen, dem Auftragsverarbeiter und den Personen, die unter der unmittelbaren Verantwortung des Verantwortlichen oder des Auftragsverarbeiters befugt sind, die personenbezogenen Daten zu verarbeiten. +- **Einwilligung**: Jede freiwillig für den bestimmten Fall in informierter Weise und unmissverständlich abgegebene Willensbekundung der betroffenen Person, mit der diese zu verstehen gibt, dass sie mit der Verarbeitung der sie betreffenden personenbezogenen Daten einverstanden ist. + +### 3. Zwecke der Datenverarbeitung + +#### 3.1. Verarbeitung von Sprachaufnahmen, Transkriptionen und Zusammenfassungen + +**Zweck der Datenverarbeitung**: Die Verarbeitung Ihrer Sprachaufnahmen erfolgt zum Zweck der Transkription und anschließenden Erstellung von Zusammenfassungen ("Memories"). 
Diese Datenverarbeitung erfolgt ausschließlich zu Ihrem persönlichen Gebrauch und die Ergebnisse werden nur Ihnen zur Verfügung gestellt. + +**Art der verarbeiteten Daten**: +- Sprachaufnahmen +- Transkriptionen der Sprachaufnahmen +- Zusammenfassungen der Transkriptionen ("Memories") + +**Der Weg Ihrer Daten**: +1. **Aufnahme**: Die Audiodatei wird zunächst sicher auf Ihrem Endgerät gespeichert +2. **Upload**: Verschlüsselte Übertragung zu Supabase (Frankfurt, Deutschland) +3. **Transkription**: Verarbeitung durch Microsoft Azure (Schweden, EU) +4. **Konvertierung** (bei Bedarf): Google Cloud (Frankfurt, Deutschland) +5. **Analyse**: Erstellung von "Memories" durch Google Gemini (Belgien, EU) oder Azure OpenAI (Schweden, EU) +6. **Speicherung**: Finale Analysen in Supabase (Frankfurt, Deutschland) + +**Hinweis zur Aufnahme anderer Personen**: Bitte beachten Sie, dass die Aufnahme von Sprachaufnahmen anderer Personen ohne deren ausdrückliche Einwilligung gegen die DSGVO verstößt. Sie sind verpflichtet, sicherzustellen, dass alle aufgenommenen Personen ihre Einwilligung zur Verarbeitung der Sprachaufnahmen gegeben haben. Es liegt in Ihrer Verantwortung, diese Einwilligung einzuholen und nachzuweisen. + +#### 3.2. Verarbeitung von Nutzungsdaten + +**Zweck der Datenverarbeitung**: Die Nutzungsdaten werden erhoben, um die Funktionalität und Benutzerfreundlichkeit unserer App zu verbessern. + +**Art der verarbeiteten Daten**: +- IP-Adresse des Mobilgeräts +- Gerätetyp +- Eindeutige Gerätekennung +- Betriebssystem des Mobilgeräts +- Typ des mobilen Internetbrowsers +- Eindeutige Gerätekennungen +- Diagnosedaten + +### 4. Rechtsgrundlagen der Verarbeitung + +- Art. 6 I lit. a DSGVO dient unserem Unternehmen als Rechtsgrundlage für Verarbeitungsvorgänge, bei denen wir eine Einwilligung für einen bestimmten Verarbeitungszweck einholen. 
+- Ist die Verarbeitung personenbezogener Daten zur Erfüllung eines Vertrags, dessen Vertragspartei die betroffene Person ist, erforderlich, wie dies beispielsweise bei Verarbeitungsvorgängen der Fall ist, die für eine Lieferung von Waren oder die Erbringung einer sonstigen Leistung oder Gegenleistung notwendig sind, so beruht die Verarbeitung auf Art. 6 I lit. b DSGVO. +- Unterliegt unser Unternehmen einer rechtlichen Verpflichtung, durch welche eine Verarbeitung von personenbezogenen Daten erforderlich wird, wie beispielsweise zur Erfüllung steuerlicher Pflichten, so basiert die Verarbeitung auf Art. 6 I lit. c DSGVO. +- In seltenen Fällen könnte die Verarbeitung von personenbezogenen Daten erforderlich werden, um lebenswichtige Interessen der betroffenen Person oder einer anderen natürlichen Person zu schützen. Dann würde die Verarbeitung auf Art. 6 I lit. d DSGVO beruhen. +- Letztlich könnten Verarbeitungsvorgänge auf Art. 6 I lit. f DSGVO beruhen. Auf dieser Rechtsgrundlage basieren Verarbeitungsvorgänge, die von keiner der vorgenannten Rechtsgrundlagen erfasst werden, wenn die Verarbeitung zur Wahrung eines berechtigten Interesses unseres Unternehmens oder eines Dritten erforderlich ist. +- Falls besondere Kategorien personenbezogener Daten gemäß Artikel 9 DSGVO verarbeitet werden, erfolgt die Verarbeitung ebenfalls auf Grundlage Ihrer ausdrücklichen Einwilligung gemäß Artikel 9 Absatz 2 Buchstabe a DSGVO. + +### 5. Datensicherheit + +Wir verwenden umfangreiche technische und organisatorische Maßnahmen zum Schutz Ihrer Daten: + +- **Verschlüsselung**: AES-256 für gespeicherte Daten, TLS 1.2/1.3 für Datenübertragung +- **Zugriffskontrolle**: Multi-Faktor-Authentifizierung, rollenbasierte Berechtigungen +- **Backup-Strategie**: 3-2-1-Backup-Strategie mit täglichen verschlüsselten Backups +- **Zertifizierungen**: Unsere Dienstleister sind SOC 2 Type II, ISO 27001 und DSGVO-konform + +### 6. 
Dauer der Speicherung + +Das Kriterium für die Dauer der Speicherung von personenbezogenen Daten ist die jeweilige gesetzliche Aufbewahrungsfrist. Nach Ablauf der Frist werden die entsprechenden Daten routinemäßig gelöscht, sofern sie nicht mehr zur Vertragserfüllung erforderlich sind. + +**Speicher- und Löschfristen**: +- **Inhaltsdaten** (Aufnahmen, Transkripte, Memories): Solange das Nutzerkonto besteht; unverzügliche Löschung bei Nutzeranfrage +- **Account-Daten**: Löschung innerhalb von 30 Tagen nach Löschanfrage +- **Technische Protokolldaten**: Maximal 90 Tage +- **Produktanalysedaten** (PostHog): Maximal 12 Monate +- **Backups**: Maximal 30 Tage Vorhaltezeit +- **KI-Verarbeitungscache** (Google/Azure): Automatische Löschung nach maximal 30 Tagen + +**Sonderregelungen für Organisationskunden**: Individuelle automatische Löschfristen können im Rahmen eines Auftragsverarbeitungsvertrags (AVV) vereinbart werden. + +### 7. Gesetzliche oder vertragliche Vorschriften zur Bereitstellung der personenbezogenen Daten + +Wir klären Sie darüber auf, dass die Bereitstellung personenbezogener Daten zum Teil gesetzlich vorgeschrieben ist (z.B. Steuervorschriften) oder sich auch aus vertraglichen Regelungen (z.B. Angaben zum Vertragspartner) ergeben kann. Mitunter kann es zu einem Vertragsschluss erforderlich sein, dass eine betroffene Person uns personenbezogene Daten zur Verfügung stellt, die in der Folge durch uns verarbeitet werden müssen. Eine Nichtbereitstellung der personenbezogenen Daten hätte zur Folge, dass der Vertrag mit dem Betroffenen nicht geschlossen werden könnte. + +### 8. Service Provider + +Wir beschäftigen Drittunternehmen und Einzelpersonen, um unsere App zu unterstützen („Service Provider"), Dienstleistungen in unserem Auftrag zu erbringen, servicebezogene Dienstleistungen durchzuführen oder uns bei der Analyse der Nutzung unserer App zu unterstützen. 
+ +Diese Dritten haben nur Zugriff auf Ihre personenbezogenen Daten, um diese Aufgaben in unserem Auftrag zu erledigen, und sind verpflichtet, diese nicht für andere Zwecke zu verwenden oder offenzulegen. + +### 9. Nutzung von Cloud-Diensten und Datenverarbeitung + +Wir nutzen ausgewählte Cloud-Dienste, um unsere Datenverarbeitung effizient und sicher zu gestalten. Alle Dienstleister sind DSGVO-konform und fungieren als Auftragsverarbeiter gemäß Art. 28 DSGVO. + +#### 9.1. Supabase (Backend & Datenbank) +**Serverstandort**: Frankfurt, Deutschland +**Zweck**: Speicherung aller Inhaltsdaten, Account-Daten und Authentifizierung +**Compliance**: SOC 2 Type II zertifiziert, DSGVO-konform +**Hinweis**: Supabase hat seinen Sitz in den USA. Ein Zugriff aus den USA kann in eng begrenzten Fällen (Support, Wartung) erfolgen. Dies ist durch Standardvertragsklauseln (SCCs) abgesichert. + +#### 9.2. Microsoft Azure +**Serverstandort**: Schweden, EU +**Zweck**: Sprachtranskription (Azure Speech) und KI-Analyse (Azure OpenAI) +**Compliance**: ISO 27001, SOC 1/2/3, DSGVO-konform +**Garantie**: Keine Nutzung Ihrer Daten für Modelltraining; Löschung nach max. 30 Tagen + +#### 9.3. Google Cloud +**Serverstandorte**: +- Frankfurt, Deutschland (Dateikonvertierung) +- Belgien, EU (Google Gemini KI-Analyse) +**Compliance**: ISO 27001, SOC 1/2/3, DSGVO-konform +**Garantie**: Keine Nutzung Ihrer Daten für Modelltraining; Löschung nach max. 30 Tagen + +#### 9.4. PostHog (Produktanalyse) +**Serverstandort**: Frankfurt, Deutschland (EU-Hosting) +**Zweck**: Anonymisierte Nutzungsanalyse zur Verbesserung der App +**Compliance**: SOC 2 Type II, DSGVO-konform +**Besonderheit**: Für Organisationskunden vollständig deaktivierbar + +#### 9.5. 
Firebase-Dienste
+
+Firebase (Google Ireland Limited) wird für folgende Funktionen genutzt:
+- Firebase Analytics: zur Analyse des Nutzerverhaltens
+- Firebase Cloud Messaging: zum Versenden von Push-Benachrichtigungen
+- Firebase Realtime Database: zur Speicherung und Synchronisierung von Daten
+- Firebase Storage: zur Sicherung und Speicherung von Medien
+- Firebase Crashlytics: zur Erkennung und Analyse von App-Fehlern
+
+Alle Firebase-Dienste sind DSGVO-konform und die Datenverarbeitung erfolgt innerhalb der EU.
+
+### 10. Transparenzhinweis zu internationalen Datentransfers
+
+Obwohl Ihre Daten physisch in der EU gespeichert und verarbeitet werden, möchten wir Sie transparent über mögliche internationale Aspekte informieren:
+
+- **US-Dienstleister**: Einige unserer Auftragsverarbeiter (Supabase, PostHog) haben ihren Hauptsitz in den USA
+- **Schutzmaßnahmen**: Alle Datentransfers sind durch EU-Standardvertragsklauseln (SCCs) und zusätzliche technische Maßnahmen abgesichert
+- **Restrisiko**: Ein theoretischer Zugriff durch US-Behörden (z.B. via CLOUD Act) kann rechtlich nicht vollständig ausgeschlossen werden
+- **Ihre Kontrolle**: Sie können jederzeit die Löschung Ihrer Daten verlangen
+
+### 11. Nutzung von Google Analytics
+
+Google Analytics ist ein Webanalysedienst, der von Google angeboten wird und der den Website-Traffic nachverfolgt und darüber berichtet. Google nutzt die gesammelten Daten, um die Nutzung unserer App zu überwachen und zu analysieren. Diese Daten werden mit anderen Google-Diensten geteilt. Google kann die gesammelten Daten nutzen, um die Anzeigen in seinem eigenen Werbenetzwerk zu kontextualisieren und zu personalisieren.
+ +Für weitere Informationen zu den Datenschutzpraktiken von Google besuchen Sie bitte die Google Datenschutz- und Nutzungsbedingungen-Webseite: [Google Privacy & Terms](https://policies.google.com/privacy) + +Wir empfehlen Ihnen außerdem, die Datenschutzbestimmungen von Google zum Schutz Ihrer Daten zu überprüfen: [Google Analytics Safeguarding Your Data](https://support.google.com/analytics/answer/6004245). + +### 12. Bestehen einer automatisierten Entscheidungsfindung + +Als verantwortungsbewusstes Unternehmen verzichten wir auf eine automatische Entscheidungsfindung oder ein Profiling. + +### 13. Rechte der betroffenen Person + +Sie haben das Recht: + +- auf Auskunft über die bei uns über Sie gespeicherten personenbezogenen Daten (Artikel 15 DSGVO), +- auf Berichtigung unrichtiger Daten (Artikel 16 DSGVO), +- auf Löschung (Artikel 17 DSGVO), +- auf Einschränkung der Verarbeitung (Artikel 18 DSGVO), +- auf Datenübertragbarkeit (Artikel 20 DSGVO), +- auf Widerspruch gegen die Verarbeitung (Artikel 21 DSGVO), +- Ihre Einwilligung jederzeit zu widerrufen, ohne dass die Rechtmäßigkeit der aufgrund der Einwilligung bis zum Widerruf erfolgten Verarbeitung berührt wird (Artikel 7 Absatz 3 DSGVO), +- sich bei einer Aufsichtsbehörde zu beschweren (Artikel 77 DSGVO). + +**Widerruf der Einwilligung**: + +Sie können Ihre Einwilligung zur Verarbeitung Ihrer personenbezogenen Daten jederzeit widerrufen. Der Widerruf kann schriftlich oder per E-Mail an die oben genannten Kontaktdaten erfolgen. Ihre Daten werden nach dem Widerruf unverzüglich gelöscht, sofern keine gesetzlichen Aufbewahrungspflichten entgegenstehen. + +### 14. Besondere Hinweise für Organisationskunden + +Für Unternehmens- und Organisationskunden bieten wir maßgeschneiderte Datenschutzlösungen: + +- **Individuelle Auftragsverarbeitungsverträge (AVV)** gemäß Art. 28 DSGVO +- **Automatische Löschfristen** nach Ihren Compliance-Anforderungen +- **Deaktivierung von Analysediensten** (z.B. 
PostHog) auf Wunsch +- **Angepasste Datenverarbeitungsprozesse** nach Ihren Vorgaben + +Kontaktieren Sie uns unter [kontakt@memoro.ai](mailto:kontakt@memoro.ai) für weitere Informationen. + +### 15. Änderungen der Datenschutzerklärung + +Wir behalten uns vor, diese Datenschutzerklärung bei Bedarf zu aktualisieren. Über wesentliche Änderungen informieren wir Sie durch einen Hinweis in der App. + +Stand: 15.12.2024 \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/sprachaufnahme-app-business.mdx b/apps/memoro/apps/landing/src/content/pages/de/sprachaufnahme-app-business.mdx new file mode 100644 index 000000000..f81a5657d --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/sprachaufnahme-app-business.mdx @@ -0,0 +1,576 @@ +--- +title: "Sprachaufnahme App für Business 2025 - Professionelle Audio-Dokumentation | Memoro" +description: "Die Business Sprachaufnahme App für professionelle Dokumentation ► Offline-Modus ✓ Automatische Transkription ✓ DSGVO-konform ✓ Für Außendienst & Teams" +keywords: ["sprachaufnahme app business", "business sprachaufnahme", "diktiergerät app unternehmen", "audio aufnahme business", "sprachmemo business", "voice recording app business", "geschäftliche sprachaufnahme", "mobile dokumentation"] +lang: de +type: product +lastUpdated: 2025-01-09 +sections: + hero: + title: "Sprachaufnahme App für Business - Immer & überall professionell dokumentieren" + subtitle: "Von Außendienst bis Vorstand - Erfassen Sie wichtige Gespräche und Ideen professionell" + cta: "App kostenlos testen" + features: + title: "Warum Memoro die führende Business Sprachaufnahme App ist" + items: ["Offline-Aufnahme", "Automatische Transkription", "Ende-zu-Ende Verschlüsselung", "Team-Integration"] +ogImage: "/images/og/sprachaufnahme-app-business.png" +canonical: "https://memoro.ai/de/sprachaufnahme-app-business" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from 
'../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; + +
+# Business Sprachaufnahme App - Professionell dokumentieren, überall
+
+Erfassen Sie Kundentermine, Ideen und wichtige Gespräche professionell.
+Mit Offline-Modus, automatischer Transkription und höchster Sicherheit.
+
+- Funktioniert offline
+- Verschlüsselt & sicher
+- Sofortige Transkription
    + +## Das Problem mit herkömmlichen Sprachaufnahme Apps + +**Unprofessionell, unsicher, unbrauchbar:** Die meisten Sprachaufnahme Apps sind für private Notizen gedacht. Im Business-Kontext fehlen wichtige Funktionen wie Verschlüsselung, Transkription und Team-Integration. + +### Die 6 größten Probleme herkömmlicher Voice-Apps: + +
+- **🔓 Keine Verschlüsselung:** Sensible Business-Inhalte ungeschützt gespeichert
+- **📱 Nur Audio-Dateien:** Keine automatische Verschriftlichung - Stunden manueller Arbeit
+- **👥 Keine Team-Features:** Aufnahmen bleiben auf einem Gerät isoliert
+- **🌐 Internet-abhängig:** Funktioniert nicht im Außendienst ohne Verbindung
+- **🔍 Nicht durchsuchbar:** Wichtige Inhalte verschwinden in Audio-Archiven
+- **⚖️ DSGVO-Probleme:** Unklarer Datenschutz bei US-Anbietern
    + +## Die Memoro Business Sprachaufnahme App + +### Entwickelt für professionelle Ansprüche + +
+1. **Aufnehmen:** HD-Qualität auch offline
+2. **Verschlüsseln:** Ende-zu-Ende sicher
+3. **Transkribieren:** KI erstellt Text
+4. **Teilen:** Team & CRM Integration
    + +### Business Features im Überblick + +#### 📱 **Offline-First Design** +- Aufnahmen funktionieren ohne Internet +- Automatische Synchronisation bei Verbindung +- Keine verpassten Gespräche mehr +- Perfekt für Außendienst & Reisen + +#### 🔐 **Enterprise-Grade Security** +- Ende-zu-Ende Verschlüsselung (AES-256) +- Deutsche Server (Frankfurt) +- Zero-Knowledge Architektur +- DSGVO-konform by Design + +#### 🤖 **KI-Powered Transkription** +- 98% Genauigkeit bei deutschen Inhalten +- Sprecher-Erkennung für Meetings +- Automatische Zusammenfassungen +- Action Items & Keywords + +#### 👥 **Team-Collaboration** +- Aufnahmen mit Team teilen +- Kommentare & Notizen hinzufügen +- Berechtigungsmanagement +- Integration in Workflows + +## Business Sprachaufnahme Apps im Vergleich + + + +## Anwendungsbereiche für Business Sprachaufnahmen + +### 🚗 **Außendienst & Vertrieb** + +**Kundentermine professionell dokumentieren:** +- Beratungsgespräche vollständig erfassen +- Follow-up Actions sofort notieren +- Kundenfeedback strukturiert sammeln +- Fahrtzeiten produktiv nutzen + +*"Als Versicherungsvertreter bin ich täglich beim Kunden. 
Mit Memoro kann ich alle Gespräche diskret aufnehmen und habe später die perfekte Grundlage für Angebote."* - **Peter S., Außendienst-Leiter** + +#### Typischer Workflow: +``` +08:00 - Fahrt zum Kunden + → Tagesplanung per Sprachmemo + +09:30 - Kundentermin + → Gespräch diskret aufnehmen + +10:30 - Fahrt zum nächsten Termin + → Transkript bereits verfügbar + → Follow-ups per Sprache diktieren + +11:00 - Zurück im Büro + → CRM automatisch aktualisiert +``` + +### 💼 **Beratung & Consulting** + +**Beratungsmeetings optimal nutzen:** +- Kein Mitschreiben während wichtiger Diskussionen +- Vollständige Dokumentation für Compliance +- Präzise Grundlage für Rechnungsstellung +- Wissensmanagement für das Team + +#### Case Study: Unternehmensberatung +**Situation:** 15 Senior Consultants, 200+ Kundentermine/Monat +**Challenge:** Protokolle für Haftung & Abrechnung notwendig +**Lösung:** Business Sprachaufnahme mit automatischer Transkription + +**Vorher:** +- 30 Min Nachbereitung pro 60 Min Meeting +- Unvollständige Protokolle +- 50h/Woche für Administration + +**Mit Memoro:** +- 5 Min Review pro Meeting +- Vollständige, durchsuchbare Protokolle +- 12h/Woche für Administration + +**ROI:** 76% Zeitersparnis = 38h/Woche × 75€ = 2.850€/Woche eingespart + +### 🏗️ **Projekt & Bauleitung** + +**Baufortschritte und Mängel dokumentieren:** +- Begehungen mit Sprachmemos +- Mängellisten per Audio erstellen +- Subunternehmer-Briefings aufnehmen +- Sicherheitshinweise dokumentieren + +### 🎓 **Training & Qualitätssicherung** + +**Schulungen und Reviews optimieren:** +- Kundengespräche für Training nutzen +- Mitarbeiterfeedback strukturiert sammeln +- Best Practices als Audio-Bibliothek +- Qualitätskontrollen dokumentieren + +### 📈 **Management & Führung** + +**Executive Workflows digitalisieren:** +- Strategiediskussionen vollständig erfassen +- Board Meeting Vorbereitung +- Delegations-Briefings per Audio +- Spontane Ideen sofort festhalten + +## Einzigartige Business-Features + +### 🔐 
**Enterprise Security Standards** + +**Bank-Grade Verschlüsselung:** +- AES-256 Verschlüsselung in Echtzeit +- Zero-Knowledge Server-Architektur +- Lokale Schlüssel-Generierung +- Automatische Löschung nach X Tagen + +**Compliance & Audit:** +- ISO 27001 zertifizierte Infrastruktur +- DSGVO-Auftragsdatenverarbeitung +- Audit-Logs für alle Aktionen +- Betriebsratkonform einsetzbar + +### 📊 **Business Intelligence** + +**Insights aus Ihren Aufnahmen:** +- Häufigste Kundenanfragen identifizieren +- Gesprächsdauer und -qualität analysieren +- Keyword-Trends erkennen +- Team-Performance bewerten + +### 🔗 **Nahtlose Integration** + +**In Ihre bestehenden Tools:** + +#### CRM-Systeme +- **Salesforce:** Aufnahmen als Aktivitäten +- **HubSpot:** Automatische Kontakt-Zuordnung +- **Pipedrive:** Deals mit Audio-Notizen +- **Microsoft Dynamics:** Vollständige Integration + +#### Projektmanagement +- **Asana:** Tasks aus Action Items +- **Monday.com:** Projekte mit Audio-Updates +- **Notion:** Knowledge Base mit Transkripten + +#### Team-Kommunikation +- **Slack:** Audio-Summaries in Channels +- **Microsoft Teams:** Integration in Chats +- **Email:** Automatische Versendung + +## Mobile Apps für alle Plattformen + +### 📱 **iOS App Features** +- Native iOS Integration +- Siri Shortcuts für schnelle Aufnahmen +- Apple Watch App für diskrete Bedienung +- CarPlay Integration für Vertrieb +- Background-Aufnahme möglich + +### 🤖 **Android App Features** +- Material Design 3.0 +- Google Assistant Integration +- Android Auto Unterstützung +- Widget für Homescreen +- Tasker/Automation Support + +### 💻 **Desktop & Web** +- Native Windows/Mac Apps +- Progressive Web App (PWA) +- Browser-Extension für schnelle Aufnahmen +- Synchronisation zwischen allen Geräten + +## Preise für Business-Kunden + +
### Einzelnutzer – €19/Monat pro Nutzer

- ✅ Unlimited Aufnahmen
- ✅ Automatische Transkription
- ✅ Ende-zu-Ende Verschlüsselung
- ✅ Mobile + Desktop Apps
- ✅ Basis CRM-Integration
- ✅ 1 Jahr Speicherung

### Team – €49/Monat (bis 10 Nutzer)

- ✅ Alles aus Einzelnutzer
- ✅ Team-Sharing & Kommentare
- ✅ Erweiterte CRM-Integration
- ✅ Admin Dashboard
- ✅ Benutzer-Management
- ✅ Priority Support
- ✅ 3 Jahre Speicherung

### Enterprise – €199/Monat (Unlimited Nutzer)

- ✅ Alles aus Team
- ✅ White-Label Branding
- ✅ Custom Integrations
- ✅ Dedicated Support
- ✅ SLA-Garantien
- ✅ On-Premise Option
- ✅ Unlimited Speicherung
## ROI-Berechnung für Ihr Unternehmen

### Beispiel: 20-köpfiges Vertriebsteam

**Kosten ohne Memoro (pro Monat):**

- 20 Mitarbeiter × 25 Kundentermine
- 30 Min Nachbereitung pro Termin
- 20 × 25 × 0,5h = 250h Nachbearbeitung
- 250h × 35€ = 8.750€/Monat

**Kosten mit Memoro (pro Monat):**

- Team-Lizenz: 199€
- 5 Min Review pro Termin
- 20 × 25 × 0,08h = 40h Review
- 40h × 35€ + 199€ = 1.599€/Monat

**Ersparnis: 7.151€/Monat (82%)**

**ROI bereits ab dem ersten Monat: 3.593%**
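Die Beispielkalkulation oben lässt sich in wenigen Zeilen nachrechnen. Der folgende Python-Sketch verwendet genau die im Text genannten Annahmewerte (keine Memoro-API, reine Arithmetik zur Veranschaulichung):

```python
# Nachrechnung der ROI-Beispielkalkulation (Annahmen aus dem Text:
# 20 Mitarbeiter, 25 Kundentermine/Monat, 35 €/h interner Stundensatz,
# Lizenz 199 €/Monat, 0,5 h bzw. 0,08 h Nachbereitung pro Termin).
mitarbeiter = 20
termine_pro_monat = 25
stundensatz_eur = 35.0
lizenz_eur = 199.0

# Ohne Memoro: 30 Min (0,5 h) Nachbereitung pro Termin
stunden_ohne = mitarbeiter * termine_pro_monat * 0.5        # 250 h
kosten_ohne = stunden_ohne * stundensatz_eur                # 8750 €

# Mit Memoro: 5 Min Review pro Termin (im Text als 0,08 h gerundet)
stunden_mit = mitarbeiter * termine_pro_monat * 0.08        # 40 h
kosten_mit = stunden_mit * stundensatz_eur + lizenz_eur     # 1599 €

ersparnis = kosten_ohne - kosten_mit                        # 7151 €
ersparnis_prozent = round(ersparnis / kosten_ohne * 100)    # 82
roi_prozent = round(ersparnis / lizenz_eur * 100)           # 3593

print(f"Ersparnis: {ersparnis:.0f} €/Monat ({ersparnis_prozent} %)")
print(f"ROI im ersten Monat: {roi_prozent} %")
```

Mit eigenen Werten für Teamgröße, Terminzahl und Stundensatz ergibt sich daraus direkt die individuelle Ersparnis.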
## Erfolgsgeschichten unserer Business-Kunden

### Case Study: Immobilien-Unternehmen

**Kunde:** Immobilienmakler mit 25 Standorten
**Challenge:** 800 Besichtigungen/Monat, inkonsistente Dokumentation

**Vorher:**

- Handschriftliche Notizen bei Besichtigungen
- 50% der Interessenten-Details gingen verloren
- Nachfass-Termine wurden vergessen
- 15 Min Nachbereitung pro Besichtigung

**Mit Business Sprachaufnahme:**

- Diskrete Audio-Aufnahme bei Besichtigung
- Interessenten-Details automatisch im CRM
- Follow-up Erinnerungen automatisch
- 2 Min Review pro Besichtigung

**Ergebnisse nach 6 Monaten:**

- **87% Zeitersparnis** bei Dokumentation
- **34% höhere Abschlussrate** durch besseres Follow-up
- **2,3 Mio€ zusätzlicher Umsatz** durch weniger verlorene Leads

*"Memoro hat unser Business transformiert. Wir verlieren keinen Interessenten mehr und unsere Makler können sich voll auf die Beratung konzentrieren."* - **Julia K., Geschäftsführerin**

### Case Study: Consulting-Boutique

**Situation:** 12 Senior Berater, 150 Kundentermine/Monat
**Problem:** Dokumentation für Haftung & Abrechnung aufwendig

**Impact der Business Sprachaufnahme:**

- **Vorher:** 45 Min Nachbereitung pro Stunde Meeting
- **Nachher:** 5 Min Review + automatische Dokumentation
- **Zeitersparnis:** 89% weniger Administrationsaufwand
- **Qualitätsverbesserung:** Vollständige, rechtssichere Protokolle

**Finanzieller Impact:**

- 40h/Woche weniger Administration
- 40h × 120€ Beraterstundensatz = 4.800€/Woche
- **Jährliche Ersparnis: 249.600€**

## Häufige Fragen zur Business Sprachaufnahme

## Jetzt Business Sprachaufnahme starten
### Revolutionieren Sie Ihre Business-Dokumentation

Starten Sie noch heute und erleben Sie, wie professionelle Sprachaufnahme Ihre Produktivität und Kundenkommunikation verbessert.

Keine Kreditkarte • Sofort einsatzbereit • Für alle Team-Größen

## Apps Download

### iOS App

Für iPhone & iPad

### Android App

Für Android Geräte
    + +--- + +*Letzte Aktualisierung: Januar 2025 | Alle Preise zzgl. MwSt.* \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/team.mdx b/apps/memoro/apps/landing/src/content/pages/de/team.mdx new file mode 100644 index 000000000..6d9f5b17e --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/team.mdx @@ -0,0 +1,99 @@ +--- +title: "Team | Memoro" +description: "Lernen Sie das Team hinter Memoro kennen" +lang: "de" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Team" + subtitle: "Lernen Sie das Team hinter Memoro kennen" + categories: + kernteam: + title: "Kernteam" + description: "Die treibende Kraft hinter Memoro - unser engagiertes Team arbeitet täglich daran, Ihre Gespräche noch besser zu dokumentieren." + freelance: + title: "Freelance" + description: "Externe Experten, die mit ihrem Fachwissen und ihrer Kreativität wichtige Beiträge zu unserem Erfolg leisten." + mentoren: + title: "Mentoren" + description: "Erfahrene Berater, die uns mit wertvollen Einblicken und strategischer Führung unterstützen." + unterstuetzer: + title: "Unterstützer" + description: "Partner und Förderer, die an unsere Vision glauben und uns auf unserem Weg begleiten." + alumni: + title: "Alumni" + description: "Ehemalige Teammitglieder, die wichtige Grundsteine für Memoro gelegt haben." + faq: + title: "Häufig gestellte Fragen zum Team" + items: + - question: "Wie kann ich Teil des Teams werden?" + answer: "Wir sind immer auf der Suche nach talentierten Menschen! Aktuelle Stellenangebote finden Sie auf unserer Karriereseite oder schicken Sie uns eine Initiative Bewerbung an jobs@memoro.app" + - question: "Bietet ihr Praktika oder Werkstudentenstellen an?" + answer: "Ja, wir bieten regelmäßig Praktika und Werkstudentenstellen in verschiedenen Bereichen an. Perfekt für Studierende, die praktische Erfahrung sammeln möchten." + - question: "Arbeitet ihr remote oder vor Ort?" + answer: "Wir sind ein hybrides Team. 
Während unser Hauptbüro in Berlin ist, arbeiten einige Teammitglieder auch remote. Wir legen Wert auf flexible Arbeitsmodelle." + - question: "Was macht die Arbeit bei Memoro besonders?" + answer: "Bei uns arbeiten Sie an innovativen Lösungen für digitales Lernen. Wir bieten eine offene Unternehmenskultur, regelmäßige Team-Events und spannende technische Herausforderungen." +--- + +import TeamCard from "../../../components/TeamCard.astro"; +import FAQSection from "../../../components/FAQSection.astro"; + +export const TeamContent = ({ team, lang }) => { + // Kategorien definieren + const categories = frontmatter.sections.categories; + +// Team nach Kategorien gruppieren +const teamByCategory = Object.keys(categories).reduce((acc, category) => { +acc[category] = team.filter((member) => member.data.category === category); +return acc; +}, {}); + +return ( + +<> +
    +

    +{frontmatter.sections.hero.title} +

    +

    +{frontmatter.sections.hero.subtitle} +

    +
    + + {Object.entries(teamByCategory).map( + ([category, members]) => + members.length > 0 && ( +
    +

    + {categories[category]} +

    +
    + {members.map((member) => ( + + ))} +
    +
    + ) + )} + + + + +); +}; + +{" "} diff --git a/apps/memoro/apps/landing/src/content/pages/de/terms.mdx b/apps/memoro/apps/landing/src/content/pages/de/terms.mdx new file mode 100644 index 000000000..6756f0a26 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/terms.mdx @@ -0,0 +1,111 @@ +--- +title: "AGB | Memoro" +description: "Allgemeine Geschäftsbedingungen der Memoro GmbH" +lang: "de" +type: "legal" +lastUpdated: 2024-12-11 +sections: + hero: + title: "Allgemeine Geschäftsbedingungen" + subtitle: "Nutzungsbedingungen für die Memoro App und Website" +--- + +## Allgemeine Geschäftsbedingungen der App Memoro + +### 1. Geltungsbereich + +Die vorliegenden Allgemeinen Geschäftsbedingungen (nachfolgend "AGB") gelten für die Nutzung der App "Memoro" und die Inanspruchnahme aller über die App angebotenen Dienste. + +### 2. Registrierung und Account + +Um die Dienste von Memoro nutzen zu können, muss sich der Nutzer in der App registrieren und einen persönlichen Account erstellen. Bei der Registrierung müssen wahrheitsgemäße und aktuelle Informationen angegeben werden. + +### 3. Dienstleistungen von Memoro + +Memoro bietet Dienstleistungen im Bereich der Audiotranskription und Zusammenfassungen an. Die Nutzer können ihre Audiodateien über die App hochladen und die entsprechenden Transkriptionen und/oder Zusammenfassungen anfordern. + +### 4. Nutzerverantwortung + +Die Nutzer sind für die verantwortungsvolle Nutzung der App und der Dienste verantwortlich. Dies schließt das Hochladen von illegalen oder rechtlich geschützten Inhalten ohne die erforderliche Berechtigung ein. Jeder Nutzer trägt die volle Verantwortung für die Inhalte, die er in der App hochlädt. + +### 5. Urheberrecht + +Die Urheberrechte an den hochgeladenen Inhalten verbleiben beim Nutzer. Memoro beansprucht kein Eigentum an den übermittelten Inhalten, benötigt jedoch das Recht, diese Inhalte für die Erbringung der Dienstleistungen zu verwenden. + +### 6. 
Haftungsausschluss + +Memoro haftet nicht für Fehler oder Ungenauigkeiten in den Transkriptionen oder Zusammenfassungen. Die Dienstleistungen werden "wie sie sind" und ohne jegliche Garantien angeboten. + +### 7. Datenschutz + +Memoro verpflichtet sich, die Privatsphäre der Nutzer zu schützen. Details zum Datenschutz sind in unserer Datenschutzerklärung beschrieben. + +### 8. Preise und Zahlungsbedingungen + +Die Preise für die Dienstleistungen von Memoro sind in der App angegeben. Alle Zahlungen sind vor Inanspruchnahme der Dienstleistungen fällig. Zahlungen erfolgen über die in der App angebotenen Zahlungsmethoden. + +### 9. Kündigung + +Memoro behält sich das Recht vor, Accounts bei Verstößen gegen diese AGB oder bei Missbrauch der Dienste zu sperren oder zu löschen. Nutzer können ihren Account jederzeit über die App löschen. + +### 10. Änderungen der AGB + +Memoro kann diese AGB von Zeit zu Zeit ändern. Die Nutzer werden über wesentliche Änderungen in Kenntnis gesetzt. Die fortgesetzte Nutzung der App nach solchen Änderungen gilt als Zustimmung zu den überarbeiteten Bedingungen. + +### 11. Anwendbares Recht und Gerichtsstand + +Diese AGB unterliegen dem Deutschen Recht. Gerichtsstand ist Freiburg, Baden-Württemberg. + +--- + +## Allgemeine Geschäftsbedingungen der Memoro Website + +### 1. Geltungsbereich + +Diese Allgemeinen Geschäftsbedingungen („AGB") gelten für den Geschäftsbereich der Memoro GmbH, Münzgasse 19, 78462 Konstanz (nachfolgend „Firma"). Die Firma besitzt und betreibt die Plattform www.memoro.ai und erbringt darauf entgeltliche und unentgeltliche Dienstleistungen im Zusammenhang mit der Gründung von Firmen, dem Erstellen von Verträgen, der Durchführung von Handelsregisteränderungen sowie der Durchführung von Kursen. Zudem bietet die Firma Beratungsdienstleistungen an und erteilt Lizenzrechte. Des Weiteren verkauft die Firma Produkte im obengenannten Bereich. 
Diese AGB gelten für die obengenannten Bereiche sowie die weiteren Dienstleistungen, welche die Firma direkt und indirekt gegenüber dem Kunden erbringt. + +### 2. Vertragsabschluss + +Der Vertragsabschluss kommt durch die Akzeptanz der Offerte der Firma, betreffend den Bezug von Dienstleistungen, Produkten oder Lizenzen durch den Kunden zustande. Der Vertrag kommt des Weiteren zustande, wenn der Kunde die von der Firma angebotenen Dienstleistungen in Anspruch nimmt oder Produkte der Firma bezieht oder benutzt (Lizenz). + +### 3. Preise + +Vorbehaltlich anderweitiger Offerten verstehen sich alle Preise in Schweizer Franken (CHF). Alle Preise verstehen sich exklusive allfällig anwendbarer Mehrwertsteuer (MWST.). Die Preise verstehen sich exklusive weiterer allfällig anwendbarer Steuern. Die Firma behält sich vor, die Preise jederzeit zu ändern. Es gelten die zum Zeitpunkt des Vertragsabschlusses gültigen Preise auf der Website www.memoro.ai oder gemäß der separaten Preisliste der Firma. Für den Kunden gelten die zum Zeitpunkt des Vertragsabschlusses gültigen Preise. + +### 4. Bezahlung + +Der Kunde ist verpflichtet, den in Rechnung gestellten Betrag innert 30 Tagen ab Rechnungsdatum zu bezahlen. Es sei denn, er habe den Betrag bereits beim Bestellvorgang via Kreditkarte, Paypal oder anderen Zahlungssystemen beglichen. Wird die Rechnung nicht binnen vorgenannter Zahlungsfrist beglichen, wird der Kunde abgemahnt. Begleicht der Kunde die Rechnung nicht binnen der angesetzten Mahnfrist fällt er automatisch in Verzug. Ab Zeitpunkt des Verzuges schuldet der Kunde Verzugszinsen in der Höhe von 5%. Die Firma behält sich vor, jederzeit ohne Angabe von Gründen Vorauskasse zu verlangen. Verrechnung des in Rechnung gestellten Betrages mit einer allfälligen Forderung des Kunden gegen die Firma ist nicht zulässig. Der Firma steht das Recht zu bei Zahlungsverzug die Dienstleistungserbringung, die Lieferung des Produkts oder die Gewährung der Lizenz zu verweigern. + +### 5. 
Pflichten der Firma + +#### 5.1. Dienstleistungserbringung + +Vorbehaltlich anderslautender Vereinbarung, erfüllt die Firma ihre Verpflichtung durch Erbringung der vereinbarten Dienstleistung. Die Dienstleistung beinhaltet die Leistungen, welche zum Zeitpunkt des Vertragsschlusses online publiziert sind oder waren. Ein Großteil der Dienstleistungen der Firma werden online erbracht. Für alle weiteren Dienstleistungen gilt der Sitz der Firma als Erfüllungsort, es sei denn es werden anderweitige Bestimmungen getroffen. + +#### 5.2. Hilfspersonen + +Die Parteien haben das ausdrückliche Recht, zur Erledigung ihrer vertragsgemäßen Pflichten Hilfspersonen beizuziehen. Sie haben sicherzustellen, dass der Beizug der Hilfsperson unter Einhaltung aller zwingenden gesetzlichen Bestimmungen und allfälliger Gesamtarbeitsverträge erfolgt. + +### 6. Lizenz + +#### 6.1. Nutzung + +Die Firma gewährt dem Kunden das Recht, die Dokumente der entsprechenden Vertragsboxen zu nutzen. Diese Nutzungsrechte sind nicht-exklusive, unübertragbar und auf die Nutzung durch den Kunden beschränkt. Die einzelnen Dokumente dienen als Vorlagen und dürfen vom Kunden lediglich als Vorlagen und für eigene Zwecke genutzt werden. Jegliche Weitergabe an Dritte sowie anderweitige Nutzung, kommerzieller oder anderer Natur ist untersagt. + +#### 6.2. Formatierung + +Sind die Dokumente in einem Format erstellt, welche die Nutzungsrechte des Kunden beschränken so entspricht dies dem Willen der Firma und eine Umformatierung ist nicht zulässig. + +#### 6.3. Befristung + +Der Inhalt steht dem Kunden für die vereinbarte Dauer zur Verfügung. Nach Ablauf dieser Frist hat der Kunde keinen Anspruch mehr auf den Inhalt der Vertragsboxen. + +### 7. Pflichten des Kunden + +#### 7.1. Ausübung der Nutzungsrechte + +Der Kunde ist verpflichtet, die Nutzungsrechte lediglich im gewährten Umfang auszuüben. Der Kunde ist für die sichere Aufbewahrung seiner Zugangsdaten und Passwörter vollumfänglich verantwortlich. 
Für den Inhalt der erfassten Daten und Informationen ist der Kunde selbst verantwortlich. Der Kunde ist verpflichtet sämtliche Vorkehrungen welche zur Erbringung der Dienstleistung durch die Firma erforderlich sind umgehend vorzunehmen. Der Kunde hat die Vorkehrungen am vereinbarten Ort zur vereinbarten Zeit und im vereinbarten Maß vorzunehmen. Je nach Umständen gehört dazu das Erbringen geeigneter Informationen und Unterlagen an die Firma. Der Kunde bestätigt mit dem Akzeptieren der vorliegenden AGB zudem, dass er über eine unbeschränkte Handlungsfähigkeit verfügt und volljährig ist. Der Kunde erklärt mit der Registrierung ausdrücklich, dass sämtliche gemachten Angaben der Wahrheit entsprechen, aktuell sind und mit den Rechten Dritter, den guten Sitten und dem Gesetz in Übereinstimmung sind. + +#### 7.2. Mitwirkungspflichten + +Der Kunde ist verpflichtet sämtliche Vorkehrungen welche zur Erbringung der Dienstleistung durch die Firma erforderlich sind umgehend vorzunehmen. Der Kunde hat die Vorkehrungen am vereinbarten Ort zur vereinbarten Zeit und im vereinbarten Maß vorzunehmen. Je nach Umständen gehört dazu das Erbringen geeigneter Informationen und Unterlagen für die Firma. Des Weiteren ist der Kunde zur umfassenden und prompten Mitwirkung verpflichtet. \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/testimonials.mdx b/apps/memoro/apps/landing/src/content/pages/de/testimonials.mdx new file mode 100644 index 000000000..79c03cf92 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/testimonials.mdx @@ -0,0 +1,52 @@ +--- +title: "Referenzen | Memoro" +description: "Erfahren Sie, was unsere Kunden über Memoro sagen" +lang: "de" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Was unsere Kunden sagen" + subtitle: "Erfahren Sie, wie Memoro Teams und Unternehmen dabei hilft, effizienter zu lernen und zu arbeiten" + callToAction: + title: "Bereit für bessere Meeting-Dokumentation?" 
+ description: "Entdecke wie Memoro deine Meetings effizienter und produktiver macht." + buttonText: "App herunterladen" + buttonLink: "/de/download" +--- + +import TestimonialCard from "../../../components/TestimonialCard.astro"; +import CallToAction from "../../../components/CallToAction.astro"; + +export const TestimonialsContent = ({ testimonials, lang }) => ( + <> +

    + {frontmatter.sections.hero.title} +

    + +
    + {testimonials.map((testimonial) => ( + + ))} +
    + +
    + +
    + + +); + +{" "} diff --git a/apps/memoro/apps/landing/src/content/pages/de/vergleich.mdx b/apps/memoro/apps/landing/src/content/pages/de/vergleich.mdx new file mode 100644 index 000000000..6e5a6860c --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/vergleich.mdx @@ -0,0 +1,650 @@ +--- +title: "Meeting-Software Vergleich 2025 - Die besten Alternativen im Test | Memoro" +description: "Umfassender Vergleich aller Meeting-Protokoll Software ► Otter.ai vs Fireflies vs Memoro ✓ DSGVO-Check ✓ Preise ✓ Features ✓ Finden Sie die beste Lösung!" +keywords: ["meeting software vergleich", "otter.ai alternative", "fireflies alternative", "meeting protokoll software test", "transkriptionssoftware vergleich", "dsgvo konforme meeting tools", "beste meeting software deutschland", "ki meeting assistant vergleich"] +lang: de +type: comparison-hub +lastUpdated: 2025-01-09 +sections: + hero: + title: "Die beste Meeting-Software für Deutschland - Großer Vergleich 2025" + subtitle: "Finden Sie die perfekte Meeting-Software für Ihr Unternehmen" + cta: "Jetzt vergleichen" + features: + title: "Top 10 Meeting-Software Anbieter im Überblick" + items: ["Memoro", "Otter.ai", "Fireflies.ai", "MeetGeek", "Sembly", "tl;dv", "Fathom", "Rev", "Notta", "Gong.io"] +ogImage: "/images/og/vergleich.png" +canonical: "https://memoro.ai/de/vergleich" +robots: "index, follow" +priority: 0.95 +changefreq: "weekly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; + +# Die beste Meeting-Software für Deutschland - Großer Vergleich 2025 + +
**Finden Sie die perfekte Meeting-Software für Ihr Unternehmen**

Vergleichen Sie die Top 10 Meeting-Protokoll Tools nach Funktionen, Preis, Datenschutz und mehr. Speziell für deutsche Unternehmen mit DSGVO-Fokus.
## 🏆 Die Top 10 Meeting-Software Anbieter im Überblick

### 🥇 Memoro – ⭐⭐⭐⭐⭐ (4.9/5) – Testsieger DSGVO

ab €12.99 • 600 Min kostenlos

**✅ Vorteile:**

- 100% DSGVO-konform
- Deutsche Server
- 98% Genauigkeit
- 50+ Sprachen

**⚠️ Nachteile:**

- Noch nicht so bekannt

**Ideal für:** Deutsche Unternehmen mit Datenschutz-Fokus

### Otter.ai – ⭐⭐⭐⭐ (4.2/5)

ab $16.99 • 300 Min kostenlos

**Ideal für:** Englischsprachige Teams

### Fireflies.ai – ⭐⭐⭐⭐ (4.1/5)

ab $18 • Limitiert kostenlos • ⚠️ DSGVO-Risiko

### Sembly.ai – ⭐⭐⭐⭐ (4.3/5)

ab $10 • Begrenzt kostenlos

**Ideal für:** Große Teams

### MeetGeek – ⭐⭐⭐⭐ (4.0/5)

ab $15 • 500 Min kostenlos

**Ideal für:** Sales-Teams

### Fathom – ⭐⭐⭐⭐ (4.4/5)

ab $12 • Unlimited*

**Ideal für:** Kleine Teams

### Gong.io – ⭐⭐⭐⭐⭐ (4.6/5)

ab $5000/Jahr

**Ideal für:** Enterprise
## 🎯 Detaillierter Feature-Vergleich

| Feature | Memoro | Otter.ai | Fireflies.ai | Andere |
| --- | --- | --- | --- | --- |
| Server-Standort | 🇩🇪 Deutschland | 🇺🇸 USA | 🇺🇸 USA | 🇺🇸 Meist USA |
| DSGVO-konform | ✅ 100% | ❌ Risiko | ⚠️ Fraglich | ⚠️ Teilweise |
| Deutsche Sprache | ✅ Perfekt | ❌ Nein | ⚠️ Basic | ⚠️ Unterschiedlich |
| Genauigkeit | 98%+ | 90% | 95% | 90-95% |
| Kostenlose Minuten | 600/Monat | 300/Monat | Limitiert | 100-500 |
| Deutscher Support | ✅ 24/7 | ❌ Nein | ❌ Nein | ❌ Meist nicht |
| Offline-Modus | ✅ Ja | ❌ Nein | ❌ Nein | ❌ Selten |
| Preis (Pro Plan) | €12.99 | $16.99 | $18 | $10-20 |
## 🎯 Welche Software passt zu Ihnen?

### Für deutsche Unternehmen

Sie brauchen 100% DSGVO-Konformität, deutsche Server und Support?

**👉 Empfehlung: Memoro**

- ✓ Deutsche Server & Rechtssicherheit
- ✓ Perfekte deutsche Spracherkennung
- ✓ Betriebsrat-konform

### Für internationale Teams

Sie arbeiten mit vielen Sprachen und globalen Teams?

**👉 Alternativen:**

- Notta (58 Sprachen, günstig)
- Fireflies (wenn DSGVO egal)
- Memoro (50+ Sprachen + DSGVO)

### Für kleine Budgets

Sie suchen eine günstige oder kostenlose Lösung?

**👉 Budget-Optionen:**

- Memoro (600 Min kostenlos)
- Fathom (Unlimited* mit Limits)
- Sembly ($10 Einstieg)

### Für Enterprise

Sie brauchen Enterprise-Features und Support?

**👉 Enterprise-Lösungen:**

- Gong.io (Revenue Intelligence)
- Memoro Enterprise (On-Premise)
- Sembly (HIPAA, SOC2)
## ⚖️ DSGVO-Risiko-Bewertung

### Vorsicht: Rechtliche Risiken bei US-Anbietern

**🚫 Hohes Risiko:**

- **Otter.ai** - US-Server, kein Deutsch
- **Fireflies.ai** - US-Datenverarbeitung
- **Gong.io** - Nur US-Markt

**⚠️ Mittleres Risiko:**

- **MeetGeek** - Teilweise EU-Server
- **Sembly** - SOC2, aber US
- **tl;dv** - EU-Server optional

**✅ Kein Risiko:**

- **Memoro** - 100% DSGVO, deutsche Server
- Einziger Anbieter mit vollständiger deutscher Datenverarbeitung

**⚠️ Warnung:** Bei DSGVO-Verstößen drohen Bußgelder bis zu 20 Mio. € oder 4% des Jahresumsatzes. Prüfen Sie genau, wo Ihre Daten verarbeitet werden!
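Zur Einordnung der oben genannten Warnung: Nach DSGVO Art. 83 Abs. 5 gilt bei der Bußgeld-Obergrenze der jeweils höhere der beiden Beträge (20 Mio. € oder 4 % des weltweiten Jahresumsatzes). Eine minimale Beispielrechnung in Python; die Umsatzwerte sind frei gewählte, hypothetische Beispiele:

```python
# Bußgeld-Obergrenze nach DSGVO Art. 83 Abs. 5: 20 Mio. € oder 4 % des
# weltweiten Jahresumsatzes; es gilt der jeweils höhere Betrag.
def max_dsgvo_bussgeld_eur(jahresumsatz_eur: float) -> float:
    """Maximale Bußgeld-Obergrenze in Euro für einen gegebenen Jahresumsatz."""
    return max(20_000_000.0, 0.04 * jahresumsatz_eur)

# Hypothetische Beispiele: bei 100 Mio. € Umsatz greift die 20-Mio.-Grenze,
# bei 1 Mrd. € Umsatz die 4-%-Grenze (40 Mio. €).
print(max_dsgvo_bussgeld_eur(100_000_000))
print(max_dsgvo_bussgeld_eur(1_000_000_000))
```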
## 📈 Preis-Leistungs-Vergleich

### Was bekommen Sie für Ihr Geld?

| Software | Preis/Monat | Kostenlos | Preis-Leistung | Versteckte Kosten |
| --- | --- | --- | --- | --- |
| 🥇 Memoro | €12.99 | 600 Min | ⭐⭐⭐⭐⭐ | Keine |
| Otter.ai | $16.99 | 300 Min | ⭐⭐⭐ | Währung, Limits |
| Fireflies.ai | $18 | Sehr limitiert | ⭐⭐ | Storage, Credits |
| Sembly.ai | $10 | Begrenzt | ⭐⭐⭐⭐ | Add-ons |
| Gong.io | $400+ | - | ⭐ | Setup, Training |
**💡 Tipp:** Achten Sie auf versteckte Kosten wie Währungsgebühren, Storage-Limits, Credit-Systeme und Setup-Gebühren. Memoro hat transparente EUR-Preise ohne Überraschungen.

## 🚀 Migration von anderen Anbietern

## ❓ Häufige Fragen zum Software-Vergleich
### Welche Software ist wirklich DSGVO-konform?

Nur Memoro bietet 100% DSGVO-Konformität mit deutschen Servern und vollständiger Datenverarbeitung in Deutschland. Alle US-Anbieter (Otter, Fireflies, etc.) haben rechtliche Risiken durch den CLOUD Act.

### Was ist die beste kostenlose Option?

Memoro bietet mit 600 Minuten/Monat das großzügigste kostenlose Kontingent. Fathom und tl;dv werben mit "unlimited", haben aber versteckte Einschränkungen.

### Welche Software hat die beste deutsche Spracherkennung?

Memoro wurde speziell für die deutsche Sprache optimiert und erreicht 98%+ Genauigkeit. Otter.ai unterstützt gar kein Deutsch, andere Anbieter nur mit mäßiger Qualität.

### Kann ich meine Daten von anderen Anbietern migrieren?

Ja! Memoro bietet kostenlose Migration von allen gängigen Anbietern. Export bei altem Anbieter → Import bei Memoro → Fertig in 30 Minuten.

### Warum sind US-Anbieter problematisch?

Nach dem Schrems-II-Urteil sind Datenübertragungen in die USA ohne zusätzliche Schutzmaßnahmen nicht DSGVO-konform. US-Behörden können theoretisch auf Ihre Daten zugreifen (CLOUD Act).

### Welche Software eignet sich für Unternehmen?

Für deutsche Unternehmen: Memoro (DSGVO, Betriebsrat-konform). Für internationale Konzerne: Gong.io (sehr teuer) oder Sembly (gutes Preis-Leistungs-Verhältnis).

### Was bedeuten die Preisangaben?

Die Preise sind Startpreise pro Nutzer/Monat. Viele Anbieter haben versteckte Kosten (Storage-Limits, Credits, Währungsgebühren). Memoro hat transparente EUR-Preise ohne Überraschungen.

### Welche Integrationen sind wichtig?

Essentiell: Teams, Zoom, Google Meet. Nice-to-have: Slack, CRM-Systeme, Kalender. Alle Top-Anbieter unterstützen die wichtigsten Meeting-Plattformen.
## 🎯 Unser Fazit: Die beste Meeting-Software 2025

### Testsieger für deutsche Unternehmen: Memoro

- **98%+** Genauigkeit Deutsch
- **100%** DSGVO-konform
- **50+** Sprachen

**Warum Memoro gewinnt:**

- Einziger Anbieter mit 100% deutscher Datenverarbeitung
- Beste Spracherkennung für Deutsch und Dialekte
- Großzügigstes kostenloses Kontingent (600 Min)
- Fairer Preis ohne versteckte Kosten in EUR
- Deutscher Support und Betriebsrat-konform
## 📚 Weitere Ressourcen

### Detaillierte Einzelvergleiche

### Kostenrechner

Berechnen Sie Ihre Ersparnis beim Wechsel zu Memoro

### DSGVO-Guide

Alles über Datenschutz bei Meeting-Software
    \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/de/vorstandsitzungen-protokoll-software.mdx b/apps/memoro/apps/landing/src/content/pages/de/vorstandsitzungen-protokoll-software.mdx new file mode 100644 index 000000000..563694405 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/de/vorstandsitzungen-protokoll-software.mdx @@ -0,0 +1,443 @@ +--- +title: "Vorstandsprotokoll Software | Rechtssicher & DSGVO-konform | Memoro" +description: "✓ Rechtssichere Vorstandsprotokolle automatisch erstellen ✓ DSGVO-konform ✓ Deutsche Server ✓ § 201 StGB konform ✓ Sprechererkennung ► Jetzt kostenlos testen!" +lang: "de" +type: "landing" +lastUpdated: 2025-01-30 +sections: + hero: + title: "Rechtssichere Vorstandsprotokolle – automatisch, vertraulich, DSGVO-konform" + subtitle: "Memoro dokumentiert Ihre Vorstandssitzungen mit höchster Vertraulichkeit. KI-gestützte Protokollerstellung auf deutschen Servern – rechtssicher nach § 201 StGB und DSGVO." + image: "/images/industries/Office-Businessman-Recording-Memoro-AI-App-Transcription.png" + imageAlt: "Memoro Vorstandsprotokoll Software - Automatische DSGVO-konforme Protokollerstellung" + cta: + primary: + text: "Kostenlos testen" + link: "/de/download" + secondary: + text: "Demo für Vorstände buchen" + link: "/de/contact" + trustBadges: + - icon: "🇩🇪" + text: "Deutsche Server" + - icon: "🔒" + text: "ISO 27001" + - icon: "✅" + text: "DSGVO-konform" + - icon: "⚖️" + text: "§ 201 StGB konform" + problems: + title: "Die Herausforderungen der Vorstandsprotokollierung" + items: + - icon: "⏱️" + title: "Zeitaufwand: 2-4 Stunden pro Sitzung" + description: "Manuelle Protokollierung bindet wertvolle Ressourcen, die produktiver genutzt werden könnten." + - icon: "🔐" + title: "Vertraulichkeit gefährdet" + description: "Externe Transkriptionsdienste mit US-Servern stellen ein Sicherheitsrisiko für sensible Vorstandsinformationen dar." 
+ - icon: "⚖️" + title: "Rechtliche Haftungsrisiken" + description: "Fehlende oder unvollständige Protokolle können rechtliche Konsequenzen haben und die Nachweispflicht verletzen." + - icon: "📝" + title: "Verlust wichtiger Details" + description: "Manuelle Mitschriften verpassen kritische Beschlüsse, Abstimmungen und Diskussionspunkte." + solution: + title: "Memoro: Ihre Lösung für rechtssichere Vorstandsprotokolle" + subtitle: "Dokumentieren Sie Vorstandssitzungen vollständig und rechtssicher – automatisch, vertraulich, auf deutschen Servern" + steps: + - number: "1" + title: "Rechtssicher aufzeichnen" + description: "§ 201 StGB Einwilligungsmanagement integriert. Alle Teilnehmer werden automatisch informiert und dokumentiert." + icon: "📝" + - number: "2" + title: "KI analysiert vertraulich" + description: "Automatische Sprechererkennung, Beschlusserkennung und Strukturierung – alles auf deutschen Servern mit Zero-Knowledge-Verschlüsselung." + icon: "🔒" + - number: "3" + title: "Protokoll rechtssicher archivieren" + description: "Revisionssicheres Protokoll mit Audit-Trail, Versionierung und granularen Zugriffsrechten." 
+ icon: "✅" + usps: + title: "Für Vorstandssitzungen entwickelt" + items: + - icon: "🔒" + title: "Maximale Vertraulichkeit" + description: "Zero-Knowledge-Verschlüsselung, deutsche Server, kein Zugriff Dritter auf Ihre Protokolle" + - icon: "⚖️" + title: "Rechtssicherheit garantiert" + description: "DSGVO-konform, § 201 StGB Einwilligungsmanagement integriert, lückenlose Audit-Trails, revisionssichere Archivierung" + - icon: "🎯" + title: "Präzise Vorstandsprotokolle" + description: "Automatische Sprechererkennung, Beschlusserkennung & Abstimmungsdokumentation, Action Item Tracking, Anwesenheitslisten" + - icon: "⚡" + title: "Zeit & Ressourcen sparen" + description: "90% Zeitersparnis bei Protokollerstellung, Protokoll direkt nach Meeting verfügbar, Integration mit Board Management Tools" + features: + title: "Funktionen für professionelle Vorstandsarbeit" + items: + - icon: "🌍" + title: "Mehrsprachige Transkription" + description: "Dokumentation internationaler Board Meetings in 80+ Sprachen" + - icon: "👥" + title: "20+ Sprecher Erkennung" + description: "Klare Zuordnung: wer hat was gesagt und beschlossen" + - icon: "📋" + title: "Custom Templates" + description: "Protokollvorlagen nach Corporate Guidelines" + - icon: "✅" + title: "Beschlusserkennung" + description: "KI identifiziert automatisch Beschlüsse und TO-DOs" + - icon: "🔐" + title: "Granulare Zugriffsrechte" + description: "Nur berechtigte Personen sehen vertrauliche Inhalte" + - icon: "📑" + title: "Versionierung" + description: "Alle Änderungen nachvollziehbar (Compliance)" + - icon: "📤" + title: "Export-Optionen" + description: "Word, PDF, Board Portal Integration" + - icon: "📱" + title: "Offline-Modus" + description: "Auch in geschlossenen Räumen nutzbar" + comparison: + title: "Memoro vs. traditionelle Lösungen vs. 
US-Anbieter" + subtitle: "Warum Memoro die beste Wahl für Vorstandssitzungen ist" + competitors: + - name: "Memoro" + features: + - "✅ Deutsche Server" + - "✅ DSGVO 100%" + - "✅ § 201 StGB" + - "✅ Zero-Knowledge" + - "✅ 20+ Sprecher" + - "✅ Deutscher Support" + - "⚡ 10 Min" + highlight: true + - name: "Manuelle Protokollierung" + features: + - "— Intern" + - "✅ Ja" + - "⚠️ Manuell" + - "✅ Intern" + - "❌ Nein" + - "— N/A" + - "⏱️ 2-4h" + - name: "US-Tools (Fathom, Fireflies)" + features: + - "❌ US-Server" + - "⚠️ Eingeschränkt" + - "❌ Nicht abgedeckt" + - "❌ US-Zugriff" + - "✅ Ja" + - "❌ Englisch" + - "⚡ 15 Min" + useCases: + title: "Memoro im Einsatz" + cases: + - title: "AG Vorstandssitzung" + description: "Börsennotierte Aktiengesellschaft: 7 Vorstände, 3h Sitzung, hochsensible Strategiethemen. Protokoll binnen 24h rechtssicher." + result: "Protokoll in 15 Min finalisiert, alle Beschlüsse dokumentiert, sichere Verteilung" + icon: "🏢" + - title: "Stiftungsvorstand" + description: "Gemeinnützige Stiftung: Vorstandssitzung mit 5 Mitgliedern, Compliance-Anforderungen für Prüfung." + result: "Revisionssicheres Protokoll, lückenlose Dokumentation für Wirtschaftsprüfer" + icon: "🏛️" + - title: "GmbH Gesellschafterversammlung" + description: "Mittelständische GmbH: Gesellschafterversammlung mit Beschlussfassung über Investitionen." + result: "Rechtssichere Abstimmungsdokumentation, Action Items automatisch zugewiesen" + icon: "🤝" + testimonials: + title: "Was Vorstände über Memoro sagen" + items: + - quote: "Als Vorstand trage ich hohe Verantwortung für die Protokollierung. Memoro gibt mir die Sicherheit, dass jeder Beschluss lückenlos dokumentiert ist – rechtssicher und vertraulich." + author: "Dr. M. Schmidt" + role: "Vorstand Mittelständische AG" + rating: 5 + - quote: "DSGVO-Konformität ist für uns nicht verhandelbar. Memoro auf deutschen Servern war die einzige Option, die unsere strengen Compliance-Anforderungen erfüllt." 
+ author: "Corporate Secretary" + role: "DAX-Konzern" + rating: 5 + - quote: "Die automatische Beschlusserkennung spart uns enorm viel Zeit. Was früher 3 Stunden Nacharbeit kostete, ist jetzt in 10 Minuten erledigt." + author: "Julia R." + role: "Stiftungsvorstand" + rating: 5 + compliance: + title: "Höchste Sicherheitsstandards für Vorstandskommunikation" + technical: + title: "🔐 Technische Sicherheit" + items: + - "Ende-zu-Ende-Verschlüsselung (AES-256)" + - "ISO 27001 zertifizierte Rechenzentren" + - "Regelmäßige Penetration Tests" + - "Zero-Knowledge-Architektur" + legal: + title: "⚖️ Rechtliche Compliance" + items: + - "DSGVO Art. 32 konforme Verarbeitung" + - "§ 201 StGB Einwilligungsmanagement" + - "Auftragsverarbeitungsvertrag (AVV) inklusive" + - "Datenschutz-Folgenabschätzung verfügbar" + sovereignty: + title: "🇩🇪 Datensouveränität" + items: + - "Server-Standort: Frankfurt/München" + - "Kein Datentransfer außerhalb EU" + - "Deutsche Rechtsordnung" + - "SCHREMS II konform" + roi: + title: "Berechnen Sie Ihre Zeitersparnis" + subtitle: "Ermitteln Sie, wie viel Zeit und Geld Sie mit automatischen Vorstandsprotokollen sparen" + faq: + title: "Häufige Fragen von Vorständen" + items: + - question: "Ist Memoro rechtssicher für Vorstandssitzungen nach § 201 StGB?" + answer: "Ja. Memoro integriert Einwilligungsmanagement gemäß § 201 StGB. Vor jeder Aufnahme werden alle Teilnehmer informiert und müssen zustimmen – dokumentiert und nachweisbar. Dies erfüllt die gesetzlichen Anforderungen zum Schutz des gesprochenen Wortes." + - question: "Wo werden unsere vertraulichen Vorstandsprotokolle gespeichert?" + answer: "Ausschließlich auf ISO 27001-zertifizierten Servern in Deutschland (Frankfurt/München). Kein Zugriff durch Dritte, auch nicht durch Memoro-Mitarbeiter dank Zero-Knowledge-Architektur. Ihre Daten verlassen niemals die EU." + - question: "Wie funktioniert die Sprechererkennung bei wechselnden Teilnehmern?" 
+ answer: "Memoro erkennt bis zu 20+ verschiedene Sprecher automatisch. Sie können Profile für wiederkehrende Vorstandsmitglieder anlegen, die das System lernt. Bei neuen Gästen erfolgt die Zuordnung nachträglich in nur 2 Minuten." + - question: "Ist eine Integration mit unserem Board Portal (z.B. Diligent, BoardEffect) möglich?" + answer: "Ja, Memoro bietet API-Integrationen und Export-Formate für gängige Board Management Systeme wie Diligent, BoardEffect und andere. Die Protokolle können direkt in Ihre bestehende Infrastruktur übertragen werden." + - question: "Was passiert, wenn die Internetverbindung während der Sitzung ausfällt?" + answer: "Memoro funktioniert vollständig offline. Die Aufnahme läuft unterbrechungsfrei weiter. Die Synchronisation und KI-Verarbeitung erfolgen automatisch bei der nächsten Internetverbindung." + - question: "Können wir eigene Protokollvorlagen für unsere Corporate Guidelines nutzen?" + answer: "Ja, Sie können beliebig viele eigene Protokollvorlagen hinterlegen, inklusive Ihres Brandings, der gewünschten Struktur und rechtlicher Klauseln. Unsere Templates sind vollständig anpassbar." + - question: "Wie lange werden Vorstandsprotokolle archiviert?" + answer: "Sie bestimmen die Aufbewahrungsfrist gemäß Ihrer individuellen Compliance-Anforderungen. Memoro bietet revisionssichere Langzeitarchivierung mit vollständiger Audit-Trail-Dokumentation für 10+ Jahre." + - question: "Wer hat Zugriff auf die Vorstandsprotokolle?" + answer: "Sie definieren granular, wer welche Protokolle sehen darf. Zugriffsrechte können pro Protokoll, Teilnehmer-Rolle oder Gremium vergeben werden. Alle Zugriffe werden protokolliert (Audit-Trail)." + - question: "Wie schnell ist das Protokoll nach einer Vorstandssitzung verfügbar?" + answer: "Bei einer typischen 2-3 stündigen Vorstandssitzung ist das vollständige Protokoll 5-10 Minuten nach Ende der Aufnahme verfügbar. Sie können dann sofort mit der Nachbearbeitung und Freigabe beginnen." 
+ cta: + title: "Überzeugen Sie sich selbst – kostenlos und unverbindlich" + subtitle: "Testen Sie Memoro mit Ihrer nächsten Vorstandssitzung – DSGVO-konform und vertraulich" + button: + text: "30 Tage kostenlos testen" + link: "/de/download" + secondaryButton: + text: "Persönliche Demo buchen" + link: "/de/contact" + features: + - "✓ Keine Kreditkarte erforderlich" + - "✓ Voller Funktionsumfang" + - "✓ DSGVO-konform" + - "✓ Deutscher Enterprise-Support" +--- + +import HeroSection from "../../../components/HeroSection.astro"; +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import ROICalculator from "../../../components/ROICalculator.astro"; + + + +

    {frontmatter.sections.problems.title}

    + +
    + {frontmatter.sections.problems.items.map((item) => ( +
    +
    {item.icon}
    +

    {item.title}

    +

    {item.description}

    +
    + ))} +
    + +

    {frontmatter.sections.solution.title}

    + +

    {frontmatter.sections.solution.subtitle}

    + +
    + {frontmatter.sections.solution.steps.map((step) => ( +
    +
    +
    + {step.number} +
    +
    {step.icon}
    +
    +

    {step.title}

    +

    {step.description}

    +
    + ))} +
    + +

    {frontmatter.sections.usps.title}

    + +
    + {frontmatter.sections.usps.items.map((usp) => ( +
    +
    {usp.icon}
    +

    {usp.title}

    +

    {usp.description}

    +
    + ))} +
    + +

    {frontmatter.sections.features.title}

    + +
    + {frontmatter.sections.features.items.map((feature) => ( +
    +
    {feature.icon}
    +

    {feature.title}

    +

    {feature.description}

    +
    + ))} +
    + +

    {frontmatter.sections.comparison.title}

    + +

    {frontmatter.sections.comparison.subtitle}

    + +
    + + + + + {frontmatter.sections.comparison.competitors.map((comp) => ( + + ))} + + + + {["Server-Standort", "DSGVO-Konformität", "§ 201 StGB", "Vertraulichkeit", "Sprechererkennung", "Support", "Zeitaufwand"].map((criterion, index) => ( + + + {frontmatter.sections.comparison.competitors.map((comp) => ( + + ))} + + ))} + +
    Kriterium + {comp.name} +
    {criterion} + {comp.features[index]} +
    +
    + +

    {frontmatter.sections.useCases.title}

    + +
    + {frontmatter.sections.useCases.cases.map((useCase) => ( +
    +
    {useCase.icon}
    +

    {useCase.title}

    +

    {useCase.description}

    +
    +

    Ergebnis:

    +

    {useCase.result}

    +
    +
    + ))} +
    + +

    {frontmatter.sections.testimonials.title}

    + +
    + {frontmatter.sections.testimonials.items.map((testimonial) => ( +
    +
    + {[...Array(testimonial.rating)].map(() => ( + + ))} +
    +

    "{testimonial.quote}"

    +
    +

    {testimonial.author}

    +

    {testimonial.role}

    +
    +
    + ))} +
    + +

    {frontmatter.sections.compliance.title}

    + +
    +
    +

    {frontmatter.sections.compliance.technical.title}

    +
      + {frontmatter.sections.compliance.technical.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    + +
    +

    {frontmatter.sections.compliance.legal.title}

    +
      + {frontmatter.sections.compliance.legal.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    + +
    +

    {frontmatter.sections.compliance.sovereignty.title}

    +
      + {frontmatter.sections.compliance.sovereignty.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    +
    + + + + + +
    +

    {frontmatter.sections.cta.title}

    +

    {frontmatter.sections.cta.subtitle}

    + + + +
    + {frontmatter.sections.cta.features.map((feature) => ( + {feature} + ))} +
    +
    diff --git a/apps/memoro/apps/landing/src/content/pages/en/about.mdx b/apps/memoro/apps/landing/src/content/pages/en/about.mdx new file mode 100644 index 000000000..aa13e1fc7 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/about.mdx @@ -0,0 +1,231 @@ +--- +title: "About Memoro | Revolutionizing Conversation Documentation" +description: "Learn about Memoro's mission to revolutionize how people document conversations and capture thoughts." +lang: "en" +type: "page" +lastUpdated: 2025-07-23 +sections: + hero: + title: "Revolutionizing Conversation Documentation and Thought Capture" + subtitle: "Memoro is the innovative solution for automated capture, transcription, and summarization of spoken content - Made in Germany." + image: "/images/product_photos/Memoro-App-Students-University-Recording.jpg" + imageAlt: "Students using Memoro for university recordings" + mission: + title: "Our Mission" + description: "As a response to the challenges of manual note-taking and protocol writing, Memoro offers an intuitive solution for automated capture, transcription, and summarization of spoken content. We democratize access to AI-powered documentation tools and enable people worldwide to focus on what matters most - the conversation itself." + features: + title: "Key Features" + items: + - "One-button recording for easy start and stop" + - "Automatic transcription with speaker recognition" + - "AI-powered summarization and extraction of tasks, appointments, and insights" + - "Multilingual support: 24 languages and 2 dialects" + - "Industry-specific blueprints for different professional groups" + timeline: + title: "Our Journey" + items: + - date: "2023" + title: "The Idea is Born" + description: "From frustration with manual protocol writing, the vision of Memoro emerges." + - date: "Spring 2024" + title: "First Beta Version" + description: "Release of beta with basic recording and transcription features." 
+ - date: "Summer 2024" + title: "Mobile Apps" + description: "Launch of native iOS and Android apps on App Store and Google Play Store." + - date: "Fall 2024" + title: "AI Enhancements" + description: "Introduction of intelligent summaries and industry-specific blueprints." + - date: "Winter 2024" + title: "800 Active Users" + description: "Milestone reached with estimated time savings of 2-6 hours per week per user." + team: + title: "Meet the Team" + members: + - name: "Nils Weiser" + role: "CTO" + description: "Co-Founder of Codify AG and experienced Full-Stack Developer. CTO at Memoro since May 2025, bringing expertise in AI Agents, modern tech stacks, and 'Vibe Coding'." + image: "/images/team/Memoro-Team-Portrait-NilsWeiser.jpg" + slug: "nils-weiser" + social: + linkedin: "https://linkedin.com/in/nils-weiser" + github: "https://github.com/nilsweiser" + - name: "Tobias Müller" + role: "CTO & Full-Stack Developer" + description: "Expert in modern web technologies with focus on AI-powered assistants and cloud architectures." + image: "/images/team/Memoro-Team-Portrait-TobiasMueller.jpg" + slug: "tobias-mueller" + social: + linkedin: "https://linkedin.com/in/tobischneider" + cta: + text: "Meet the entire team" + link: "/en/team" + values: + title: "Our Values" + items: + - title: "Made in Germany" + description: "Developed to the highest security standards with GDPR-compliant data storage exclusively in Germany." + - title: "Inclusivity & Accessibility" + description: "Breaking down language barriers through multilingualism and support for people with diverse needs." + - title: "Time Savings & Efficiency" + description: "Reducing documentation work by up to 75% - more time for actual work." + callToAction: + title: "Ready to Transform Your Learning Experience?" + description: "Join thousands of users who are already benefiting from Memoro's AI-powered learning platform." 
+ buttonText: "Get Started Now" + buttonLink: "/en/download" + stats: + title: "Memoro by the Numbers" + items: + - number: "800+" + label: "Active Users" + - number: "24" + label: "Supported Languages" + - number: "2-6h" + label: "Time Saved per Week" + - number: "75%" + label: "Less Documentation Effort" +--- + +import Timeline from "../../../components/Timeline.astro"; +import TeamMemberCard from "../../../components/TeamMemberCard.astro"; +import { Image } from 'astro:assets'; + +{/* Hero Image */} +
    + {frontmatter.sections.hero.imageAlt} +
    + +{/* Mission Section */} + +
    +
    +

    + {frontmatter.sections.mission.title} +

    +

    + {frontmatter.sections.mission.description} +

    +
    +
    + Memoro app for conference recordings +
    +
    + +{/* Image Gallery */} +
    + Memoro recording screen + Memoro transcript view + Memoro Mana system +
    + +{/* Timeline Section */} + +
    +

    + {frontmatter.sections.timeline.title} +

    + +
    + +{/* Team Section */} + +
    +

    + {frontmatter.sections.team.title} +

    +
    + {frontmatter.sections.team.members.map((member) => ( + + ))} +
    + +
    + +{/* Values Section */} + +
    +

    + {frontmatter.sections.values.title} +

    +
    + {frontmatter.sections.values.items.map((value) => ( +
    +

    {value.title}

    +

    {value.description}

    +
    + ))} +
    +
    + +{/* Stats Section */} +
    +

    + {frontmatter.sections.stats.title} +

    +
    + {frontmatter.sections.stats.items.map((stat) => ( +
    +
    {stat.number}
    +
    {stat.label}
    +
    + ))} +
    +
    + +{/* Company Culture Image */} +
    + Secure data center for privacy protection +
    {" "} diff --git a/apps/memoro/apps/landing/src/content/pages/en/automatische-meetingnotizen.mdx b/apps/memoro/apps/landing/src/content/pages/en/automatische-meetingnotizen.mdx new file mode 100644 index 000000000..2aa8db60d --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/automatische-meetingnotizen.mdx @@ -0,0 +1,358 @@ +--- +title: "Automatic Meeting Notes 2025 - AI Creates Perfect Meeting Protocols | Memoro" +description: "Never forget meeting notes again ► AI automatically creates protocols ✓ Recognize action items ✓ GDPR-compliant ✓ Try for free now!" +keywords: ["automatic meeting notes", "meeting notes automatic", "meeting notes software", "meeting protocol automatic", "ai meeting notes", "automatic meeting protocol", "meeting notes ai", "automatically create protocol"] +lang: en +type: product +lastUpdated: 2025-01-09 +sections: + hero: + title: "Automatic Meeting Notes - Never Forget Anything Again" + subtitle: "AI automatically creates structured notes and action items during your meeting" + cta: "Try for free" + features: + title: "Why automatic meeting notes are the future" + items: ["Action items automatically", "Structured protocols", "Real-time capture", "Team integration"] +ogImage: "/images/og/automatische-meetingnotizen.png" +canonical: "https://memoro.ai/en/automatische-meetingnotizen" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; + +# Automatic Meeting Notes - Focus on What Matters + +**80% of meeting content gets lost:** Studies show that participants remember only 20% of discussed points after one week. Action items are forgotten, decisions disappear into thin air. + +## The Problem with Manual Meeting Notes + +
    +
    +

    📝 Incomplete Notes

    +

While taking notes, you miss important details

    +
    +
    +

⏰ Time Spent After the Meeting

    +

    30 min meeting = 15 min post-processing

    +
    +
    +

    🔍 Hard to Search

    +

    Handwritten notes are not searchable

    +
    +
    +

    👥 Only One Person Takes Notes

    +

    The "note-taker" cannot fully participate

    +
    +
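The "hard to search" problem above is worth making concrete: once a meeting exists as a text transcript, finding a discussion point is a one-line filter, which handwritten notes can never offer. The following is a minimal illustrative sketch; the `TranscriptLine` shape, the sample lines, and the `search` helper are invented for this example and are not Memoro's actual data model:

```typescript
// Illustrative sketch: a transcribed meeting is trivially searchable.
// Types and sample data are hypothetical, not Memoro's real schema.

interface TranscriptLine {
  speaker: string;
  text: string;
}

const transcript: TranscriptLine[] = [
  { speaker: "Sarah", text: "Launch is delayed by two weeks." },
  { speaker: "Michael", text: "The backend API needs load testing." },
  { speaker: "Lisa", text: "Designer briefing goes out on Friday." },
];

// Case-insensitive full-text search over all transcript lines.
function search(lines: TranscriptLine[], term: string): TranscriptLine[] {
  const t = term.toLowerCase();
  return lines.filter((l) => l.text.toLowerCase().includes(t));
}

console.log(search(transcript, "API")); // finds Michael's line about the backend API
```

The same filter works across months of archived meetings, which is the practical difference between a transcript and a notebook.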
    + +## Automatic Meeting Notes with AI + +### How It Works - The 3-Step Process + +
    +
    +
    +
    + +
    +

    1. Record Meeting

    +

    Simply press "Start" - AI listens

    +
    +
    +
    + +
    +

    2. AI Analyzes

    +

    Recognizes topics, decisions & action items

    +
    +
    +
    + +
    +

    3. Generate Notes

    +

Structured notes delivered to all participants

    +
    +
    +
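Step 2 above — recognizing decisions and action items in the transcript — can be illustrated with a deliberately simple heuristic. This sketch uses keyword cues only; the cue phrases, labels, and `classifySentence` function are invented for illustration, and a production system would rely on a trained language model rather than pattern matching:

```typescript
// Hypothetical sketch of sentence classification for meeting notes.
// Keyword cues stand in for the AI analysis described in step 2.

type SentenceLabel = "decision" | "action-item" | "note";

function classifySentence(sentence: string): SentenceLabel {
  const s = sentence.toLowerCase();
  // Decision cues, e.g. "We have decided that ..."
  if (/\b(we (have )?decided|decision|approved)\b/.test(s)) return "decision";
  // Action-item cues, e.g. "Thomas takes over ... by next week"
  if (/\b(takes over|will do|by (next week|monday|friday)|deadline)\b/.test(s)) {
    return "action-item";
  }
  return "note";
}

const transcript = [
  "We have decided that the launch moves to week 15.",
  "Thomas takes over the API cleanup by next week.",
  "User feedback has been very positive so far.",
];

console.log(transcript.map(classifySentence)); // ["decision", "action-item", "note"]
```

Each labeled sentence then feeds the structured output shown below (decisions, action items, key points).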
    + +### What Makes Our Automatic Meeting Notes Special? + +#### 🎯 **Intelligent Content Recognition** + +The AI automatically recognizes: +- **Decisions:** "We have decided that..." +- **Action Items:** "Thomas takes over by next week..." +- **Important Points:** "This is critical for project completion" +- **Deadlines:** "Deadline is March 15th" +- **Responsible:** Who does what by when + +#### 📊 **Structured Output** + +``` +Meeting: Product Planning Q2 2025 +Date: 09.01.2025, 2:00-3:30 PM +Participants: Sarah (PM), Michael (Dev), Lisa (Design) + +🎯 DECISIONS: +• Launch delayed by 2 weeks to week 15 +• Budget for external designers approved +• Feature X removed from V1 + +✅ ACTION ITEMS: +• Sarah: Adjust timeline (by 12.01.) +• Michael: Complete backend API (by 20.01.) +• Lisa: Create designer briefing (by 10.01.) + +💡 KEY POINTS: +• User feedback very positive (4.8/5 stars) +• Performance tests show optimization needs +• Marketing campaign can start as planned + +📅 NEXT MEETING: +16.01.2025, 2:00 PM - Final review before launch +``` + +## Use Cases for Automatic Meeting Notes + +### 🏢 **Project Management** + +**Automatically document all project updates:** +- Document stand-up meetings +- Evaluate sprint reviews +- Structure stakeholder updates +- Record risk discussions + +*"Since we use Memoro, no action items get lost anymore. 
Our project backlog is always up to date and everyone knows what to do."* - **Anna M., Project Lead** + +### 💼 **Sales & Customer Service** + +**Professionally document customer interactions:** +- Document customer meetings +- Record requirements conversations +- Automatically create follow-up actions +- Generate deal updates for CRM + +### 🎓 **Education & Training** + +**Record learning progress and decisions:** +- Document doctoral conversations +- Record training measures +- Evaluate mentoring sessions +- Structure research group meetings + +## Unique Memoro Features + +### 🤖 **AI Assistant for Follow-ups** + +**Automatic follow-up actions:** +- Email drafts for action items +- Calendar entries for deadlines +- Reminders for responsible parties +- Status updates for stakeholders + +### 📋 **Meeting Templates for Every Occasion** + +**15+ pre-made templates:** +- Daily Stand-up +- Sprint Planning +- Customer Consultation +- Board Meeting +- One-on-One +- Brainstorming +- Project Kickoff +- Retrospective + +### 🔗 **Seamless Team Integration** + +**Direct into your tools:** +- **Slack:** Automatically post notes to channel +- **Microsoft Teams:** Integration in team chat +- **Email:** Automatic sending to participants +- **Jira:** Create action items as tickets +- **Trello:** Automatically generate cards + +## ROI Calculator: Time Savings Through Automatic Notes + +
    +

    Example: Team of 8 People

    + +
    +
    +

    Before (Manual Notes)

    +
      +
• 5 meetings/week × 60 min
• 1 person takes notes (not fully present)
• 20 min post-processing/meeting
• 15 min distribution/meeting
• 175 min/week effort
    +
    +
    +

    After (Memoro)

    +
      +
• Automatic capture
• Everyone can fully participate
• 2 min review/meeting
• Automatic distribution
• 10 min/week effort
    +
    +
    + +
    +

    Time Savings: 165 min/week (94%)

    +

    = 2.75 hours more for productive work
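The arithmetic behind these figures can be reproduced in a few lines, using the numbers from the example above (5 meetings per week, 20 min post-processing plus 15 min distribution each before, 2 min review each after). The helper name is invented for illustration:

```typescript
// Reproduces the example calculation: weekly note-taking effort
// before and after automatic notes, for a team with 5 meetings/week.

function weeklyEffortMinutes(meetings: number, minutesPerMeeting: number): number {
  return meetings * minutesPerMeeting;
}

const before = weeklyEffortMinutes(5, 20 + 15); // 175 min/week (post-processing + distribution)
const after = weeklyEffortMinutes(5, 2);        // 10 min/week (review only)
const saved = before - after;                   // 165 min/week
const pct = Math.round((saved / before) * 100); // 94 %

console.log(`${saved} min/week saved (${pct}%), i.e. ${saved / 60} h/week regained`);
```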

    +
    +
    + +## Before vs. After + +
    +
    +

    ❌ Without Automatic Notes

    +
      +
• Note-taker is busy writing and distracted
• Important points are missed
• After the meeting: 20 min of note processing
• Action items unclear or forgotten
• Total: 95 min effort
    +
    +
    +

    ✅ With Automatic Notes

    +
      +
• Everyone can fully participate
• Nothing is forgotten or missed
• After the meeting: a 2 min review suffices
• Action items are clear, with owners assigned
• Total: 62 min effort
    +
    +
    + +## Success Stories + +### Case Study: Software Development Team + +**Challenge:** 12-person dev team with daily stand-ups and weekly plannings. Many action items were lost. + +**Before:** +- 30 min/day for notes and follow-ups +- 20% of action items forgotten +- Unclear responsibilities +- Time effort: 2.5h/week + +**With Automatic Meeting Notes:** +- 5 min/day for review +- 0% forgotten action items +- Automatic Jira tickets +- Time effort: 25 min/week + +**Results:** +- **90% time savings** in meeting administration +- **40% fewer follow-up meetings** needed +- **25% higher sprint velocity** through clearer tasks + +## Pricing & Packages + +
    +
    +

    Starter

    +

    Free

    +

    600 minutes/month

    +
      +
✅ Automatic notes
✅ Action item recognition
✅ 5 meeting templates
✅ Email export
✅ 30 days storage
    +
    +
    +
    + For Teams +
    +

    Professional

    +

    €29/month

    +

    1,500 minutes/month

    +
      +
✅ Everything from Starter
✅ 15+ meeting templates
✅ Slack/Teams integration
✅ Automatic reminders
✅ CRM integration
✅ Unlimited storage
    +
    +
    +

    Enterprise

    +

    €99/month

    +

    Unlimited minutes

    +
      +
✅ Everything from Professional
✅ Unlimited meetings
✅ Custom templates
✅ API access
✅ White-label option
✅ Dedicated support
    +
    +
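The three tiers above differ mainly in their monthly minute quota (600, 1,500, unlimited), so choosing a plan from expected usage is a simple lookup. This is a hypothetical helper sketched from the table, not part of any Memoro API; all names are invented:

```typescript
// Hypothetical helper: pick the cheapest plan that covers the
// expected recording minutes per month (quotas from the table above).

interface Plan {
  name: string;
  monthlyEUR: number;
  minutes: number;
}

const plans: Plan[] = [
  { name: "Starter", monthlyEUR: 0, minutes: 600 },
  { name: "Professional", monthlyEUR: 29, minutes: 1500 },
  { name: "Enterprise", monthlyEUR: 99, minutes: Infinity },
];

function pickPlan(expectedMinutes: number): Plan {
  // Plans are ordered by price, so the first quota that fits is the cheapest.
  return plans.find((p) => p.minutes >= expectedMinutes)!;
}

console.log(pickPlan(500).name);  // "Starter"
console.log(pickPlan(1200).name); // "Professional"
console.log(pickPlan(5000).name); // "Enterprise"
```

As a rule of thumb: a team with five one-hour meetings per week records roughly 1,200 minutes per month, which lands in the Professional tier.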
    + +## Frequently Asked Questions + + + +## Start with Automatic Meeting Notes Now + +
    +

    Ready for More Efficient Meetings?

    +

    + Start today and experience how automatic notes improve your meeting culture. +

    +
    + + +
    +

    + 600 min free • No credit card • For all team sizes +

    +
    + +--- + +*Last update: January 2025* diff --git a/apps/memoro/apps/landing/src/content/pages/en/blog.mdx b/apps/memoro/apps/landing/src/content/pages/en/blog.mdx new file mode 100644 index 000000000..bda1cb5fd --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/blog.mdx @@ -0,0 +1,43 @@ +--- +title: "Blog | Memoro" +description: "News, Updates, and Insights from Memoro" +lang: "en" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Blog" + subtitle: "News, Updates, and Insights from Memoro" +--- + +import BlogCard from "../../../components/BlogCard.astro"; + +export const BlogContent = ({ posts, lang }) => ( + <> +
    +

    + {frontmatter.sections.hero.title} +

    +

    + {frontmatter.sections.hero.subtitle} +

    +
    + +
    + {posts.map((post) => ( + + ))} +
    + + +); + +{" "} diff --git a/apps/memoro/apps/landing/src/content/pages/en/board-meeting-minutes-software.mdx b/apps/memoro/apps/landing/src/content/pages/en/board-meeting-minutes-software.mdx new file mode 100644 index 000000000..6ba0b9563 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/board-meeting-minutes-software.mdx @@ -0,0 +1,443 @@ +--- +title: "Board Meeting Minutes Software | Legally Compliant & GDPR-Compliant | Memoro" +description: "✓ Automatically create legally compliant board minutes ✓ GDPR-compliant ✓ European servers ✓ Legal recording consent ✓ Speaker recognition ► Try free now!" +lang: "en" +type: "landing" +lastUpdated: 2025-01-30 +sections: + hero: + title: "Legally Compliant Board Minutes – Automatic, Confidential, GDPR-Compliant" + subtitle: "Memoro documents your board meetings with highest confidentiality. AI-powered protocol creation on European servers – legally compliant and GDPR-certified." + image: "/images/industries/Office-Businessman-Recording-Memoro-AI-App-Transcription.png" + imageAlt: "Memoro Board Meeting Minutes Software - Automatic GDPR-compliant Protocol Creation" + cta: + primary: + text: "Try for free" + link: "/en/download" + secondary: + text: "Book demo for boards" + link: "/en/contact" + trustBadges: + - icon: "🇪🇺" + text: "European Servers" + - icon: "🔒" + text: "ISO 27001" + - icon: "✅" + text: "GDPR-compliant" + - icon: "⚖️" + text: "Legal Compliance" + problems: + title: "The Challenges of Board Meeting Documentation" + items: + - icon: "⏱️" + title: "Time Effort: 2-4 Hours per Meeting" + description: "Manual documentation ties up valuable resources that could be used more productively." + - icon: "🔐" + title: "Confidentiality at Risk" + description: "External transcription services with US servers pose a security risk for sensitive board information." 
+ - icon: "⚖️" + title: "Legal Liability Risks" + description: "Missing or incomplete minutes can have legal consequences and violate documentation requirements." + - icon: "📝" + title: "Loss of Important Details" + description: "Manual notes miss critical resolutions, votes, and discussion points." + solution: + title: "Memoro: Your Solution for Legally Compliant Board Minutes" + subtitle: "Document board meetings completely and legally – automatic, confidential, on European servers" + steps: + - number: "1" + title: "Record Legally Compliant" + description: "Integrated consent management. All participants are automatically informed and documented." + icon: "📝" + - number: "2" + title: "AI Analyzes Confidentially" + description: "Automatic speaker recognition, resolution detection, and structuring – all on European servers with zero-knowledge encryption." + icon: "🔒" + - number: "3" + title: "Archive Minutes Securely" + description: "Audit-proof protocol with audit trail, versioning, and granular access rights." 
+ icon: "✅" + usps: + title: "Developed for Board Meetings" + items: + - icon: "🔒" + title: "Maximum Confidentiality" + description: "Zero-knowledge encryption, European servers, no third-party access to your minutes" + - icon: "⚖️" + title: "Legal Certainty Guaranteed" + description: "GDPR-compliant, integrated consent management, complete audit trails, audit-proof archiving" + - icon: "🎯" + title: "Precise Board Minutes" + description: "Automatic speaker recognition, resolution detection & vote documentation, action item tracking, attendance lists" + - icon: "⚡" + title: "Save Time & Resources" + description: "90% time savings in protocol creation, minutes available immediately after meeting, integration with board management tools" + features: + title: "Features for Professional Board Work" + items: + - icon: "🌍" + title: "Multilingual Transcription" + description: "Documentation of international board meetings in 80+ languages" + - icon: "👥" + title: "20+ Speaker Recognition" + description: "Clear attribution: who said and decided what" + - icon: "📋" + title: "Custom Templates" + description: "Protocol templates according to corporate guidelines" + - icon: "✅" + title: "Resolution Detection" + description: "AI automatically identifies resolutions and to-dos" + - icon: "🔐" + title: "Granular Access Rights" + description: "Only authorized persons see confidential content" + - icon: "📑" + title: "Versioning" + description: "All changes traceable (compliance)" + - icon: "📤" + title: "Export Options" + description: "Word, PDF, board portal integration" + - icon: "📱" + title: "Offline Mode" + description: "Also usable in closed rooms" + comparison: + title: "Memoro vs. Traditional Solutions vs. 
US Providers" + subtitle: "Why Memoro is the best choice for board meetings" + competitors: + - name: "Memoro" + features: + - "✅ European Servers" + - "✅ GDPR 100%" + - "✅ Legal Consent" + - "✅ Zero-Knowledge" + - "✅ 20+ Speakers" + - "✅ European Support" + - "⚡ 10 Min" + highlight: true + - name: "Manual Documentation" + features: + - "— Internal" + - "✅ Yes" + - "⚠️ Manual" + - "✅ Internal" + - "❌ No" + - "— N/A" + - "⏱️ 2-4h" + - name: "US Tools (Fathom, Fireflies)" + features: + - "❌ US Servers" + - "⚠️ Limited" + - "❌ Not Covered" + - "❌ US Access" + - "✅ Yes" + - "❌ English" + - "⚡ 15 Min" + useCases: + title: "Memoro in Action" + cases: + - title: "Public Company Board Meeting" + description: "Publicly traded corporation: 7 board members, 3h meeting, highly sensitive strategic topics. Minutes legally compliant within 24h." + result: "Minutes finalized in 15 min, all resolutions documented, secure distribution" + icon: "🏢" + - title: "Foundation Board" + description: "Non-profit foundation: Board meeting with 5 members, compliance requirements for audits." + result: "Audit-proof minutes, complete documentation for auditors" + icon: "🏛️" + - title: "LLC Shareholder Meeting" + description: "Mid-sized LLC: Shareholder meeting with resolutions on investments." + result: "Legally compliant vote documentation, action items automatically assigned" + icon: "🤝" + testimonials: + title: "What Board Members Say About Memoro" + items: + - quote: "As a board member, I bear high responsibility for documentation. Memoro gives me certainty that every resolution is completely documented – legally compliant and confidential." + author: "Dr. M. Schmidt" + role: "Board Member, Mid-sized Corporation" + rating: 5 + - quote: "GDPR compliance is non-negotiable for us. Memoro on European servers was the only option that meets our strict compliance requirements." 
+ author: "Corporate Secretary" + role: "DAX Corporation" + rating: 5 + - quote: "Automatic resolution detection saves us enormous time. What used to cost 3 hours of post-processing is now done in 10 minutes." + author: "Julia R." + role: "Foundation Board" + rating: 5 + compliance: + title: "Highest Security Standards for Board Communication" + technical: + title: "🔐 Technical Security" + items: + - "End-to-end encryption (AES-256)" + - "ISO 27001 certified data centers" + - "Regular penetration tests" + - "Zero-knowledge architecture" + legal: + title: "⚖️ Legal Compliance" + items: + - "GDPR Art. 32 compliant processing" + - "Integrated consent management" + - "Data processing agreement (DPA) included" + - "Data protection impact assessment available" + sovereignty: + title: "🇪🇺 Data Sovereignty" + items: + - "Server location: Frankfurt/Munich" + - "No data transfer outside EU" + - "European jurisdiction" + - "SCHREMS II compliant" + roi: + title: "Calculate Your Time Savings" + subtitle: "Determine how much time and money you save with automatic board minutes" + faq: + title: "Frequently Asked Questions from Board Members" + items: + - question: "Is Memoro legally compliant for board meetings?" + answer: "Yes. Memoro integrates consent management for meeting recordings. Before each recording, all participants are informed and must consent – documented and verifiable. This meets legal requirements for recording conversations." + - question: "Where are our confidential board minutes stored?" + answer: "Exclusively on ISO 27001-certified servers in Europe (Frankfurt/Munich). No third-party access, not even by Memoro employees thanks to zero-knowledge architecture. Your data never leaves the EU." + - question: "How does speaker recognition work with changing participants?" + answer: "Memoro automatically recognizes up to 20+ different speakers. You can create profiles for recurring board members that the system learns. 
For new guests, assignment is done retrospectively in just 2 minutes." + - question: "Is integration with our board portal (e.g., Diligent, BoardEffect) possible?" + answer: "Yes, Memoro offers API integrations and export formats for common board management systems like Diligent, BoardEffect, and others. Minutes can be transferred directly to your existing infrastructure." + - question: "What happens if the internet connection fails during the meeting?" + answer: "Memoro works completely offline. Recording continues uninterrupted. Synchronization and AI processing happen automatically at the next internet connection." + - question: "Can we use our own protocol templates for our corporate guidelines?" + answer: "Yes, you can store any number of your own protocol templates, including your branding, desired structure, and legal clauses. Our templates are fully customizable." + - question: "How long are board minutes archived?" + answer: "You determine the retention period according to your individual compliance requirements. Memoro offers audit-proof long-term archiving with complete audit trail documentation for 10+ years." + - question: "Who has access to the board minutes?" + answer: "You define granularly who may see which minutes. Access rights can be granted per protocol, participant role, or committee. All access is logged (audit trail)." + - question: "How quickly are minutes available after a board meeting?" + answer: "For a typical 2-3 hour board meeting, the complete minutes are available 5-10 minutes after the end of recording. You can then immediately begin post-processing and approval." 
+ cta: + title: "See for Yourself – Free and Non-binding" + subtitle: "Test Memoro with your next board meeting – GDPR-compliant and confidential" + button: + text: "Try 30 days free" + link: "/en/download" + secondaryButton: + text: "Book personal demo" + link: "/en/contact" + features: + - "✓ No credit card required" + - "✓ Full feature set" + - "✓ GDPR-compliant" + - "✓ European enterprise support" +--- + +import HeroSection from "../../../components/HeroSection.astro"; +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import ROICalculator from "../../../components/ROICalculator.astro"; + + + +

    {frontmatter.sections.problems.title}

    + +
    + {frontmatter.sections.problems.items.map((item) => ( +
    +
    {item.icon}
    +

    {item.title}

    +

    {item.description}

    +
    + ))} +
    + +

    {frontmatter.sections.solution.title}

    + +

    {frontmatter.sections.solution.subtitle}

    + +
    + {frontmatter.sections.solution.steps.map((step) => ( +
    +
    +
    + {step.number} +
    +
    {step.icon}
    +
    +

    {step.title}

    +

    {step.description}

    +
    + ))} +
    + +

    {frontmatter.sections.usps.title}

    + +
    + {frontmatter.sections.usps.items.map((usp) => ( +
    +
    {usp.icon}
    +

    {usp.title}

    +

    {usp.description}

    +
    + ))} +
    + +

    {frontmatter.sections.features.title}

    + +
    + {frontmatter.sections.features.items.map((feature) => ( +
    +
    {feature.icon}
    +

    {feature.title}

    +

    {feature.description}

    +
    + ))} +
    + +

    {frontmatter.sections.comparison.title}

    + +

    {frontmatter.sections.comparison.subtitle}

    + +
    + + + + + {frontmatter.sections.comparison.competitors.map((comp) => ( + + ))} + + + + {["Server Location", "GDPR Compliance", "Legal Consent", "Confidentiality", "Speaker Recognition", "Support", "Time Effort"].map((criterion, index) => ( + + + {frontmatter.sections.comparison.competitors.map((comp) => ( + + ))} + + ))} + +
    Criteria + {comp.name} +
    {criterion} + {comp.features[index]} +
    +
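The comparison table above pairs each criterion label with the feature string at the same index in every competitor's `features` list. A minimal TypeScript sketch of that positional pairing (the `Competitor` type and `buildRows` name are illustrative, not taken from the component):

```typescript
// Illustrative shape for the frontmatter's comparison data.
interface Competitor {
  name: string;
  features: string[]; // one entry per criterion, in the same order as `criteria`
}

const criteria = [
  "Server Location", "GDPR Compliance", "Legal Consent",
  "Confidentiality", "Speaker Recognition", "Support", "Time Effort",
];

// Build one row per criterion: [criterion, cell for competitor 1, ...],
// mirroring the comp.features[index] lookup in the table body.
function buildRows(competitors: Competitor[]): string[][] {
  return criteria.map((criterion, index) => [
    criterion,
    ...competitors.map((comp) => comp.features[index] ?? "—"),
  ]);
}
```

Because the lookup is purely positional, every competitor's `features` array must list its entries in exactly the order of `criteria`; a missing entry falls back here to the "—" placeholder the table already uses.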
    + +

    {frontmatter.sections.useCases.title}

    + +
    + {frontmatter.sections.useCases.cases.map((useCase) => ( +
    +
    {useCase.icon}
    +

    {useCase.title}

    +

    {useCase.description}

    +
    +

    Result:

    +

    {useCase.result}

    +
    +
    + ))} +
    + +

    {frontmatter.sections.testimonials.title}

    + +
    + {frontmatter.sections.testimonials.items.map((testimonial) => ( +
    +
    + {[...Array(testimonial.rating)].map(() => ( + + ))} +
    +

    "{testimonial.quote}"

    +
    +

    {testimonial.author}

    +

    {testimonial.role}

    +
    +
    + ))} +
    + +

    {frontmatter.sections.compliance.title}

    + +
    +
    +

    {frontmatter.sections.compliance.technical.title}

    +
      + {frontmatter.sections.compliance.technical.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    + +
    +

    {frontmatter.sections.compliance.legal.title}

    +
      + {frontmatter.sections.compliance.legal.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    + +
    +

    {frontmatter.sections.compliance.sovereignty.title}

    +
      + {frontmatter.sections.compliance.sovereignty.items.map((item) => ( +
    • + + {item} +
    • + ))} +
    +
    +
    + + + + + +
    +

    {frontmatter.sections.cta.title}

    +

    {frontmatter.sections.cta.subtitle}

    + + + +
    + {frontmatter.sections.cta.features.map((feature) => ( + {feature} + ))} +
    +
    diff --git a/apps/memoro/apps/landing/src/content/pages/en/business-voice-recording-app.mdx b/apps/memoro/apps/landing/src/content/pages/en/business-voice-recording-app.mdx new file mode 100644 index 000000000..c619281f4 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/business-voice-recording-app.mdx @@ -0,0 +1,577 @@ +--- +title: "Business Voice Recording App 2025 - Professional Audio Documentation | Memoro" +description: "The business voice recording app for professional documentation ► Offline mode ✓ Automatic transcription ✓ GDPR-compliant ✓ For field sales & teams" +keywords: ["business voice recording app", "business voice recording", "dictation app enterprise", "audio recording business", "voice memo business", "voice recording app business", "corporate voice recording", "mobile documentation"] +lang: en +type: product +lastUpdated: 2025-01-09 +sections: + hero: + title: "Voice Recording App for Business - Document Professionally, Anywhere" + subtitle: "From field sales to boardroom - Capture important conversations and ideas professionally" + cta: "Try app for free" + features: + title: "Why Memoro is the leading business voice recording app" + items: ["Offline recording", "Automatic transcription", "End-to-end encryption", "Team integration"] +ogImage: "/images/og/sprachaufnahme-app-business.png" +canonical: "https://memoro.ai/en/business-voice-recording-app" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; + +
    +
    +

    + Business Voice Recording App - Document Professionally, Everywhere +

    +

    + Capture customer appointments, ideas, and important conversations professionally. + With offline mode, automatic transcription, and highest security. +

    +
    + + +
    +
    +
    + + Works offline +
    +
    + + Encrypted & secure +
    +
    + + Instant transcription +
    +
    +
    +
    + +## The Problem with Conventional Voice Recording Apps + +**Unprofessional, insecure, unusable:** Most voice recording apps are designed for private notes. In business contexts, important features like encryption, transcription, and team integration are missing. + +### The 6 Biggest Problems of Conventional Voice Apps: + +
    +
    +

    🔓 No Encryption

    +

    Sensitive business content stored unprotected

    +
    +
    +

    📱 Only Audio Files

    +

    No automatic transcription - hours of manual work

    +
    +
    +

    👥 No Team Features

    +

    Recordings remain isolated on one device

    +
    +
    +

    🌐 Internet-Dependent

    +

    Doesn't work in field sales without connection

    +
    +
    +

    🔍 Not Searchable

    +

    Important content disappears in audio archives

    +
    +
    +

    ⚖️ GDPR Problems

    +

    Unclear data protection with US providers

    +
    +
    + +## The Memoro Business Voice Recording App + +### Developed for Professional Requirements + +
    +
    +
    +
    + +
    +

    Record

    +

    HD quality even offline

    +
    +
    +
    + +
    +

    Encrypt

    +

    End-to-end secure

    +
    +
    +
    + +
    +

    Transcribe

    +

    AI creates text

    +
    +
    +
    + +
    +

    Share

    +

    Team & CRM integration

    +
    +
    +
    + +### Business Features Overview + +#### 📱 **Offline-First Design** +- Recordings work without internet +- Automatic synchronization when connected +- No more missed conversations +- Perfect for field sales & travel + +#### 🔐 **Enterprise-Grade Security** +- End-to-end encryption (AES-256) +- European servers (Frankfurt) +- Zero-knowledge architecture +- GDPR-compliant by design + +#### 🤖 **AI-Powered Transcription** +- 98% accuracy for English content +- Speaker recognition for meetings +- Automatic summaries +- Action items & keywords + +#### 👥 **Team Collaboration** +- Share recordings with team +- Add comments & notes +- Permission management +- Workflow integration + +## Business Voice Recording Apps Comparison + + + +## Use Cases for Business Voice Recording + +### 🚗 **Field Sales & Sales** + +**Document customer appointments professionally:** +- Capture consulting conversations completely +- Note follow-up actions immediately +- Collect customer feedback in structured way +- Use travel time productively + +*"As an insurance sales rep, I'm with customers daily. 
With Memoro, I can discreetly record all conversations and have the perfect basis for proposals later."* - **Peter S., Field Sales Manager** + +#### Typical Workflow: +``` +08:00 - Drive to customer + → Day planning via voice memo + +09:30 - Customer appointment + → Discreetly record conversation + +10:30 - Drive to next appointment + → Transcript already available + → Dictate follow-ups via voice + +11:00 - Back at office + → CRM automatically updated +``` + +### 💼 **Consulting & Advisory** + +**Optimize consulting meetings:** +- No note-taking during important discussions +- Complete documentation for compliance +- Precise basis for billing +- Knowledge management for the team + +#### Case Study: Business Consulting + +**Situation:** 15 senior consultants, 200+ customer appointments/month +**Challenge:** Protocols necessary for liability & billing +**Solution:** Business voice recording with automatic transcription + +**Before:** +- 30 min post-processing per 60 min meeting +- Incomplete protocols +- 50h/week for administration + +**With Memoro:** +- 5 min review per meeting +- Complete, searchable protocols +- 12h/week for administration + +**ROI:** 76% time savings = 38h/week × €75 = €2,850/week saved + +### 🏗️ **Project & Construction Management** + +**Document construction progress and defects:** +- Site inspections with voice memos +- Create defect lists via audio +- Record subcontractor briefings +- Document safety instructions + +### 🎓 **Training & Quality Assurance** + +**Optimize training and reviews:** +- Use customer conversations for training +- Collect employee feedback in structured way +- Build audio library of best practices +- Document quality controls + +### 📈 **Management & Leadership** + +**Digitize executive workflows:** +- Fully capture strategy discussions +- Board meeting preparation +- Delegation briefings via audio +- Capture spontaneous ideas immediately + +## Unique Business Features + +### 🔐 **Enterprise Security Standards** + 
+**Bank-grade encryption:** +- AES-256 encryption in real-time +- Zero-knowledge server architecture +- Local key generation +- Automatic deletion after X days + +**Compliance & Audit:** +- ISO 27001 certified infrastructure +- GDPR data processing agreement +- Audit logs for all actions +- Works council compliant deployment + +### 📊 **Business Intelligence** + +**Insights from your recordings:** +- Identify most frequent customer inquiries +- Analyze conversation duration and quality +- Recognize keyword trends +- Evaluate team performance + +### 🔗 **Seamless Integration** + +**Into your existing tools:** + +#### CRM Systems +- **Salesforce:** Recordings as activities +- **HubSpot:** Automatic contact assignment +- **Pipedrive:** Deals with audio notes +- **Microsoft Dynamics:** Full integration + +#### Project Management +- **Asana:** Tasks from action items +- **Monday.com:** Projects with audio updates +- **Notion:** Knowledge base with transcripts + +#### Team Communication +- **Slack:** Audio summaries in channels +- **Microsoft Teams:** Integration in chats +- **Email:** Automatic sending + +## Mobile Apps for All Platforms + +### 📱 **iOS App Features** +- Native iOS integration +- Siri shortcuts for quick recordings +- Apple Watch app for discreet operation +- CarPlay integration for sales +- Background recording possible + +### 🤖 **Android App Features** +- Material Design 3.0 +- Google Assistant integration +- Android Auto support +- Homescreen widget +- Tasker/Automation support + +### 💻 **Desktop & Web** +- Native Windows/Mac apps +- Progressive Web App (PWA) +- Browser extension for quick recordings +- Synchronization across all devices + +## Pricing for Business Customers + +
    +
    +

    Individual User

    +

    €19/month

    +

    Per user

    +
      +
    • ✅ Unlimited recordings
    • +
    • ✅ Automatic transcription
    • +
    • ✅ End-to-end encryption
    • +
    • ✅ Mobile + desktop apps
    • +
    • ✅ Basic CRM integration
    • +
    • ✅ 1 year storage
    • +
    +
    +
    +
    + For Teams +
    +

    Team

    +

    €49/month

    +

    Up to 10 users

    +
      +
    • ✅ Everything from Individual
    • +
    • ✅ Team sharing & comments
    • +
    • ✅ Extended CRM integration
    • +
    • ✅ Admin dashboard
    • +
    • ✅ User management
    • +
    • ✅ Priority support
    • +
    • ✅ 3 years storage
    • +
    +
    +
    +

    Enterprise

    +

    €199/month

    +

    Unlimited users

    +
      +
    • ✅ Everything from Team
    • +
    • ✅ White-label branding
    • +
    • ✅ Custom integrations
    • +
    • ✅ Dedicated support
    • +
    • ✅ SLA guarantees
    • +
    • ✅ On-premise option
    • +
    • ✅ Unlimited storage
    • +
    +
    +
    + +## ROI Calculation for Your Business + +
    +

    Example: 20-person Sales Team

    + +
    +
    +

    Costs without Memoro (per month)

    +
      +
    • • 20 employees × 25 customer appointments
    • +
    • • 30 min post-processing per appointment
    • +
    • • 20 × 25 × 0.5h = 250h post-processing
    • +
    • • 250h × €35 = €8,750/month
    • +
    +
    +
    +

    Costs with Memoro (per month)

    +
      +
    • • Team license: €199
    • +
    • • 5 min review per appointment
    • +
    • • 20 × 25 × 0.08h = 40h review
    • +
    • • 40h × €35 + €199 = €1,599/month
    • +
    +
    +
    + +
    +

    Savings: €7,151/month (82%)

    +

    ROI from first month: 3,593%

    +
    +
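The savings figure above follows directly from the example's inputs. A short TypeScript sketch reproduces the arithmetic (the hourly rate, appointment counts, and €199 license are the page's example values, not fixed product data; note the page rounds 5 minutes to 0.08 h per appointment):

```typescript
// Example inputs from the 20-person sales team scenario above.
const employees = 20;
const appointmentsPerEmployee = 25;   // per month
const hourlyRate = 35;                // € per hour of admin work
const teamLicense = 199;              // € per month (example price)

const hoursBefore = employees * appointmentsPerEmployee * 0.5;   // 250 h (30 min each)
const hoursAfter = employees * appointmentsPerEmployee * 0.08;   // 40 h (~5 min each)

const costBefore = hoursBefore * hourlyRate;                 // €8,750
const costAfter = hoursAfter * hourlyRate + teamLicense;     // €1,599
const savings = costBefore - costAfter;                      // €7,151
const savingsPct = Math.round((savings / costBefore) * 100); // 82 %
const roiPct = Math.round((savings / teamLicense) * 100);    // 3,593 %

console.log({ costBefore, costAfter, savings, savingsPct, roiPct });
```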
    + +## Success Stories from Our Business Customers + +### Case Study: Real Estate Company + +**Customer:** Real estate broker with 25 locations +**Challenge:** 800 viewings/month, inconsistent documentation + +**Before:** +- Handwritten notes during viewings +- 50% of prospect details were lost +- Follow-up appointments forgotten +- 15 min post-processing per viewing + +**With Business Voice Recording:** +- Discreet audio recording during viewing +- Prospect details automatically in CRM +- Follow-up reminders automatic +- 2 min review per viewing + +**Results after 6 months:** +- **87% time savings** in documentation +- **34% higher closing rate** through better follow-up +- **€2.3M additional revenue** from fewer lost leads + +*"Memoro has transformed our business. We don't lose any prospects anymore and our brokers can focus fully on consulting."* - **Julia K., CEO** + +### Case Study: Consulting Boutique + +**Situation:** 12 senior consultants, 150 customer appointments/month +**Problem:** Documentation for liability & billing time-consuming + +**Impact of Business Voice Recording:** +- **Before:** 45 min post-processing per hour meeting +- **After:** 5 min review + automatic documentation +- **Time Savings:** 89% less administrative effort +- **Quality Improvement:** Complete, legally compliant protocols + +**Financial Impact:** +- 40h/week less administration +- 40h × €120 consultant hourly rate = €4,800/week +- **Annual Savings: €249,600** + +## Frequently Asked Questions About Business Voice Recording + + + +## Start Business Voice Recording Now + +
    +

    Revolutionize Your Business Documentation

    +

    + Start today and experience how professional voice recording + improves your productivity and customer communication. +

    +
    + + +
    +

    + No credit card • Ready immediately • For all team sizes +

    +
    + +## App Downloads + +
    +
    + +

    iOS App

    +

    For iPhone & iPad

    + +
    +
    + +

    Android App

    +

    For Android devices

    + +
    +
    + +--- + +*Last update: January 2025 | All prices excl. VAT* diff --git a/apps/memoro/apps/landing/src/content/pages/en/contact.mdx b/apps/memoro/apps/landing/src/content/pages/en/contact.mdx new file mode 100644 index 000000000..4e74970b8 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/contact.mdx @@ -0,0 +1,339 @@ +--- +title: "Contact | Memoro" +description: "Contact us with any questions about Memoro. Our team is here to help." +lang: "en" +type: "contact" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Contact Memoro" + subtitle: "Have questions? We're here to help." + contact: + company: "Memoro GmbH" + street: "Münzgasse 19" + city: "78462 Konstanz" + phone: "0049 176 444 343 85" + email: "kontakt@memoro.ai" + faq: + title: "Frequently Asked Questions" + items: + - question: "How quickly do you respond to inquiries?" + answer: "We strive to respond to all inquiries within 24 hours. Usually, you will receive a response much sooner." + - question: "Is phone support available?" + answer: "Currently, we do not offer phone support. For the best assistance, please contact us via email." + - question: "What information should I include in my support request?" + answer: "To help us assist you better, please include your account email, a clear description of your issue, and any relevant screenshots or error messages." + - question: "Do you offer support in other languages?" + answer: "Yes, we currently offer support in both English and German. Please feel free to write to us in either language." + callToAction: + title: "Ready for Better Meeting Documentation?" + description: "Discover how Memoro makes your meetings more efficient and productive." + buttonText: "Download App" + buttonLink: "/en/download" +--- + +import FAQSection from "../../../components/FAQSection.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import HeroSection from "../../../components/HeroSection.astro"; + + + +
    + {/* Contact Information Cards */} +
    + {/* Office Card */} +
    +
    +
    + + + + +
    +
    +

    Our Office

    +

    + {frontmatter.sections.contact.company}
    + {frontmatter.sections.contact.street}
    + {frontmatter.sections.contact.city} +

    +
    +
    +
    +

    + Please visit us by appointment only. +

    +
    +
    + + {/* Contact Details Card */} +
    +
    + {/* Phone */} +
    +
    + + + +
    +
    +

    Phone

    + + +
    +
    + + {/* Email */} + + + {/* Response Time */} +
    +
    + + + + Response time: Within 24 hours +
    +
    +
    +
    +
    + + {/* Contact Form Section */} +
    +
    +

    + Send Us a Message +

    +
    +
    +
    + + +
    +
    + + +
    +
    + +
    + + +
    + +
    + + +
    + +
    +

    + * Required fields +

    + +
    +
    +
    +
    + + {/* All Contact Options Section */} +
    +

    More Ways to Connect

    +
    + {/* Social Media Card */} + + + {/* App Downloads Card */} +
    +
    +
    + 📱 +
    +

    Memoro App

    +
    + +

    + Download the Memoro app and experience the future of conversation documentation. +

    + + +
    +
    +
    + + + +
    + +
    +
    diff --git a/apps/memoro/apps/landing/src/content/pages/en/faq.mdx b/apps/memoro/apps/landing/src/content/pages/en/faq.mdx new file mode 100644 index 000000000..d748ab71d --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/faq.mdx @@ -0,0 +1,29 @@ +--- +title: "FAQ | Memoro" +description: "Frequently Asked Questions about Memoro - Find answers to all your questions" +lang: "en" +type: "page" +lastUpdated: 2025-07-22 +sections: + hero: + title: "Frequently Asked Questions" + subtitle: "Find answers to the most frequently asked questions about Memoro" + callToAction: + title: "Still have questions?" + description: "Our support team is here to help." + buttonText: "Contact us" + buttonLink: "/en/contact" +--- + +export const FAQContent = ({ faqs }) => ( + <> +
    +

    {frontmatter.sections.hero.title}

    +

    + {frontmatter.sections.hero.subtitle} +

    +
    + +); + + \ No newline at end of file diff --git a/apps/memoro/apps/landing/src/content/pages/en/features.mdx b/apps/memoro/apps/landing/src/content/pages/en/features.mdx new file mode 100644 index 000000000..e9b377dbd --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/features.mdx @@ -0,0 +1,177 @@ +--- +title: "Features | Memoro" +description: "Discover the innovative features of Memoro for efficient learning and working" +lang: "en" +type: "page" +lastUpdated: 2024-02-22 +sections: + hero: + title: "Features" + subtitle: "Discover the innovative features of Memoro for efficient learning and working" + categories: + recording: + title: "Recording" + customization: + title: "Customization" + language: + title: "Languages" + organization: + title: "Organization" + sharing: + title: "Sharing" + faq: + title: "Frequently Asked Questions about Features" + items: + - question: "Which operating systems are supported?" + answer: "Memoro is available for macOS, Windows, and Linux. Additionally, we offer mobile apps for iOS and Android." + - question: "Can I use Memoro offline?" + answer: "Yes, you can use Memoro completely offline. Your notes are stored locally and automatically synchronized when an internet connection is available." + - question: "Is there a limit to the number of notes?" + answer: "No, you can create unlimited notes even in the free version." + - question: "How does AI assistance work?" + answer: "The AI assistance analyzes your notes and makes intelligent suggestions for connections, summaries, and flashcards. This feature is available in the Pro version." + callToAction: + title: "Ready for Better Meeting Documentation?" + description: "Discover how Memoro makes your meetings more efficient and productive." 
+ buttonText: "Download App" + buttonLink: "/en/download" +--- + +import FeatureCard from "../../../components/FeatureCard.astro"; +import CallToAction from "../../../components/CallToAction.astro"; +import FAQSection from "../../../components/FAQSection.astro"; + + + +{/* Helper function to clean the slug */} +export function cleanSlug(slug, lang) { + const langPrefix = `${lang}/`; + return slug.startsWith(langPrefix) ? slug.substring(langPrefix.length) : slug; +} + +
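The `cleanSlug` helper above strips a leading language prefix from a content slug; shown standalone with its behavior (type annotations added for clarity):

```typescript
// Standalone copy of the cleanSlug helper defined in this page.
function cleanSlug(slug: string, lang: string): string {
  const langPrefix = `${lang}/`;
  return slug.startsWith(langPrefix) ? slug.substring(langPrefix.length) : slug;
}

console.log(cleanSlug("en/features", "en")); // "features"
console.log(cleanSlug("features", "en"));    // unchanged: no "en/" prefix
console.log(cleanSlug("de/features", "en")); // unchanged: different language
```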
    +

    {frontmatter.sections.hero.title}

    +

    + {frontmatter.sections.hero.subtitle} +

    +
    + +{/* Recording Features */} +
    +

    {frontmatter.sections.categories.recording.title}

    +
    + {props.features + .filter(feature => feature.data.category === 'recording') + .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0)) + .map(feature => ( +
    + +
    + ))} +
    +
    + +{/* Customization Features */} +
    +

    {frontmatter.sections.categories.customization.title}

    +
    + {props.features + .filter(feature => feature.data.category === 'customization') + .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0)) + .map(feature => ( +
    + +
    + ))} +
    +
    + +{/* Language Features */} +
    +

    {frontmatter.sections.categories.language.title}

    +
    + {props.features + .filter(feature => feature.data.category === 'language') + .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0)) + .map(feature => ( +
    + +
    + ))} +
    +
    + +{/* Organization Features */} +
    +

    {frontmatter.sections.categories.organization.title}

    +
    + {props.features + .filter(feature => feature.data.category === 'organization') + .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0)) + .map(feature => ( +
    + +
    + ))} +
    +
    + +{/* Sharing Features */} +
    +

    {frontmatter.sections.categories.sharing.title}

    +
    + {props.features + .filter(feature => feature.data.category === 'sharing') + .sort((a, b) => (parseInt(a.data.order) || 0) - (parseInt(b.data.order) || 0)) + .map(feature => ( +
    + +
    + ))} +
    +
    + + + +
    + +
    diff --git a/apps/memoro/apps/landing/src/content/pages/en/fireflies-ai-alternative.mdx b/apps/memoro/apps/landing/src/content/pages/en/fireflies-ai-alternative.mdx new file mode 100644 index 000000000..20d58f1f6 --- /dev/null +++ b/apps/memoro/apps/landing/src/content/pages/en/fireflies-ai-alternative.mdx @@ -0,0 +1,556 @@ +--- +title: "Fireflies.ai Alternative Europe - 100% GDPR-compliant & EU Servers | Memoro" +description: "The secure Fireflies.ai alternative for European companies ► European servers instead of US cloud ✓ GDPR-compliant ✓ No privacy risks ✓ Switch now!" +keywords: ["fireflies.ai alternative", "fireflies alternative europe", "fireflies.ai gdpr", "fireflies alternative gdpr", "meeting software gdpr compliant", "transcription european servers", "fireflies.ai privacy", "secure meeting software"] +lang: en +type: comparison +lastUpdated: 2025-01-09 +sections: + hero: + title: "Fireflies.ai Alternative - GDPR-compliant with European Servers" + subtitle: "The secure alternative to Fireflies.ai from Europe" + cta: "Start securely" + features: + title: "Why Memoro is more secure" + items: ["European servers", "GDPR-compliant", "End-to-end encryption"] +ogImage: "/images/og/fireflies-alternative.png" +canonical: "https://memoro.ai/en/fireflies-ai-alternative" +robots: "index, follow" +priority: 0.9 +changefreq: "monthly" +--- + +import Button from '../../../components/atoms/Button.astro'; +import { Icon } from 'astro-icon/components'; +import FAQ from '../../../components/FAQ.astro'; +import ComparisonTable from '../../../components/ComparisonTable.astro'; +import TestimonialCard from '../../../components/TestimonialCard.astro'; +import SecurityComparison from '../../../components/SecurityComparison.astro'; +import ROICalculator from '../../../components/ROICalculator.astro'; + +# Fireflies.ai Alternative - GDPR-compliant with European Servers + +
    +
    + +
    +

    ⚠️ Privacy Warning: Fireflies.ai Processes Your Data in the USA

    +

    + Fireflies.ai uses Google Cloud servers in the USA for data processing. + Even though they claim "GDPR-compliance", significant legal risks remain for European companies. +

    +

    + After the Schrems-II ruling by the ECJ, this can lead to fines up to €20 million or 4% of annual revenue. +

    +
    +
    +
    + +
    +
    +

    The Secure Alternative to Fireflies.ai from Europe

    +

    + Memoro offers all benefits of Fireflies.ai - but with 100% GDPR compliance, + European servers, and no privacy risks. Protect your company data and stay compliant. +

    +
    + + +
    +
    +
    + + European Servers +
    +
    + + ISO 27001 +
    +
    + + GDPR Certificate +
    +
    + + E2E Encryption +
    +
    +
    +
    + Memoro Security Dashboard - GDPR-compliant Alternative to Fireflies +
    +
    + +## The Privacy Problem with Fireflies.ai for European Companies + + +
    +

    🚨 Critical Privacy Risks with Fireflies.ai

    + +
    +

    1. US Data Processing Despite "GDPR-Compliance"

    +

    Fireflies optionally stores data in the EU, but processes it in the USA. + This potentially violates GDPR Art. 44-49.

    +
    + +
    +

    2. Google Cloud Infrastructure

    +

    Using Google Cloud means US authorities could theoretically access your data (CLOUD Act).

    +
    + +
    +

    3. Unclear Sub-Processors

    +

    Fireflies uses various US-based third parties for AI processing - often without transparent listing.

    +
    + +
    +

    4. Works Council & Co-Determination

    +

    US software with employee monitoring potential can lead to labor law problems in Europe.

    +
    + +
    +

    5. No Local Jurisdiction

    +

In the event of a data breach, you would have to litigate in the USA - costly and rarely successful.

    +
    +
    + +
    +

    ✅ Memoro's Legally Secure Solution

    + +
    +

    1. 100% European Data Processing

    +

    All data is exclusively stored AND processed in Europe (Hetzner Datacenter).

    +
    + +
    +

    2. No US Cloud Providers

    +

    Completely European infrastructure without dependency on US companies.

    +
    + +
    +

    3. Transparent Data Processing

    +

    Clear Data Processing Agreements (DPA) under European law with all sub-processors.

    +
    + +
    +

    4. Works Council Compliant

    +

    Specifically developed for European co-determination - with works agreement templates.

    +
    + +
    +

    5. European Jurisdiction

    +

    For questions or issues, European law applies with local jurisdiction.

    +
    +
    +
    + +## Fireflies.ai vs. Memoro - The Compliance Comparison + + + +## What Data Protection Officers Say About Switching + +
    + + + + + + + +
    + +## Avoid Legal Risks - The Switch Guide + +
    +

    🔒 4 Steps to Legally Secure Meeting Documentation

    + +
    +
    +
    +
    1
    +
    +

    Terminate Fireflies.ai Legally

    +
      +
    • • Request data export (GDPR Art. 20)
    • +
    • • Demand deletion & get confirmation
    • +
    • • Keep documentation for compliance
    • +
    +
    +
    +
    + +
    +
    +
    2
    +
    +

    Set Up Memoro GDPR-Compliant

    +
      +
    • • Sign DPA (Data Processing Agreement)
    • +
    • • Document technical measures
    • +
    • • Obtain employee consents
    • +
    +
    +
    +
    + +
    +
    +
    3
    +
    +

    Involve Works Council

    +
      +
    • • Use works agreement template
    • +
    • • European servers as argument
    • +
    • • No employee monitoring possible
    • +
    +
    +
    +
    + +
    +
    +
    4
    +
    +

    Document Compliance

    +
      +
    • • Update processing directory
    • +
    • • Data protection impact assessment
    • +
    • • Activate audit trail
    • +
    +
    +
    +
    +
    +
    + +## The True Costs of Privacy Violations + + + +## Memoro's Security Features in Detail + +
    +
    + +

    European Infrastructure

    +
      +
    • ✓ Hetzner Datacenter Germany
    • +
    • ✓ No US cloud services
    • +
    • ✓ Geo-redundant backups in EU
    • +
    • ✓ 99.9% availability SLA
    • +
    +
    + +
    + +

    Encryption

    +
      +
    • ✓ End-to-end encryption
    • +
    • ✓ AES-256 for data at rest
    • +
    • ✓ TLS 1.3 for transport
    • +
    • ✓ Zero-knowledge option
    • +
    +
    + +
    + +

    Certifications

    +
      +
    • ✓ ISO 27001 certified
    • +
    • ✓ GDPR certificate
    • +
    • ✓ German BSI compliant
    • +
    • ✓ TISAX Level 2 (Automotive)
    • +
    +
    + +
    + +

    Access Control

    +
      +
    • ✓ Role-based permissions (RBAC)
    • +
    • ✓ 2-factor authentication
    • +
    • ✓ SSO with SAML 2.0
    • +
    • ✓ Audit logs (immutable)
    • +
    +
    + +
    + +

    Legal Security

    +
      +
    • ✓ European DPA standard
    • +
    • ✓ Works agreement templates
    • +
    • ✓ Deletion concept per GDPR
    • +
    • ✓ European jurisdiction
    • +
    +
    + +
    + +

    Data Sovereignty

    +
      +
    • ✓ Exportable anytime
    • +
    • ✓ Immediate deletion possible
    • +
    • ✓ On-premise option available
    • +
    • ✓ No data for AI training
    • +
    +
    +
    + +## Frequently Asked Questions About Privacy Switch + +