Mirror of https://github.com/Memo-2023/mana-monorepo.git, synced 2026-05-15 20:39:41 +02:00
- Set up 5 AI services on Windows GPU server (RTX 3090):
  - mana-llm (Port 3025): OpenAI-compatible LLM gateway via Ollama
  - mana-stt (Port 3020): WhisperX with word timestamps + speaker diarization
  - mana-tts (Port 3022): Kokoro (EN) + Edge TTS (DE) + Piper (local DE)
  - mana-image-gen (Port 3023): FLUX.2 klein 4B image generation
  - Ollama (Port 11434): gemma3:4b/12b, qwen2.5-coder:14b, nomic-embed-text
- Add @manacore/shared-gpu TypeScript client package with SttClient, TtsClient, ImageClient
- Add CUDA-compatible whisper_service using faster-whisper for Windows
- Configure public access via Cloudflare Tunnel (gpu-llm/stt/tts/img.mana.how)
- Add Loki log aggregator (Docker on Mac Mini) + log shipper on GPU server
- Add GPU scrape targets to Prometheus/VictoriaMetrics config
- Add Grafana Loki datasource for GPU service logs
- Add health check with auto-restart, log rotation, and log shipping
- Document complete setup: Always-On config, troubleshooting, architecture

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
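Since mana-llm exposes an OpenAI-compatible gateway (the standard route Ollama provides is `POST /v1/chat/completions`), a caller can talk to it with a plain JSON request. The sketch below is illustrative only: the `buildChatRequest` helper and the `ask` wrapper are not part of the package, and the `gpu-llm.mana.how` hostname is read off the Cloudflare Tunnel list in the commit message.

```typescript
// Illustrative sketch, not the package's real API: build and send a request
// to an OpenAI-compatible chat endpoint such as the mana-llm gateway.

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  stream: boolean;
}

// Builds the JSON body for POST /v1/chat/completions.
function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [{ role: 'user', content: prompt }],
    stream: false,
  };
}

// Hypothetical usage against the tunnel hostname from the commit message;
// 'gemma3:4b' is one of the models listed as pulled into Ollama.
async function ask(prompt: string): Promise<string> {
  const res = await fetch('https://gpu-llm.mana.how/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildChatRequest('gemma3:4b', prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Keeping the request-building step separate from the network call makes the payload shape easy to unit-test without a live GPU server.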
24 lines
572 B
TypeScript
export { GpuClient } from './gpu-client';
export { SttClient } from './stt-client';
export { TtsClient } from './tts-client';
export { ImageClient } from './image-client';
export { resolveServiceUrl } from './resolve-url';
export { GPU_PUBLIC_URLS, GPU_LAN_URLS } from './types';

export type {
  // Config
  GpuServiceConfig,
  // STT
  TranscriptionResult,
  TranscribeOptions,
  WordTimestamp,
  Segment,
  // TTS
  SynthesizeOptions,
  TTSVoice,
  TTSVoiceType,
  TTSHealthResponse,
  // Image
  GenerateImageOptions,
  GenerateImageResult,
  ImageGenHealthResponse,
} from './types';
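The barrel above re-exports `resolveServiceUrl` along with `GPU_PUBLIC_URLS` and `GPU_LAN_URLS`. The commit message gives the public hostname pattern (gpu-llm/stt/tts/img.mana.how) and the LAN ports; a minimal sketch of what such a resolver could look like follows. The `gpu-server.lan` host, the `GpuService` key names, and the `preferLan` flag are assumptions for illustration, not the package's actual definitions.

```typescript
// Sketch under stated assumptions: map a service name to its public
// Cloudflare Tunnel URL or a LAN address on the GPU server.

type GpuService = 'llm' | 'stt' | 'tts' | 'img';

// Hostnames follow the gpu-*.mana.how pattern from the commit message.
const GPU_PUBLIC_URLS: Record<GpuService, string> = {
  llm: 'https://gpu-llm.mana.how',
  stt: 'https://gpu-stt.mana.how',
  tts: 'https://gpu-tts.mana.how',
  img: 'https://gpu-img.mana.how',
};

// Ports taken from the commit message; 'gpu-server.lan' is a placeholder host.
const GPU_LAN_URLS: Record<GpuService, string> = {
  llm: 'http://gpu-server.lan:3025',
  stt: 'http://gpu-server.lan:3020',
  tts: 'http://gpu-server.lan:3022',
  img: 'http://gpu-server.lan:3023',
};

function resolveServiceUrl(service: GpuService, preferLan = false): string {
  return preferLan ? GPU_LAN_URLS[service] : GPU_PUBLIC_URLS[service];
}
```

Centralizing URL resolution like this lets client classes such as SttClient take only a service name, so switching between tunnel and LAN access is a single flag rather than a scattered config change.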