feat(web): PillNav bar mode, fullscreen, local STT + mic button

PillNav overhaul:
- Dropdown-as-bar: theme/AI/sync/user menus render as horizontal
  bars in the bottom stack (PillDropdownBar) instead of floating
  popovers. New onOpenBar/activeBarId props on PillNavigation.
- iconOnly pills: tags/search/workbench-tabs pills show only icons.
  Home pill removed. New iconOnly flag on PillNavItem.
- Segmented toggle groups: items sharing a `group` id render as a
  single segmented pill (e.g. Light/Dark/System triple).
- Fullscreen mode: press "f" to hide all bottom chrome, Esc to exit.
- QuickInputBar + bottom bar visibility toggles via new pills.
- Progress ring on AI trigger pill during model download
  (conic-gradient ::after, follows pill border-radius).

@mana/local-stt — new package for browser-local speech-to-text:
- Whisper models via transformers.js v4 (WebGPU + WASM fallback)
- Same Web Worker architecture as @mana/local-llm
- Two models: Whisper Tiny (150 MB) and Whisper Small (950 MB)
- Reactive Svelte 5 bindings (getLocalSttStatus, loadLocalStt, transcribe)

Voice-to-text integration:
- useLocalStt() composable: mic capture via AudioContext +
  ScriptProcessor, resample to 16kHz mono, feed into Whisper worker
- Mic button in QuickInputBar (leftAction slot) with
  recording/loading/transcribing states + pulse animation
- Transcribed text injected into InputBar via new injectedText prop
- STT model selector in AI bar alongside LLM tier controls

Also: vite.config.ts server.fs.allow expanded to monorepo root
so workspace package workers resolve in dev.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Till JS 2026-04-12 16:05:43 +02:00
parent 8c2f9306e9
commit 3deee755b3
24 changed files with 2145 additions and 28 deletions


@ -0,0 +1,195 @@
# `@mana/local-stt` — Browser-Local Speech-to-Text
Client-side speech-to-text that runs **entirely in the user's browser** via WebGPU (WASM fallback). No server roundtrips, no API keys, no audio leaving the device. Uses OpenAI's Whisper models through `@huggingface/transformers` v4 — the same library that powers `@mana/local-llm`.
**Don't confuse this with the server-side STT** (`services/mana-stt`). The server-side service runs Whisper on the GPU server (RTX 3090). This package is the **only** STT path that keeps audio on the user's device.
## What's in the box
| Field | Value |
|---|---|
| Engine library | [`@huggingface/transformers`](https://huggingface.co/docs/transformers.js/index) v4 (transformers.js) |
| Backend | WebGPU (primary), WASM (fallback) |
| Default model | `onnx-community/whisper-tiny` (~150 MB, multilingual) |
| Pipeline | `automatic-speech-recognition` (Whisper encoder-decoder) |
| Audio input | Float32Array, 16 kHz mono PCM |
| Chunking | 30s chunks with 5s stride overlap (handled by the pipeline) |
## Available models
| Key | Model | Size | English WER | Multilingual |
|-----|-------|------|------------|-------------|
| `whisper-tiny` | Whisper Tiny | ~150 MB | ~7.6% | Yes (auto-detect) |
| `whisper-tiny.en` | Whisper Tiny EN | ~150 MB | ~5.6% | No (English only) |
| `whisper-base` | Whisper Base | ~290 MB | ~5.0% | Yes |
| `whisper-base.en` | Whisper Base EN | ~290 MB | ~4.3% | No |
| `whisper-small` | Whisper Small | ~950 MB | ~3.9% | Yes |
Default is `whisper-tiny` — smallest, fastest, multilingual. Users can switch in settings.
## Architecture
Mirrors `@mana/local-llm` exactly:
```
Consumer (Svelte component)
  ↓
svelte.svelte.ts — reactive status ($state), loadLocalStt(), transcribe()
  ↓
engine.ts — main-thread proxy (LocalSttEngine singleton)
  ↓ postMessage / onmessage
worker.ts — Web Worker entry point
  ↓
engine-impl.ts — transformers.js pipeline('automatic-speech-recognition')
  ↓
@huggingface/transformers — ONNX runtime (WebGPU or WASM)
```
The Web Worker isolates the heavy Whisper inference (~3-5s for 60s audio on WebGPU) from the main thread. Audio processing never blocks the UI.
## API surface (Svelte 5 usage)
```svelte
<script lang="ts">
import {
getLocalSttStatus,
loadLocalStt,
transcribe,
isLocalSttSupported,
MODELS,
DEFAULT_MODEL,
} from '@mana/local-stt';
const status = getLocalSttStatus();
const supported = isLocalSttSupported();
// Load on-demand (idempotent)
async function start() {
await loadLocalStt(DEFAULT_MODEL);
}
// Transcribe audio
let result = $state('');
async function handleAudio(pcm16k: Float32Array) {
const out = await transcribe({
audio: pcm16k,
language: 'de',
onChunk: (text) => { result += text; },
});
result = out.text;
}
</script>
{#if !supported}
<p>WebGPU not available.</p>
{:else if status.current.state === 'idle'}
<button onclick={start}>Load model</button>
{:else if status.current.state === 'downloading'}
<p>Downloading: {(status.current.progress * 100).toFixed(0)}%</p>
{:else if status.current.state === 'ready'}
<p>Ready</p>
{/if}
```
Status union: `idle | checking | downloading | loading | ready | error` (same as `@mana/local-llm`).
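For consumers outside Svelte, the status union narrows cleanly with a `switch` on the discriminant. A minimal sketch (the `LoadingStatus` type below is copied from `src/types.ts`; `statusLabel` is a hypothetical helper, not part of the package API):

```typescript
// LoadingStatus as defined in src/types.ts.
type LoadingStatus =
  | { state: 'idle' }
  | { state: 'checking' }
  | { state: 'downloading'; progress: number; text: string }
  | { state: 'loading'; text: string }
  | { state: 'ready' }
  | { state: 'error'; error: string };

// Hypothetical helper: TypeScript narrows each branch, so progress/text/error
// are only accessible in the states that actually carry them.
function statusLabel(s: LoadingStatus): string {
  switch (s.state) {
    case 'downloading':
      return `Downloading ${(s.progress * 100).toFixed(0)}%`;
    case 'loading':
      return s.text;
    case 'error':
      return `Error: ${s.error}`;
    case 'ready':
      return 'Ready';
    case 'checking':
      return 'Checking cache…';
    default:
      return 'Not loaded'; // 'idle'
  }
}
```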
## Audio input format
The `transcribe()` function expects **Float32Array of 16 kHz mono PCM** samples (values -1.0 to 1.0). The consumer is responsible for:
1. Capturing audio (e.g. `navigator.mediaDevices.getUserMedia`)
2. Extracting raw PCM from the `AudioContext`
3. Resampling to 16 kHz if the mic runs at a different rate (typically 44.1/48 kHz)
The high-level `useLocalStt()` composable handles all of this automatically.
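Step 3 can be sketched as a small standalone helper using plain linear interpolation. This is illustrative only (`resampleTo16k` is hypothetical, not the package's actual implementation):

```typescript
// Hypothetical helper: downsample mic audio from its native rate
// (e.g. 48 kHz) to the 16 kHz mono PCM that transcribe() expects.
function resampleTo16k(input: Float32Array, inputRate: number): Float32Array {
  const targetRate = 16000;
  if (inputRate === targetRate) return input;
  const ratio = inputRate / targetRate;
  const outLength = Math.floor(input.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    const frac = pos - i0;
    // Weighted average of the two nearest source samples.
    out[i] = input[i0] * (1 - frac) + input[i1] * frac;
  }
  return out;
}
```

Linear interpolation is crude compared to a proper low-pass filter but is generally adequate for speech fed into Whisper.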
## High-level composable: `useLocalStt()`
Located at `apps/mana/apps/web/src/lib/components/voice/use-local-stt.svelte.ts`. Combines mic capture + resampling + transcription in one call:
```svelte
<script lang="ts">
import { useLocalStt } from '$lib/components/voice/use-local-stt.svelte';
const stt = useLocalStt({ language: 'de' });
// stt.state — 'idle' | 'loading' | 'recording' | 'transcribing'
// stt.text — final transcribed text
// stt.partial — streaming partial text (per chunk)
// stt.error — error message or null
// stt.toggle() — start recording or stop + transcribe
// stt.cancel() — abort without transcribing
</script>
<button onclick={() => stt.toggle()}>
{stt.state === 'recording' ? 'Stop' : 'Record'}
</button>
<p>{stt.text}</p>
```
Audio pipeline inside the composable:
```
getUserMedia (native sample rate, e.g. 48 kHz)
→ AudioContext + ScriptProcessorNode → collect Float32 chunks
→ on stop: merge all chunks + linear resample to 16 kHz mono
→ transcribe() via @mana/local-stt worker
→ text result
```
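The "merge all chunks" step above is plain Float32Array concatenation. A minimal illustrative sketch (`mergeChunks` is hypothetical; the composable's internal version may differ):

```typescript
// ScriptProcessorNode delivers audio in fixed-size Float32Array buffers;
// before resampling and transcription they are copied into one
// contiguous array.
function mergeChunks(chunks: Float32Array[]): Float32Array {
  const total = chunks.reduce((sum, c) => sum + c.length, 0);
  const merged = new Float32Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    merged.set(chunk, offset);
    offset += chunk.length;
  }
  return merged;
}
```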
## UI integration
The QuickInputBar in `(app)/+layout.svelte` has a mic button (left slot) that uses `useLocalStt()`:
- **Idle**: Microphone icon
- **Loading**: Disabled, pulsing (model downloading)
- **Recording**: Red stop icon with pulse animation
- **Transcribing**: Disabled, fading
When transcription completes, the text is fed into `inputBarAdapter.onCreate()` — making it context-aware: on `/todo` it creates a task, on `/calendar` an event, on `/` it searches.
## CSP requirements
Same as `@mana/local-llm` — no new CSP rules needed. The existing config in `apps/mana/apps/web/src/hooks.server.ts` already allows:
- `script-src`: `'wasm-unsafe-eval'`, `https://cdn.jsdelivr.net`, `blob:`
- `connect-src`: `https://huggingface.co`, `https://*.huggingface.co`, `https://*.hf.co`, `https://cdn.jsdelivr.net`
## Browser cache
Models are cached in the browser Cache API under HuggingFace URLs (same as local-llm). `hasModelInCache(modelId)` probes for `config.json` to detect cached models. After first download, subsequent loads are instant.
## Browser support
- WebGPU: Chrome/Edge 113+, Safari 18+ (fastest, ~3-5s for 60s audio)
- WASM fallback: all modern browsers (~15-20s for 60s audio)
- Requires `getUserMedia` for mic access (HTTPS or localhost)
## Adding a new model
Add an entry to `src/models.ts`:
```ts
'whisper-medium': {
modelId: 'onnx-community/whisper-medium',
displayName: 'Whisper Medium',
dtype: 'fp32',
downloadSizeMb: 3000,
ramUsageMb: 4000,
},
```
The model must be an ONNX build on HuggingFace with a Whisper architecture.
## Relationship to existing voice features
| Component | Purpose | Uses local-stt? |
|-----------|---------|----------------|
| `voiceRecorder` singleton | Record audio as Blob (webm/opus) for server transcription | No |
| `VoiceCaptureBar` | UI bar for dreams/memoro voice capture → sends to mana-stt server | No |
| `useLocalStt()` | Record + transcribe entirely on-device | **Yes** |
| QuickInputBar mic button | Voice-to-text for any module via useLocalStt | **Yes** |
The existing `voiceRecorder` and `VoiceCaptureBar` are still used for features that need server-side processing (e.g. dreams with server STT). `useLocalStt()` is the privacy-first alternative that never sends audio off-device.


@ -0,0 +1,26 @@
{
"name": "@mana/local-stt",
"version": "0.1.0",
"private": true,
"description": "Client-side speech-to-text via transformers.js (Whisper, WebGPU) with Svelte 5 reactive stores",
"main": "./src/index.ts",
"types": "./src/index.ts",
"exports": {
".": "./src/index.ts"
},
"scripts": {
"type-check": "tsc --noEmit",
"clean": "rm -rf dist"
},
"dependencies": {
"@huggingface/transformers": "^4.0.0"
},
"devDependencies": {
"@types/node": "^24.10.1",
"svelte": "^5.0.0",
"typescript": "^5.9.3"
},
"peerDependencies": {
"svelte": "^5.0.0"
}
}


@ -0,0 +1,22 @@
/**
* Check if a Whisper model is already cached in the browser.
*
* Same approach as @mana/local-llm: probe for the model's config.json
* in the Cache API. Whisper models always have this file and it's
* downloaded first, so its presence reliably indicates "downloaded before".
*/
export async function hasModelInCache(modelId: string): Promise<boolean> {
if (typeof caches === 'undefined') return false;
try {
const cacheNames = await caches.keys();
const url = `https://huggingface.co/${modelId}/resolve/main/config.json`;
for (const name of cacheNames) {
const cache = await caches.open(name);
const match = await cache.match(url);
if (match) return true;
}
return false;
} catch {
return false;
}
}


@ -0,0 +1,231 @@
/**
* LocalSttEngineImpl: the actual transformers.js Whisper engine.
*
* Runs inside a Web Worker (worker.ts). The main thread never
* instantiates this directly; it talks to a thin proxy in engine.ts
* that postMessages over to the worker.
*
* Whisper processes audio in 30-second chunks. For longer recordings
* the pipeline handles chunking internally via `chunk_length_s`.
* We expose pseudo-streaming by forwarding each chunk's text via
* the onChunk callback as it completes.
*/
import type {
TranscribeOptions,
TranscribeResult,
TranscribeSegment,
LoadingStatus,
SttModelConfig,
} from './types';
import { MODELS, DEFAULT_MODEL, type ModelKey } from './models';
type TransformersModule = typeof import('@huggingface/transformers');
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyPipeline = any;
export class LocalSttEngineImpl {
private pipeline: AnyPipeline = null;
private transformers: TransformersModule | null = null;
private loadPromise: Promise<void> | null = null;
private currentModel: ModelKey | null = null;
private _status: LoadingStatus = { state: 'idle' };
private statusListeners: Set<(status: LoadingStatus) => void> = new Set();
get status(): LoadingStatus {
return this._status;
}
get isReady(): boolean {
return this._status.state === 'ready';
}
get modelConfig(): SttModelConfig | null {
return this.currentModel ? MODELS[this.currentModel] : null;
}
onStatusChange(listener: (status: LoadingStatus) => void): () => void {
this.statusListeners.add(listener);
return () => this.statusListeners.delete(listener);
}
private setStatus(status: LoadingStatus) {
this._status = status;
for (const listener of this.statusListeners) {
listener(status);
}
}
static isSupported(): boolean {
return typeof navigator !== 'undefined' && 'gpu' in navigator;
}
async load(model: ModelKey = DEFAULT_MODEL): Promise<void> {
if (this.pipeline && this.currentModel === model) return;
if (this.loadPromise && this.currentModel === model) return this.loadPromise;
if (this.loadPromise && this.currentModel !== model) {
// A different model is still loading; wait for that load to settle so
// two pipeline loads never run concurrently.
await this.loadPromise.catch(() => {});
}
if (this.pipeline && this.currentModel !== model) {
await this.unload();
}
this.currentModel = model;
this.loadPromise = this._load(model);
return this.loadPromise;
}
private async _load(model: ModelKey): Promise<void> {
this.setStatus({ state: 'checking' });
try {
if (!this.transformers) {
this.transformers = await import('@huggingface/transformers');
}
const config = MODELS[model];
// Aggregated download progress tracking (same pattern as local-llm).
const fileProgress = new Map<string, { loaded: number; total: number }>();
const formatBytes = (bytes: number): string => {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(0)} KB`;
if (bytes < 1024 * 1024 * 1024) return `${(bytes / (1024 * 1024)).toFixed(0)} MB`;
return `${(bytes / (1024 * 1024 * 1024)).toFixed(2)} GB`;
};
const emitAggregate = () => {
let totalLoaded = 0;
let totalSize = 0;
for (const { loaded, total } of fileProgress.values()) {
totalLoaded += loaded;
totalSize += total;
}
const pct = totalSize > 0 ? totalLoaded / totalSize : 0;
this.setStatus({
state: 'downloading',
progress: pct,
text:
totalSize > 0
? `Downloading model (${(pct * 100).toFixed(0)}%, ${formatBytes(totalLoaded)} / ${formatBytes(totalSize)})`
: `Downloading model (${fileProgress.size} files queued)`,
});
};
const progressCallback = (report: {
status: string;
file?: string;
name?: string;
progress?: number;
loaded?: number;
total?: number;
}) => {
const file = report.file ?? report.name ?? '_unknown';
if (report.status === 'initiate') {
if (!fileProgress.has(file)) fileProgress.set(file, { loaded: 0, total: 0 });
emitAggregate();
} else if (report.status === 'download' || report.status === 'progress') {
fileProgress.set(file, {
loaded: report.loaded ?? 0,
total: report.total ?? fileProgress.get(file)?.total ?? 0,
});
emitAggregate();
} else if (report.status === 'done') {
const existing = fileProgress.get(file);
if (existing && existing.total > 0) {
fileProgress.set(file, { loaded: existing.total, total: existing.total });
}
emitAggregate();
}
};
this.setStatus({ state: 'loading', text: 'Loading Whisper pipeline…' });
// Use transformers.js pipeline() API for automatic-speech-recognition.
// This handles model + processor + tokenizer loading in one call.
// Device selection: try WebGPU first, fall back to WASM.
const device = LocalSttEngineImpl.isSupported() ? 'webgpu' : 'wasm';
this.pipeline = await this.transformers.pipeline(
'automatic-speech-recognition',
config.modelId,
{
dtype: config.dtype,
device,
progress_callback: progressCallback,
}
);
this.setStatus({ state: 'ready' });
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
this.setStatus({ state: 'error', error: message });
this.loadPromise = null;
throw err;
}
}
async unload(): Promise<void> {
this.pipeline = null;
this.currentModel = null;
this.loadPromise = null;
this.setStatus({ state: 'idle' });
}
async transcribe(options: TranscribeOptions): Promise<TranscribeResult> {
if (!this.pipeline) {
await this.load();
}
const start = performance.now();
// Build pipeline options.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const pipelineOpts: Record<string, any> = {
// Chunk long audio into 30s segments with 5s stride overlap.
chunk_length_s: 30,
stride_length_s: 5,
// Return timestamps if requested.
return_timestamps: options.timestamps ? true : false,
};
if (options.language) {
pipelineOpts.language = options.language;
}
// Callback for pseudo-streaming: transformers.js emits partial
// results per chunk via the `chunk_callback` option.
if (options.onChunk) {
const onChunk = options.onChunk;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
pipelineOpts.chunk_callback = (chunk: any) => {
if (chunk?.text) {
onChunk(chunk.text);
}
};
}
// Run the pipeline. Input is Float32Array of 16kHz mono PCM.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const output: any = await this.pipeline(options.audio, pipelineOpts);
const latencyMs = Math.round(performance.now() - start);
// Parse output — the pipeline returns { text, chunks? } for
// automatic-speech-recognition with return_timestamps.
const text: string = output.text ?? '';
const language: string = options.language ?? 'auto';
let segments: TranscribeSegment[] | undefined;
if (options.timestamps && output.chunks) {
segments = output.chunks.map(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(c: any) => ({
start: c.timestamp?.[0] ?? 0,
end: c.timestamp?.[1] ?? 0,
text: c.text ?? '',
})
);
}
return { text, language, segments, latencyMs };
}
}


@ -0,0 +1,151 @@
/**
* LocalSttEngine: main-thread proxy for the worker-hosted Whisper engine.
*
* Public API mirrors the engine-impl but all work happens in a Web Worker
* so audio processing doesn't block the UI thread.
*
* Lazy construction: the Worker is only instantiated on first method call.
* This keeps import-time side effects to zero (SSR-safe).
*/
import { MODELS, DEFAULT_MODEL, type ModelKey } from './models';
import type { TranscribeOptions, TranscribeResult, LoadingStatus, SttModelConfig } from './types';
import type { SerializableTranscribeOptions, WorkerRequest, WorkerResponse } from './worker';
interface PendingRequest {
resolve: (data: unknown) => void;
reject: (err: Error) => void;
onChunk?: (text: string) => void;
}
type DistributiveOmit<T, K extends keyof T> = T extends unknown ? Omit<T, K> : never;
type WorkerRequestPayload = DistributiveOmit<WorkerRequest, 'id'>;
export class LocalSttEngine {
private worker: Worker | null = null;
private pending = new Map<string, PendingRequest>();
private nextId = 0;
private currentModel: ModelKey | null = null;
private _status: LoadingStatus = { state: 'idle' };
private statusListeners: Set<(status: LoadingStatus) => void> = new Set();
get status(): LoadingStatus {
return this._status;
}
get isReady(): boolean {
return this._status.state === 'ready';
}
get modelConfig(): SttModelConfig | null {
return this.currentModel ? MODELS[this.currentModel] : null;
}
onStatusChange(listener: (status: LoadingStatus) => void): () => void {
this.statusListeners.add(listener);
return () => this.statusListeners.delete(listener);
}
private setStatus(status: LoadingStatus) {
this._status = status;
for (const listener of this.statusListeners) {
listener(status);
}
}
static isSupported(): boolean {
return typeof navigator !== 'undefined' && 'gpu' in navigator;
}
// ─── Worker management ──────────────────────────────────
private getWorker(): Worker {
if (this.worker) return this.worker;
if (typeof Worker === 'undefined') {
throw new Error('@mana/local-stt requires a browser environment (Worker is not defined)');
}
this.worker = new Worker(new URL('./worker.ts', import.meta.url), {
type: 'module',
name: 'mana-local-stt',
});
this.worker.addEventListener('message', this.handleWorkerMessage);
this.worker.addEventListener('error', (e) => {
const message = e.message || 'Worker crashed';
for (const [id, p] of this.pending) {
p.reject(new Error(`Worker error: ${message}`));
this.pending.delete(id);
}
this.setStatus({ state: 'error', error: message });
});
return this.worker;
}
private handleWorkerMessage = (event: MessageEvent<WorkerResponse>) => {
const msg = event.data;
if (msg.type === 'status') {
this.setStatus(msg.status);
return;
}
if (msg.type === 'chunk') {
const pending = this.pending.get(msg.id);
pending?.onChunk?.(msg.text);
return;
}
const pending = this.pending.get(msg.id);
if (!pending) return;
this.pending.delete(msg.id);
if (msg.type === 'result') {
pending.resolve(msg.data);
} else {
pending.reject(new Error(msg.message));
}
};
private postRequest<T>(req: WorkerRequestPayload, onChunk?: (text: string) => void): Promise<T> {
const id = `${++this.nextId}`;
const worker = this.getWorker();
return new Promise<T>((resolve, reject) => {
this.pending.set(id, {
resolve: (data) => resolve(data as T),
reject,
onChunk,
});
worker.postMessage({ ...req, id } as WorkerRequest);
});
}
// ─── Public API ──────────────────────────────────────────
async load(model: ModelKey = DEFAULT_MODEL): Promise<void> {
if (this.currentModel === model && this.isReady) return;
this.currentModel = model;
await this.postRequest<void>({ type: 'load', modelKey: model });
}
async unload(): Promise<void> {
if (!this.worker) return;
await this.postRequest<void>({ type: 'unload' });
this.currentModel = null;
this.worker.terminate();
this.worker = null;
this.pending.clear();
}
async transcribe(options: TranscribeOptions): Promise<TranscribeResult> {
const { onChunk, ...rest } = options;
const opts: SerializableTranscribeOptions = rest;
return this.postRequest<TranscribeResult>({ type: 'transcribe', opts }, onChunk);
}
}
/** Singleton instance for app-wide use */
export const localSTT = new LocalSttEngine();


@ -0,0 +1,28 @@
// Engine
export { LocalSttEngine, localSTT } from './engine';
// Models
export { MODELS, DEFAULT_MODEL } from './models';
export type { ModelKey } from './models';
// Types
export type {
TranscribeOptions,
TranscribeResult,
TranscribeSegment,
SttModelConfig,
LoadingStatus,
TranscriptionStatus,
} from './types';
// Cache utilities
export { hasModelInCache } from './cache';
// Svelte 5 reactive helpers
export {
getLocalSttStatus,
loadLocalStt,
unloadLocalStt,
isLocalSttSupported,
transcribe,
} from './svelte.svelte';


@ -0,0 +1,38 @@
import type { SttModelConfig } from './types';
/**
* Pre-configured Whisper models for client-side speech-to-text.
*
* All models are ONNX builds loaded via @huggingface/transformers (transformers.js)
* with the WebGPU backend. English-only variants are smaller and faster for
* single-language use; multilingual models auto-detect the spoken language.
*
* Model quality/size trade-off (English WER on LibriSpeech test-clean):
* tiny.en: ~5.6%, 39M params, very fast, good enough for dictation
* base.en: ~4.3%, 74M params, noticeably better on accents/noise
* small.en: ~3.4%, 244M params, near-human accuracy, slower
* tiny: ~7.6%, multilingual, auto-detects language
* base: ~5.0%, multilingual
* small: ~3.9%, multilingual
*/
export const MODELS = {
'whisper-tiny': {
modelId: 'onnx-community/whisper-tiny',
displayName: 'Whisper Tiny',
dtype: 'fp32',
downloadSizeMb: 150,
ramUsageMb: 300,
},
'whisper-small': {
modelId: 'onnx-community/whisper-small',
displayName: 'Whisper Small',
dtype: 'fp32',
downloadSizeMb: 950,
ramUsageMb: 1500,
},
} as const satisfies Record<string, SttModelConfig>;
export type ModelKey = keyof typeof MODELS;
export const DEFAULT_MODEL: ModelKey = 'whisper-tiny';


@ -0,0 +1,56 @@
/**
* Svelte 5 reactive integration for LocalSttEngine.
*
* Usage in a Svelte component:
* import { getLocalSttStatus, loadLocalStt, transcribe } from '@mana/local-stt';
*
* const status = getLocalSttStatus();
* loadLocalStt();
* // use status.current reactively
*/
import { LocalSttEngine, localSTT } from './engine';
import type { LoadingStatus, TranscribeOptions, TranscribeResult } from './types';
import type { ModelKey } from './models';
let _status = $state<LoadingStatus>({ state: 'idle' });
localSTT.onStatusChange((s) => {
_status = s;
});
export function getLocalSttStatus(): { readonly current: LoadingStatus } {
return {
get current() {
return _status;
},
};
}
/**
* Load a Whisper model. Safe to call multiple times (idempotent).
*/
export async function loadLocalStt(model?: ModelKey): Promise<void> {
return localSTT.load(model);
}
/**
* Unload the model and free memory.
*/
export async function unloadLocalStt(): Promise<void> {
return localSTT.unload();
}
/**
* Check if WebGPU is available for accelerated STT.
*/
export function isLocalSttSupported(): boolean {
return LocalSttEngine.isSupported();
}
/**
* Transcribe audio to text.
*/
export async function transcribe(options: TranscribeOptions): Promise<TranscribeResult> {
return localSTT.transcribe(options);
}


@ -0,0 +1,63 @@
/**
* Types for client-side speech-to-text inference.
*/
export interface TranscribeOptions {
/** Raw audio data (Float32Array of PCM samples at 16 kHz mono) */
audio: Float32Array;
/** Language code (e.g. 'de', 'en'). If omitted, auto-detected. */
language?: string;
/** Whether to return timestamps per segment */
timestamps?: boolean;
/** Callback for each transcribed chunk (pseudo-streaming) */
onChunk?: (text: string) => void;
}
export interface TranscribeResult {
/** Full transcribed text */
text: string;
/** Detected or forced language */
language: string;
/** Per-segment timestamps (if requested) */
segments?: TranscribeSegment[];
/** Transcription time in ms */
latencyMs: number;
}
export interface TranscribeSegment {
/** Start time in seconds */
start: number;
/** End time in seconds */
end: number;
/** Segment text */
text: string;
}
export interface SttModelConfig {
/** HuggingFace ONNX repo id */
modelId: string;
/** Human-readable name */
displayName: string;
/** Quantization level */
dtype: 'fp32' | 'fp16' | 'q8' | 'q4' | 'q4f16';
/** Approximate download size in MB */
downloadSizeMb: number;
/** Approximate RAM/VRAM usage in MB */
ramUsageMb: number;
/** Whether this is an English-only model */
englishOnly?: boolean;
}
export type LoadingStatus =
| { state: 'idle' }
| { state: 'checking' }
| { state: 'downloading'; progress: number; text: string }
| { state: 'loading'; text: string }
| { state: 'ready' }
| { state: 'error'; error: string };
export type TranscriptionStatus =
| { state: 'idle' }
| { state: 'recording' }
| { state: 'transcribing'; progress?: number }
| { state: 'done'; text: string };


@ -0,0 +1,96 @@
/**
* Web Worker entry point for @mana/local-stt.
*
* Runs in a Dedicated Worker context, owns a single LocalSttEngineImpl
* instance, and exchanges messages with the main thread proxy (engine.ts).
*
* Protocol:
*
* Main → Worker (WorkerRequest):
* { id, type: 'load', modelKey: ModelKey }
* { id, type: 'unload' }
* { id, type: 'transcribe', opts: SerializableTranscribeOptions }
* { id, type: 'isReady' }
*
* Worker → Main (WorkerResponse):
* { id, type: 'result', data?: unknown }
* { id, type: 'error', message: string }
* { id, type: 'chunk', text: string } — streaming chunk
* { type: 'status', status: LoadingStatus } — broadcast, no id
*/
import { LocalSttEngineImpl } from './engine-impl';
import type { LoadingStatus, TranscribeOptions } from './types';
import type { ModelKey } from './models';
// ─── Protocol types (mirrored in engine.ts) ────────────────────
export type SerializableTranscribeOptions = Omit<TranscribeOptions, 'onChunk'>;
export type WorkerRequest =
| { id: string; type: 'load'; modelKey: ModelKey }
| { id: string; type: 'unload' }
| { id: string; type: 'transcribe'; opts: SerializableTranscribeOptions }
| { id: string; type: 'isReady' };
export type WorkerResponse =
| { id: string; type: 'result'; data?: unknown }
| { id: string; type: 'error'; message: string }
| { id: string; type: 'chunk'; text: string }
| { type: 'status'; status: LoadingStatus };
// ─── Worker setup ──────────────────────────────────────────────
const engine = new LocalSttEngineImpl();
// Forward all status changes to the main thread as broadcast messages.
engine.onStatusChange((status) => {
postMessage({ type: 'status', status } satisfies WorkerResponse);
});
self.addEventListener('message', async (event: MessageEvent<WorkerRequest>) => {
const req = event.data;
try {
switch (req.type) {
case 'load': {
await engine.load(req.modelKey);
postMessage({ id: req.id, type: 'result' } satisfies WorkerResponse);
break;
}
case 'unload': {
await engine.unload();
postMessage({ id: req.id, type: 'result' } satisfies WorkerResponse);
break;
}
case 'isReady': {
postMessage({
id: req.id,
type: 'result',
data: engine.isReady,
} satisfies WorkerResponse);
break;
}
case 'transcribe': {
const result = await engine.transcribe({
...req.opts,
onChunk: (text) => {
postMessage({ id: req.id, type: 'chunk', text } satisfies WorkerResponse);
},
});
postMessage({
id: req.id,
type: 'result',
data: result,
} satisfies WorkerResponse);
break;
}
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
postMessage({ id: req.id, type: 'error', message } satisfies WorkerResponse);
}
});


@ -0,0 +1,14 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "bundler",
"lib": ["ES2022", "DOM"],
"strict": true,
"noEmit": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*"],
"exclude": ["node_modules"]
}


@ -100,6 +100,7 @@ export {
SidebarSection,
PillNavigation,
PillDropdown,
PillDropdownBar,
AppDrawer,
GlobalSpotlight,
createGlobalSpotlightState,
@ -129,6 +130,7 @@ export type {
PillNavItem,
PillDropdownItem,
PillNavElement,
PillBarConfig,
PillNavigationProps,
PillTabOption,
PillTabGroupConfig,


@ -0,0 +1,510 @@
<script lang="ts">
import type { PillDropdownItem } from './types';
import {
Archive,
Bell,
Buildings,
CalendarBlank,
CaretDown,
ChartBar,
ChatCircle,
Check,
CheckCircle,
CheckSquare,
Clock,
Cloud,
Columns,
Compass,
CreditCard,
File,
FileText,
Fire,
Folder,
Gear,
Gift,
Globe,
GridFour,
Heart,
House,
Key,
List,
MagnifyingGlass,
Microphone,
Moon,
MusicNote,
MusicNotes,
Palette,
Playlist,
Plus,
Question,
Robot,
Scales,
ShareFat,
ShareNetwork,
Shield,
SignOut,
Sparkle,
Spiral,
Sun,
Tag,
Target,
Timer,
Trash,
Tray,
Upload,
User,
Users,
Waveform,
} from '@mana/shared-icons';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const phosphorIcons: Record<string, any> = {
home: House,
users: Users,
user: User,
tag: Tag,
heart: Heart,
settings: Gear,
chat: ChatCircle,
'help-circle': Question,
help: Question,
'share-2': ShareNetwork,
bell: Bell,
clock: Clock,
timer: Timer,
target: Target,
globe: Globe,
inbox: Tray,
check: Check,
checkCircle: CheckCircle,
'check-square': CheckSquare,
plus: Plus,
columns: Columns,
kanban: Columns,
mic: Microphone,
calendar: CalendarBlank,
folder: Folder,
archive: Archive,
upload: Upload,
music: MusicNote,
document: File,
chart: ChartBar,
'bar-chart-3': ChartBar,
search: MagnifyingGlass,
list: List,
compass: Compass,
moon: Moon,
sun: Sun,
logout: SignOut,
chevronDown: CaretDown,
menu: List,
fire: Fire,
grid: GridFour,
gridSmall: GridFour,
palette: Palette,
creditCard: CreditCard,
building: Buildings,
scale: Scales,
robot: Robot,
key: Key,
shield: Shield,
gift: Gift,
'music-notes': MusicNotes,
playlist: Playlist,
waveform: Waveform,
'file-text': FileText,
sparkle: Sparkle,
sparkles: Sparkle,
spiral: Spiral,
share: ShareFat,
trash: Trash,
cloud: Cloud,
};
interface Props {
/** Items to render as pills in the bar */
items: PillDropdownItem[];
/** Label shown at the start of the bar (title of the opened dropdown) */
label?: string;
/** Icon rendered next to the label */
icon?: string;
/** Use 'static' when inside a flex container (bottom-stack pattern). */
positioning?: 'fixed' | 'static';
}
let { items, label, icon, positioning = 'static' }: Props = $props();
// A render element is either a single item, a divider/section-label, or a
// group of items that share the same `group` id (rendered as a segmented
// toggle pill).
type RenderElement =
| { kind: 'item'; id: string; item: PillDropdownItem }
| { kind: 'divider'; id: string }
| { kind: 'section-label'; id: string; label: string }
| { kind: 'group'; id: string; items: PillDropdownItem[] };
const renderElements = $derived.by<RenderElement[]>(() => {
const out: RenderElement[] = [];
// Track groups already emitted so we only render each once.
const emittedGroups = new Set<string>();
// First flatten submenus, then collect groups.
const flat: PillDropdownItem[] = [];
for (const item of items) {
if (item.submenu && item.submenu.length > 0) {
flat.push({ id: `${item.id}-section`, label: item.label, divider: true });
for (const child of item.submenu) flat.push(child);
} else {
flat.push(item);
}
}
for (const item of flat) {
if (item.divider) {
const hasLabel = !!item.label;
out.push(
hasLabel
? { kind: 'section-label', id: item.id, label: item.label }
: { kind: 'divider', id: item.id }
);
} else if (item.group) {
if (!emittedGroups.has(item.group)) {
emittedGroups.add(item.group);
const grouped = flat.filter((i) => i.group === item.group);
out.push({ kind: 'group', id: `group-${item.group}`, items: grouped });
}
} else {
out.push({ kind: 'item', id: item.id, item });
}
}
return out;
});
function handleClick(item: PillDropdownItem, event: MouseEvent) {
if (item.disabled || item.divider) return;
item.onClick?.(event);
}
</script>
<div class="dropdown-bar-wrapper" class:static={positioning === 'static'}>
<div class="dropdown-bar-container">
{#if label}
<div class="bar-label glass-pill">
{#if icon && phosphorIcons[icon]}
{@const IconComponent = phosphorIcons[icon]}
<IconComponent size={16} />
{/if}
<span>{label}</span>
</div>
{/if}
{#each renderElements as el (el.id)}
{#if el.kind === 'divider'}
<div class="bar-divider"></div>
{:else if el.kind === 'section-label'}
<div class="bar-section-label">{el.label}</div>
{:else if el.kind === 'group'}
<!-- Segmented toggle pill. If any label in the group is longer
than 10 chars the group shows icon+label; otherwise icon-only
(e.g. the Light/Dark/System triple). -->
{@const showLabels = el.items.some((i) => (i.label?.length ?? 0) > 10)}
<div class="segmented-toggle glass-pill" class:with-labels={showLabels}>
{#each el.items as gi (gi.id)}
<button
type="button"
class="segmented-btn"
class:active={gi.active}
class:has-progress={gi.progress != null}
disabled={gi.disabled}
onclick={(e) => handleClick(gi, e)}
title={gi.label}
>
{#if gi.progress != null}
<svg class="progress-ring-inline" viewBox="0 0 20 20">
<circle class="progress-bg" cx="10" cy="10" r="8" />
<circle
class="progress-fill"
cx="10"
cy="10"
r="8"
stroke-dasharray={8 * 2 * Math.PI}
stroke-dashoffset={8 * 2 * Math.PI * (1 - gi.progress)}
/>
</svg>
{:else if gi.icon && phosphorIcons[gi.icon]}
{@const GIcon = phosphorIcons[gi.icon]}
<GIcon size={16} class="segmented-icon" />
{/if}
{#if showLabels}
<span class="segmented-label">{gi.label}</span>
{/if}
</button>
{/each}
</div>
{:else}
{@const item = el.item}
<button
type="button"
class="bar-pill glass-pill"
class:active={item.active}
class:primary={item.primary}
class:danger={item.danger}
disabled={item.disabled}
onclick={(e) => handleClick(item, e)}
title={item.label}
>
{#if item.imageUrl}
<img src={item.imageUrl} alt="" class="bar-img" />
{:else if item.icon && phosphorIcons[item.icon]}
{@const IconComponent = phosphorIcons[item.icon]}
<IconComponent size={16} />
{/if}
<span>{item.label}</span>
</button>
{/if}
{/each}
</div>
</div>
<style>
.dropdown-bar-wrapper {
display: flex;
flex-direction: column;
align-items: stretch;
pointer-events: none;
}
.dropdown-bar-wrapper.static {
position: relative;
}
.dropdown-bar-container {
display: flex;
align-items: center;
gap: 0.5rem;
pointer-events: auto;
width: fit-content;
max-width: 100%;
margin-left: auto;
margin-right: auto;
padding: 0.5rem 2rem;
overflow-x: auto;
scrollbar-width: none;
-ms-overflow-style: none;
mask-image: linear-gradient(
to right,
transparent 0%,
black 2rem,
black calc(100% - 2rem),
transparent 100%
);
-webkit-mask-image: linear-gradient(
to right,
transparent 0%,
black 2rem,
black calc(100% - 2rem),
transparent 100%
);
}
.dropdown-bar-container::-webkit-scrollbar {
display: none;
}
.bar-pill {
display: inline-flex;
align-items: center;
gap: 0.375rem;
padding: 0.5rem 0.875rem;
border-radius: 9999px;
font-size: 0.8125rem;
font-weight: 500;
white-space: nowrap;
border: 1px solid hsl(var(--color-border));
background: hsl(var(--color-card));
color: hsl(var(--color-foreground));
cursor: pointer;
flex-shrink: 0;
transition: all 0.15s ease;
box-shadow:
0 1px 2px hsl(0 0% 0% / 0.05),
0 2px 6px hsl(0 0% 0% / 0.04);
}
.bar-pill:hover:not(:disabled) {
background: hsl(var(--color-surface-hover, var(--color-card)));
transform: translateY(-1px);
}
.bar-pill:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.bar-pill.active {
background: color-mix(
in srgb,
var(--pill-primary-color, var(--color-primary-500, #f8d62b)) 20%,
white 80%
);
border-color: var(--pill-primary-color, var(--color-primary-500, rgba(248, 214, 43, 0.5)));
color: #1a1a1a;
}
:global(.dark) .bar-pill.active {
background: color-mix(
in srgb,
var(--pill-primary-color, var(--color-primary-500, #f8d62b)) 30%,
transparent 70%
);
color: var(--pill-primary-color, var(--color-primary-500, #f8d62b));
}
.bar-pill.primary {
background: var(--pill-primary-color, var(--color-primary-500, #6366f1));
border-color: transparent;
color: white;
}
.bar-pill.danger {
color: #dc2626;
}
:global(.dark) .bar-pill.danger {
color: #ef4444;
}
.bar-label {
display: inline-flex;
align-items: center;
gap: 0.375rem;
padding: 0.5rem 0.875rem;
border-radius: 9999px;
font-size: 0.8125rem;
font-weight: 600;
white-space: nowrap;
border: 1px solid hsl(var(--color-border));
background: hsl(var(--color-muted, var(--color-card)));
color: hsl(var(--color-foreground));
flex-shrink: 0;
}
.bar-divider {
width: 1px;
height: 1.5rem;
background: hsl(var(--color-border));
flex-shrink: 0;
margin: 0 0.125rem;
}
.bar-section-label {
display: inline-flex;
align-items: center;
padding: 0 0.5rem;
font-size: 0.6875rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: hsl(var(--color-muted-foreground));
flex-shrink: 0;
white-space: nowrap;
}
.bar-img {
width: 16px;
height: 16px;
border-radius: 4px;
object-fit: cover;
}
/* Segmented toggle pill (e.g. Light / Dark / System three-way) */
.segmented-toggle {
display: inline-flex;
align-items: center;
gap: 0.25rem;
padding: 0.25rem;
border-radius: 9999px;
border: 1px solid hsl(var(--color-border));
background: hsl(var(--color-card));
box-shadow:
0 1px 2px hsl(0 0% 0% / 0.05),
0 2px 6px hsl(0 0% 0% / 0.04);
flex-shrink: 0;
}
.segmented-btn {
display: flex;
align-items: center;
justify-content: center;
padding: 0.375rem;
border: none;
background: transparent;
border-radius: 9999px;
cursor: pointer;
color: hsl(var(--color-foreground));
transition: all 0.15s;
}
.segmented-btn:hover:not(.active):not(:disabled) {
background: hsl(var(--color-surface-hover, var(--color-muted)));
}
.segmented-btn.active {
background: color-mix(
in srgb,
var(--pill-primary-color, var(--color-primary-500, #6366f1)) 20%,
white 80%
);
}
:global(.dark) .segmented-btn.active {
background: color-mix(
in srgb,
var(--pill-primary-color, var(--color-primary-500, #6366f1)) 30%,
transparent 70%
);
}
.segmented-btn :global(.segmented-icon) {
width: 1rem;
height: 1rem;
}
/* When the group shows labels, give the buttons more padding */
.segmented-toggle.with-labels .segmented-btn {
gap: 0.375rem;
padding: 0.375rem 0.625rem;
}
.segmented-label {
font-size: 0.75rem;
font-weight: 500;
white-space: nowrap;
}
/* Inline progress ring (replaces icon when downloading) */
.progress-ring-inline {
width: 20px;
height: 20px;
transform: rotate(-90deg);
flex-shrink: 0;
}
.progress-bg {
fill: none;
stroke: hsl(var(--color-border));
stroke-width: 2;
}
.progress-fill {
fill: none;
stroke: var(--pill-primary-color, var(--color-primary-500, #6366f1));
stroke-width: 2.5;
stroke-linecap: round;
transition: stroke-dashoffset 0.3s ease;
}
.segmented-btn.has-progress {
position: relative;
}
</style>
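The grouping pass in `renderElements` above can be sketched framework-free. This is a simplified standalone model (no submenu flattening, hypothetical trimmed-down types) showing the key behavior: items sharing a `group` id collapse into a single segmented element, emitted at the position of the group's first member.

```typescript
// Simplified sketch of the renderElements grouping pass (assumption:
// submenu flattening and dividers omitted; types trimmed for clarity).
interface Item {
  id: string;
  label: string;
  group?: string;
}

type Element =
  | { kind: 'item'; id: string }
  | { kind: 'group'; id: string; ids: string[] };

function groupItems(items: Item[]): Element[] {
  const out: Element[] = [];
  const emitted = new Set<string>();
  for (const item of items) {
    if (item.group) {
      // Emit each group once, where its first member appears.
      if (emitted.has(item.group)) continue;
      emitted.add(item.group);
      out.push({
        kind: 'group',
        id: `group-${item.group}`,
        ids: items.filter((i) => i.group === item.group).map((i) => i.id),
      });
    } else {
      out.push({ kind: 'item', id: item.id });
    }
  }
  return out;
}

const els = groupItems([
  { id: 'light', label: 'Light', group: 'theme-mode' },
  { id: 'dark', label: 'Dark', group: 'theme-mode' },
  { id: 'system', label: 'System', group: 'theme-mode' },
  { id: 'contrast', label: 'Hoher Kontrast' },
]);
// els: one 'group' element carrying all three theme-mode ids,
// followed by one plain 'item'.
```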


@@ -7,6 +7,7 @@
PillTabGroupConfig,
PillTagSelectorConfig,
PillAppItem,
PillBarConfig,
SpotlightAction,
ContentSearcher,
} from './types';
@@ -32,6 +33,7 @@
CheckCircle,
CheckSquare,
Clock,
Cloud,
Columns,
Compass,
CreditCard,
@@ -140,6 +142,7 @@
share: ShareFat,
trash: Trash,
filter: Funnel,
cloud: Cloud,
};
// Convert app items to dropdown items (will be computed as derived)
@@ -326,6 +329,14 @@
helpHref?: string;
/** Bottom offset from viewport bottom (default: '0px'). Use to position above other fixed bars. */
bottomOffset?: string;
/** When provided, dropdown triggers (theme, AI tier, sync, user menu) render
* as plain pills that call this callback with a bar config instead of
* opening their in-place PillDropdown popover. The host is expected to
* render the returned items in its own bar (e.g. bottom-stack). Pass null
* to request closing the active bar. */
onOpenBar?: (config: PillBarConfig | null) => void;
/** Id of the bar currently open in the host. Used to highlight the trigger pill. */
activeBarId?: string | null;
}
let {
@@ -386,8 +397,192 @@
guestMenuLabel = 'Menü',
helpHref,
bottomOffset = '0px',
onOpenBar,
activeBarId = null,
}: Props = $props();
// Whether this nav should surface dropdowns as bars instead of popovers.
const barMode = $derived(!!onOpenBar);
// Build the flat PillDropdownItem list for each bar, matching what the
// equivalent PillDropdown would render. Mode toggles + variants + a11y
// toggles for theme; tier/sync items pass through; the user menu is
// assembled with the same rules as the PillDropdown below.
const themeBarItems = $derived.by<PillDropdownItem[]>(() => {
const out: PillDropdownItem[] = [];
if (onThemeModeChange) {
out.push(
{
id: 'theme-mode-light',
label: 'Light',
icon: 'sun',
group: 'theme-mode',
onClick: () => onThemeModeChange('light'),
active: themeMode === 'light',
},
{
id: 'theme-mode-dark',
label: 'Dark',
icon: 'moon',
group: 'theme-mode',
onClick: () => onThemeModeChange('dark'),
active: themeMode === 'dark',
},
{
id: 'theme-mode-system',
label: 'System',
icon: 'settings',
group: 'theme-mode',
onClick: () => onThemeModeChange('system'),
active: themeMode === 'system',
}
);
}
if (themeVariantItems.length > 0) {
if (out.length > 0) out.push({ id: 'theme-variants-div', label: '', divider: true });
for (const v of themeVariantItems) out.push(v);
}
if (showA11yQuickToggles) {
out.push({ id: 'a11y-div', label: '', divider: true });
if (onA11yContrastChange) {
out.push({
id: 'a11y-contrast',
label: 'Hoher Kontrast',
icon: 'sun',
onClick: () => onA11yContrastChange(a11yContrast === 'high' ? 'normal' : 'high'),
active: a11yContrast === 'high',
});
}
if (onA11yReduceMotionChange) {
out.push({
id: 'a11y-reduce-motion',
label: 'Animationen reduzieren',
icon: 'check',
onClick: () => onA11yReduceMotionChange(!a11yReduceMotion),
active: a11yReduceMotion,
});
}
}
return out;
});
const userBarItems = $derived.by<PillDropdownItem[]>(() => {
const out: PillDropdownItem[] = [];
if (userEmail && profileHref) {
out.push({
id: 'profile',
label: 'Profil',
icon: 'user',
onClick: () => {
window.location.href = profileHref!;
},
active: currentPath === profileHref,
});
}
out.push({
id: 'settings',
label: 'Einstellungen',
icon: 'settings',
onClick: () => {
window.location.href = settingsHref;
},
active: currentPath === settingsHref,
});
if (userEmail && manaHref) {
out.push({
id: 'mana',
label: 'Mana',
icon: 'sparkle',
onClick: () => {
window.location.href = manaHref!;
},
active: currentPath === manaHref,
});
}
if (spiralHref) {
out.push({
id: 'spiral',
label: 'Spiral',
icon: 'spiral',
onClick: () => {
window.location.href = spiralHref!;
},
active: currentPath === spiralHref,
});
}
if (creditsHref) {
out.push({
id: 'credits',
label: 'Credits',
icon: 'creditCard',
onClick: () => {
window.location.href = creditsHref!;
},
active: currentPath === creditsHref,
});
}
if (userEmail && feedbackHref) {
out.push({
id: 'feedback',
label: 'Feedback',
icon: 'chat',
onClick: () => {
window.location.href = feedbackHref!;
},
active: currentPath === feedbackHref,
});
}
if (helpHref) {
out.push({
id: 'help',
label: 'Hilfe',
icon: 'help',
onClick: () => {
window.location.href = helpHref!;
},
active: currentPath === helpHref,
});
}
if (showLanguageSwitcher && languageItems.length > 0) {
out.push({ id: 'language-div', label: '', divider: true });
out.push({
id: 'language',
label: currentLanguageLabel,
submenu: languageItems.map((item) => ({ ...item, id: `lang-${item.id}` })),
});
}
out.push({ id: 'auth-div', label: '', divider: true });
if (userEmail && showLogout && onLogout) {
out.push({
id: 'logout',
label: 'Logout',
icon: 'logout',
onClick: () => onLogout!(),
danger: true,
});
} else if (!userEmail && loginHref) {
out.push({
id: 'login',
label: 'Anmelden',
icon: 'user',
primary: true,
onClick: () => {
window.location.href = loginHref!;
},
});
}
return out;
});
function toggleBar(config: PillBarConfig) {
if (!onOpenBar) return;
if (activeBarId === config.id) {
onOpenBar(null);
} else {
onOpenBar(config);
}
}
// Type guards for elements
function isTabGroup(element: PillNavElement): element is PillTabGroupConfig {
return 'type' in element && element.type === 'tabs';
@@ -506,6 +701,9 @@
oncontextmenu={item.onContextMenu}
class="pill glass-pill"
class:active={item.active}
class:icon-only={item.iconOnly}
aria-label={item.iconOnly ? item.label : undefined}
title={item.iconOnly ? item.label : undefined}
>
{#if item.icon}
{#if item.icon === 'mana'}
@@ -521,7 +719,9 @@
<IconComponent size={18} class="pill-icon" />
{/if}
{/if}
<span class="pill-label">{item.label}</span>
{#if !item.iconOnly}
<span class="pill-label">{item.label}</span>
{/if}
</button>
{:else}
<a
@@ -529,6 +729,9 @@
oncontextmenu={item.onContextMenu}
class="pill glass-pill"
class:active={isActive(item.href)}
class:icon-only={item.iconOnly}
aria-label={item.iconOnly ? item.label : undefined}
title={item.iconOnly ? item.label : undefined}
>
{#if item.icon}
{#if item.icon === 'mana'}
@@ -544,7 +747,9 @@
<IconComponent size={18} class="pill-icon" />
{/if}
{/if}
<span class="pill-label">{item.label}</span>
{#if !item.iconOnly}
<span class="pill-label">{item.label}</span>
{/if}
</a>
{/if}
{/each}
@@ -587,7 +792,24 @@
{/each}
<!-- Theme Variant Selector -->
{#if showThemeVariants && themeVariantItems.length > 0}
{#if showThemeVariants && themeVariantItems.length > 0 && barMode}
{@const themeConfig = {
id: 'theme',
label: '',
icon: undefined,
items: themeBarItems,
}}
<button
type="button"
onclick={() => toggleBar(themeConfig)}
class="pill glass-pill icon-only"
class:active={activeBarId === 'theme'}
title={currentThemeVariantLabel}
aria-label={currentThemeVariantLabel}
>
<Palette size={18} class="pill-icon" />
</button>
{:else if showThemeVariants && themeVariantItems.length > 0}
<PillDropdown
items={themeVariantItems}
direction={dropdownDirection}
@@ -679,7 +901,31 @@
{/if}
<!-- AI Tier Selector -->
{#if showAiTierSelector && aiTierItems.length > 0}
{#if showAiTierSelector && aiTierItems.length > 0 && barMode}
{@const aiProgress = aiTierItems.find((i) => i.progress != null)?.progress}
{@const aiConfig = {
id: 'ai',
label: '',
icon: undefined,
items: aiTierItems,
progress: aiProgress,
}}
{@const AiIcon = phosphorIcons[currentAiTierIcon]}
<button
type="button"
onclick={() => toggleBar(aiConfig)}
class="pill glass-pill icon-only"
class:active={activeBarId === 'ai'}
class:downloading={aiProgress != null}
title={currentAiTierLabel}
aria-label={currentAiTierLabel}
style={aiProgress != null ? `--progress: ${aiProgress}` : ''}
>
{#if AiIcon}
<AiIcon size={18} class="pill-icon" />
{/if}
</button>
{:else if showAiTierSelector && aiTierItems.length > 0}
<PillDropdown
items={aiTierItems}
direction={dropdownDirection}
@@ -689,7 +935,24 @@
{/if}
<!-- Sync Status -->
{#if showSyncStatus && syncStatusItems.length > 0}
{#if showSyncStatus && syncStatusItems.length > 0 && barMode}
{@const syncConfig = {
id: 'sync',
label: currentSyncLabel,
icon: 'cloud',
items: syncStatusItems,
}}
<button
type="button"
onclick={() => toggleBar(syncConfig)}
class="pill glass-pill"
class:active={activeBarId === 'sync'}
title={currentSyncLabel}
>
<Cloud size={18} class="pill-icon" />
<span class="pill-label">{currentSyncLabel}</span>
</button>
{:else if showSyncStatus && syncStatusItems.length > 0}
<PillDropdown
items={syncStatusItems}
direction={dropdownDirection}
@@ -718,7 +981,25 @@
guests. Auth-only items (profile/settings/logout) are filtered
out when userEmail is empty; spiral/credits/themes/help stay
available either way so guests can still navigate. -->
{#if userEmail || loginHref}
{#if (userEmail || loginHref) && barMode}
{@const userLabel = userEmail ? truncateEmail(userEmail) : guestMenuLabel}
{@const userConfig = {
id: 'user',
label: userLabel,
icon: 'user',
items: userBarItems,
}}
<button
type="button"
onclick={() => toggleBar(userConfig)}
class="pill glass-pill"
class:active={activeBarId === 'user'}
title={userLabel}
>
<User size={18} class="pill-icon" />
<span class="pill-label">{userLabel}</span>
</button>
{:else if userEmail || loginHref}
<PillDropdown
items={[
...(userEmail && profileHref
@@ -1038,6 +1319,47 @@
display: inline;
}
/* Icon-only pill: square-ish shape, no label gap */
.pill.icon-only {
gap: 0;
padding: 0.5rem 0.625rem;
}
/* Progress ring on pill (used for download indicator).
Uses a conic-gradient border trick so it follows the pill's
own border-radius regardless of shape. */
.pill.downloading {
position: relative;
overflow: visible;
}
.pill.downloading::after {
content: '';
position: absolute;
inset: -3px;
border-radius: inherit;
border: 2.5px solid transparent;
background:
conic-gradient(
from 0deg,
var(--pill-primary-color, var(--color-primary-500, #6366f1))
calc(var(--progress) * 360deg),
transparent calc(var(--progress) * 360deg)
)
border-box,
linear-gradient(hsl(var(--color-card)), hsl(var(--color-card))) padding-box;
mask:
linear-gradient(#fff 0 0) content-box,
linear-gradient(#fff 0 0);
mask-composite: exclude;
-webkit-mask:
linear-gradient(#fff 0 0) content-box,
linear-gradient(#fff 0 0);
-webkit-mask-composite: xor;
pointer-events: none;
transition: none;
}
/* Transitions */
.pill-nav {
transition: all 0.3s ease;

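The `onOpenBar`/`activeBarId` handshake added above is simple state hoisting: PillNavigation calls `onOpenBar(config)` to open a bar, `onOpenBar(null)` to close, and `toggleBar` closes when the same trigger is pressed twice. A minimal host-side sketch (hypothetical `createBarHost` helper; the real host renders `config.items` through PillDropdownBar in its bottom stack):

```typescript
// Minimal host-side model of the bar-mode handshake (assumption:
// items typed as unknown[] here; the real type is PillDropdownItem[]).
interface PillBarConfig {
  id: string;
  label: string;
  icon?: string;
  items: unknown[];
  progress?: number;
}

function createBarHost() {
  let active: PillBarConfig | null = null;
  return {
    // Passed back into PillNavigation so the trigger pill can highlight.
    get activeBarId(): string | null {
      return active?.id ?? null;
    },
    // Passed as onOpenBar; null requests closing the active bar.
    onOpenBar(config: PillBarConfig | null) {
      active = config;
    },
  };
}

const host = createBarHost();
host.onOpenBar({ id: 'theme', label: '', items: [] }); // trigger pressed
// host.activeBarId is now 'theme'; pressing the same trigger again makes
// PillNavigation call onOpenBar(null), closing the bar.
host.onOpenBar(null);
```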

@@ -4,6 +4,7 @@ export { default as Sidebar } from './Sidebar.svelte';
export { default as SidebarSection } from './SidebarSection.svelte';
export { default as PillNavigation } from './PillNavigation.svelte';
export { default as PillDropdown } from './PillDropdown.svelte';
export { default as PillDropdownBar } from './PillDropdownBar.svelte';
export { default as AppDrawer } from './AppDrawer.svelte';
export { default as GlobalSpotlight } from './GlobalSpotlight.svelte';
export { createGlobalSpotlightState } from './useGlobalSpotlight.svelte';
@@ -43,6 +44,7 @@ export type {
PillTagSelectorConfig,
PillDivider,
PillNavElement,
PillBarConfig,
SpotlightAction,
ContentSearchResult,
ContentSearchGroup,


@@ -26,6 +26,23 @@ export interface PillNavItem {
active?: boolean;
/** Right-click handler for context menu */
onContextMenu?: (e: MouseEvent) => void;
/** Show only the icon (hide the label). Label is still used for aria-label/title. */
iconOnly?: boolean;
}
/** Config passed when a PillNavigation dropdown should surface as a bar
* in the host's bottom stack instead of an in-place popover. */
export interface PillBarConfig {
/** Stable id (e.g. 'theme', 'ai', 'sync', 'user') */
id: string;
/** Title shown at the start of the bar */
label: string;
/** Icon name shown next to the title */
icon?: string;
/** Items to render as pills */
items: PillDropdownItem[];
/** Progress value 0–1. When set, a progress ring is shown on the trigger pill. */
progress?: number;
}
export interface PillDropdownItem {
@@ -51,6 +68,10 @@ export interface PillDropdownItem {
divider?: boolean;
/** Nested submenu items */
submenu?: PillDropdownItem[];
/** Group id — items sharing the same group are rendered as a segmented toggle pill */
group?: string;
/** Progress value 0–1. When set, a circular progress ring is rendered around the icon. */
progress?: number;
/** Whether to show a split button for opening in panel */
showSplitButton?: boolean;
/** Click handler for split button */


@@ -76,6 +76,9 @@
locale?: string;
/** Use 'static' when inside a flex container (bottom-stack pattern). Default: 'fixed'. */
positioning?: 'fixed' | 'static';
/** Externally injected text (e.g. from voice input). When this changes
* to a non-empty string, the input bar's query is set and focused. */
injectedText?: string;
}
let {
@@ -106,6 +109,7 @@
highlightPatterns,
locale = 'de',
positioning = 'fixed',
injectedText,
}: Props = $props();
// Use settings for autoFocus
@@ -125,6 +129,18 @@
// Whether search has been explicitly triggered in deferred mode
let searchTriggered = $state(false);
// External text injection (e.g. from voice-to-text). When the prop
// changes to a new non-empty value, set the search query and focus.
let lastInjected = '';
$effect(() => {
if (injectedText && injectedText !== lastInjected) {
lastInjected = injectedText;
searchQuery = injectedText;
// Focus the input so the user sees and can edit the text
requestAnimationFrame(() => inputElement?.focus());
}
});
// Context menu state
let contextMenuVisible = $state(false);
let contextMenuX = $state(0);