fix(workbench): dedup duplicate Home scenes accumulated by seeding race

The Home-seeder in workbench-scenes.svelte.ts writes new scenes without
spaceId, so the creating-hook stamps them with the _personal:<userId>
sentinel. The per-space dedup check filters by the real space UUID and
never finds them — every login adds another Home row, and every visit
to a non-personal Space (Brand/Family/Team) drops yet another seed
into the personal Space.

This is Layer D-soft of the broader cleanup plan
(docs/plans/workbench-seeding-cleanup.md): a one-shot dedup pass that
collapses duplicate "Home" rows per spaceId, merging openApps from the
losers into the survivor (most apps wins, ties by most-recent
updatedAt) and soft-deleting the rest so mana-sync propagates the
cleanup to other devices. Touches only rows that look like fresh
default seeds — anything customized (description, wallpaper, agent
binding, scope tags, non-Home name) is left alone.

Wired in two places: a Dexie v48 upgrade so it runs once per device on
schema bump, and a belt-and-suspenders pass in (app)/+layout.svelte
right after reconcileSentinels() to catch the edge case where
sentinel-stamped rows just collapsed into the same UUID group as
already-reconciled rows.

The structural fix that prevents new duplicates from ever forming
(per-space-seeds registry + deterministic seed ids +
creating-hook hardening) ships in follow-up commits per the plan.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Till JS 2026-04-25 14:08:32 +02:00
parent ac12b61de2
commit d62ae8f1e3
5 changed files with 507 additions and 0 deletions


@@ -1079,6 +1079,29 @@ db.version(46).stores({
_scopeCursor: null,
});
// v48 — One-shot dedup of duplicate "Home" scenes that the seeding race
// in `stores/workbench-scenes.svelte.ts` has been accumulating since the
// Spaces-Foundation migration shipped 2026-04-22. The seeder writes new
// scenes without `spaceId`, so the creating-hook stamps them with the
// `_personal:<userId>` sentinel. The dedup check in
// `onActiveSpaceChanged` filters by the *real* space UUID and never
// finds them — every login adds another Home row.
//
// This upgrade is the soft cleanup. The structural fix (per-space-seeds
// registry + deterministic ids + creating-hook hardening) ships in
// follow-up commits — see docs/plans/workbench-seeding-cleanup.md.
//
// No schema/index change. The upgrade only soft-deletes the loser rows
// (sets `deletedAt`) so mana-sync propagates the cleanup to other
// devices instead of resurrecting them on next pull.
db.version(48).upgrade(async (tx) => {
const { dedupHomeScenesOn } = await import('./scope/dedup-workbench-scenes');
const removed = await dedupHomeScenesOn(tx.table('workbenchScenes'));
if (removed > 0) {
console.info(`[workbench-scenes v48] deduped ${removed} duplicate Home scenes`);
}
});
// ─── Sync Routing ──────────────────────────────────────────
// SYNC_APP_MAP, TABLE_TO_SYNC_NAME, TABLE_TO_APP, SYNC_NAME_TO_TABLE,
// toSyncName() and fromSyncName() are now derived from per-module


@@ -0,0 +1,219 @@
/**
* Unit tests for `dedupHomeScenesOn`, the soft-cleanup pass that
* collapses duplicate "Home" scenes accumulated by the seeding race
* (see docs/plans/workbench-seeding-cleanup.md).
*
* Uses an isolated Dexie db with just a `workbenchScenes` table so the
* test doesn't drag in `database.ts`'s side-effect imports (auth store,
* triggers, funnel tracking, …). The function under test only needs a
* Table reference, so a one-table fixture is enough.
*/
import 'fake-indexeddb/auto';
import { afterEach, beforeEach, describe, expect, it } from 'vitest';
import Dexie, { type Table } from 'dexie';
import type { LocalWorkbenchScene } from '$lib/types/workbench-scenes';
import { dedupHomeScenesOn } from './dedup-workbench-scenes';
// Public LocalWorkbenchScene doesn't carry the runtime-stamped scope
// fields (spaceId/authorId/visibility) — they're added by the creating
// hook. Tests need to set spaceId explicitly to drive grouping, so we
// model the row as the public shape plus an optional spaceId override.
type SceneRow = LocalWorkbenchScene & { spaceId?: string };
interface FixtureDb extends Dexie {
workbenchScenes: Table<SceneRow, string>;
}
let db: FixtureDb;
function makeDb(): FixtureDb {
const fresh = new Dexie(`dedup-test-${crypto.randomUUID()}`) as FixtureDb;
fresh.version(1).stores({ workbenchScenes: 'id, order' });
return fresh;
}
function makeScene(overrides: Partial<SceneRow>): SceneRow {
return {
id: 'scene-default',
name: 'Home',
openApps: [{ appId: 'todo' }, { appId: 'calendar' }, { appId: 'notes' }],
order: 0,
createdAt: '2026-04-25T10:00:00.000Z',
updatedAt: '2026-04-25T10:00:00.000Z',
spaceId: 'space-personal',
...overrides,
};
}
beforeEach(async () => {
db = makeDb();
await db.open();
});
afterEach(async () => {
db.close();
await Dexie.delete(db.name);
});
describe('dedupHomeScenesOn', () => {
it('returns 0 and changes nothing when there are no duplicates', async () => {
await db.workbenchScenes.add(makeScene({ id: 's1' }));
await db.workbenchScenes.add(makeScene({ id: 's2', spaceId: 'space-other' }));
const removed = await dedupHomeScenesOn(db.workbenchScenes);
expect(removed).toBe(0);
const remaining = await db.workbenchScenes
.toArray()
.then((rows) => rows.filter((r) => !r.deletedAt));
expect(remaining).toHaveLength(2);
});
it('keeps one survivor per (spaceId) group and soft-deletes the rest', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 's1', updatedAt: '2026-04-25T09:00:00.000Z' }),
makeScene({ id: 's2', updatedAt: '2026-04-25T10:00:00.000Z' }),
makeScene({ id: 's3', updatedAt: '2026-04-25T11:00:00.000Z' }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
expect(removed).toBe(2);
const all = await db.workbenchScenes.toArray();
const alive = all.filter((r) => !r.deletedAt);
const dead = all.filter((r) => r.deletedAt);
expect(alive).toHaveLength(1);
expect(dead).toHaveLength(2);
});
it('picks the survivor with the most openApps, then most recent updatedAt', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({
id: 'older-richer',
openApps: [{ appId: 'todo' }, { appId: 'calendar' }, { appId: 'notes' }],
updatedAt: '2026-04-25T09:00:00.000Z',
}),
makeScene({
id: 'newer-leaner',
openApps: [{ appId: 'todo' }],
updatedAt: '2026-04-25T11:00:00.000Z',
}),
]);
await dedupHomeScenesOn(db.workbenchScenes);
const alive = await db.workbenchScenes
.toArray()
.then((rows) => rows.filter((r) => !r.deletedAt));
expect(alive.map((r) => r.id)).toEqual(['older-richer']);
});
it('merges openApps from losers into the survivor (dedup by appId)', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({
id: 'survivor',
openApps: [{ appId: 'todo' }, { appId: 'calendar' }, { appId: 'notes' }],
}),
makeScene({
id: 'loser-extra',
openApps: [{ appId: 'notes' }, { appId: 'mood' }],
}),
]);
await dedupHomeScenesOn(db.workbenchScenes);
const survivor = await db.workbenchScenes.get('survivor');
expect(survivor?.openApps?.map((a) => a.appId).sort()).toEqual([
'calendar',
'mood',
'notes',
'todo',
]);
});
it('keeps groups separate by spaceId — no cross-space merging', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 'a1', spaceId: 'space-A' }),
makeScene({ id: 'a2', spaceId: 'space-A' }),
makeScene({ id: 'b1', spaceId: 'space-B' }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
expect(removed).toBe(1);
const alive = await db.workbenchScenes
.toArray()
.then((rows) => rows.filter((r) => !r.deletedAt));
expect(alive).toHaveLength(2);
expect(alive.map((r) => r.spaceId).sort()).toEqual(['space-A', 'space-B']);
});
it('leaves user-customized scenes alone (description / wallpaper / agent / scope)', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 's1' }),
makeScene({ id: 's2', description: 'Mein Workspace' }),
makeScene({ id: 's3', viewingAsAgentId: 'agent-1' }),
makeScene({ id: 's4', scopeTagIds: ['tag-1'] }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
// s1 is the only mergeable row in its group of 1 → no removal.
expect(removed).toBe(0);
const alive = await db.workbenchScenes
.toArray()
.then((rows) => rows.filter((r) => !r.deletedAt));
expect(alive).toHaveLength(4);
});
it('leaves non-Home scenes alone even when duplicated by name', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 'd1', name: 'Deep Work' }),
makeScene({ id: 'd2', name: 'Deep Work' }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
expect(removed).toBe(0);
});
it('skips already-tombstoned rows', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 's1' }),
makeScene({ id: 's2', deletedAt: '2026-04-24T10:00:00.000Z' }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
// Only one live row in the group → no removal.
expect(removed).toBe(0);
const stillDeleted = await db.workbenchScenes.get('s2');
expect(stillDeleted?.deletedAt).toBe('2026-04-24T10:00:00.000Z');
});
it('is idempotent — running twice produces the same end state', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 's1' }),
makeScene({ id: 's2' }),
makeScene({ id: 's3' }),
]);
const firstRemoved = await dedupHomeScenesOn(db.workbenchScenes);
const secondRemoved = await dedupHomeScenesOn(db.workbenchScenes);
expect(firstRemoved).toBe(2);
expect(secondRemoved).toBe(0);
});
it('skips rows without a string spaceId (ambiguous group key)', async () => {
await db.workbenchScenes.bulkAdd([
makeScene({ id: 's1', spaceId: undefined }),
makeScene({ id: 's2', spaceId: undefined }),
]);
const removed = await dedupHomeScenesOn(db.workbenchScenes);
expect(removed).toBe(0);
});
});


@@ -0,0 +1,123 @@
/**
* Dedup pass for the `workbenchScenes` table: collapses the duplicate
* "Home" scenes the seeding race in `workbench-scenes.svelte.ts` has been
* accumulating since the Spaces-Foundation migration shipped 2026-04-22.
*
* Background: the seeder writes rows without `spaceId`, so the Dexie
* creating-hook stamps `_personal:<userId>` (sentinel). The dedup check
* in `onActiveSpaceChanged` filters by the *real* space UUID and never
* finds them, so every login adds duplicates. Full root-cause + the
* upcoming structural fix (per-space-seeds registry + deterministic
* ids + creating-hook hardening) live in
* `docs/plans/workbench-seeding-cleanup.md`.
*
* This file is the soft cleanup: idempotent, content-aware, takes
* `name === 'Home'` rows that look like default seeds (no description /
* wallpaper / viewingAsAgentId / scopeTagIds, i.e. nothing the user
* has customised), groups them by `spaceId`, picks one survivor per
* group, merges every loser's `openApps` into it, and soft-deletes the
* rest so mana-sync propagates the cleanup to other devices.
*
* Pure: takes a Dexie Table reference, never reaches into the live
* `db`. That keeps it import-cycle-free so it can run inside a
* `db.version(N).upgrade()` callback (where it gets `tx.table(...)`)
* AND from app-runtime callers (where they pass `db.table(...)`).
*/
import type { Table } from 'dexie';
import type { LocalWorkbenchScene, WorkbenchSceneApp } from '$lib/types/workbench-scenes';
const HOME_NAME = 'Home';
/**
* A scene is a candidate for merging when it looks like a fresh default
* "Home" seed anything the user might have set themselves disqualifies
* the row so we never destroy custom layouts.
*/
function isDefaultHomeSeed(row: LocalWorkbenchScene): boolean {
if (row.deletedAt) return false;
if (row.name !== HOME_NAME) return false;
if (row.description) return false;
if (row.wallpaper) return false;
if (row.viewingAsAgentId) return false;
if (row.scopeTagIds && row.scopeTagIds.length > 0) return false;
return true;
}
/**
* Run dedup on the given `workbenchScenes` table. Returns the number of
* rows soft-deleted. Idempotent; safe to invoke repeatedly.
*
* The caller is expected to wrap this in a transaction when called
* outside of a Dexie `upgrade()` callback (upgrade callbacks already
* give a transaction-bound `tx.table()` reference).
*/
export async function dedupHomeScenesOn(
table: Table<LocalWorkbenchScene, string>
): Promise<number> {
const rows = await table.toArray();
// Bucket by spaceId. Rows without a spaceId can't be safely grouped
// (their target space is ambiguous) — skip them. Rows that look like
// user-customised scenes are also out, even if they happen to be
// named "Home", so a deliberate two-Home setup stays intact.
const groups = new Map<string, LocalWorkbenchScene[]>();
for (const row of rows) {
if (!isDefaultHomeSeed(row)) continue;
const spaceId = (row as { spaceId?: unknown }).spaceId;
if (typeof spaceId !== 'string' || !spaceId) continue;
let group = groups.get(spaceId);
if (!group) {
group = [];
groups.set(spaceId, group);
}
group.push(row);
}
const now = new Date().toISOString();
let removed = 0;
for (const group of groups.values()) {
if (group.length <= 1) continue;
// Survivor pick: the row with the most openApps wins (it's the
// most likely to carry the user's accumulated app additions),
// breaking ties by most-recent updatedAt.
group.sort((a, b) => {
const aLen = a.openApps?.length ?? 0;
const bLen = b.openApps?.length ?? 0;
if (aLen !== bLen) return bLen - aLen;
const aTime = a.updatedAt ?? '';
const bTime = b.updatedAt ?? '';
return bTime.localeCompare(aTime);
});
const [survivor, ...losers] = group;
// Merge every loser's openApps into the survivor, dedupe by
// appId so the user doesn't end up with two `todo` panels.
const merged: WorkbenchSceneApp[] = [...(survivor.openApps ?? [])];
const seen = new Set(merged.map((a) => a.appId));
for (const loser of losers) {
for (const app of loser.openApps ?? []) {
if (!seen.has(app.appId)) {
seen.add(app.appId);
merged.push(app);
}
}
}
const survivorAppCount = survivor.openApps?.length ?? 0;
if (merged.length !== survivorAppCount) {
await table.update(survivor.id, { openApps: merged, updatedAt: now });
}
// Soft-delete the losers via deletedAt so the unified sync engine
// propagates the dedup to other devices instead of resurrecting
// the rows on next pull.
for (const loser of losers) {
await table.update(loser.id, { deletedAt: now, updatedAt: now });
removed++;
}
}
return removed;
}


@@ -614,6 +614,23 @@
if (rewritten > 0) {
console.info(`[spaces] reconciled ${rewritten} sentinel records to active space`);
}
// Belt-and-suspenders dedup of duplicate "Home" workbench
// scenes. The Dexie v48 upgrade already does one pass at
// schema-bump time; this second pass covers the edge case
// where reconcileSentinels just collapsed sentinel-stamped
// rows into the same space-id as already-reconciled rows,
// producing fresh duplicates. Idempotent — a no-op when
// nothing matches. The structural fix that prevents new
// duplicates ships separately, see
// docs/plans/workbench-seeding-cleanup.md.
const { dedupHomeScenesOn } = await import('$lib/data/scope/dedup-workbench-scenes');
const dedupedCount = await db.transaction('rw', 'workbenchScenes', () =>
dedupHomeScenesOn(db.table('workbenchScenes'))
);
if (dedupedCount > 0) {
console.info(`[workbench-scenes] deduped ${dedupedCount} duplicate Home scenes`);
}
} catch (err) {
console.warn('[spaces] active-space boot failed — sync will use sentinel scope', err);
}


@@ -0,0 +1,125 @@
# Workbench Seeding: Cleanup & Architecture Hardening
## Status (2026-04-25)
Retroactive cleanup for `workbenchScenes`. Addresses a race-condition bug that has been creating extra "Home" scenes on every login since the Spaces migration sweep (2026-04-22), and takes the opportunity to eliminate the whole bug class structurally.
## Symptom
User reports: "lots of Home scenes, always with the same apps open". In the personal-Space workbench's IndexedDB, `name='Home'` scenes accumulate across sessions.
## Bug Analysis: Why It Happens
Three layers of causes interlock:
### 1. Footgun in the creating hook (`database.ts:1330`)
When a new record is written to a space-scoped table without a `spaceId`, the hook automatically stamps `spaceId = '_personal:<userId>'` (the sentinel). Originally intended as a migration bridge for v28, today it silently swallows every code path that forgets to set `spaceId` explicitly.
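For orientation, a minimal sketch of roughly what that stamping does; helper names and the table list are assumptions, not the actual `database.ts` implementation:
```ts
// Illustrative sketch only; the real hook installation in database.ts differs.
import Dexie from 'dexie';

const SPACE_SCOPED_TABLES = ['workbenchScenes']; // hypothetical subset

export function installSentinelStamping(db: Dexie, getUserId: () => string): void {
  for (const tableName of SPACE_SCOPED_TABLES) {
    db.table(tableName).hook('creating', (_primKey, obj: { spaceId?: string }) => {
      if (!obj.spaceId) {
        // Originally a v28 migration bridge: silently fall back to the personal sentinel.
        obj.spaceId = `_personal:${getUserId()}`;
      }
    });
  }
}
```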
### 2. Three scattered seeding paths (`workbench-scenes.svelte.ts`)
- Lines 293-296: `count === 0` → `ensureSeedScene()` in `initialize()`
- Lines 305-326: the `onActiveSpaceChanged` replay-on-register fires immediately → checks `r.spaceId === space.id` → `ensureSeedScene()`
- Lines 305-326 again: on every later Space switch
`ensureSeedScene()` sets **no** `spaceId` → the hook stamps the sentinel → but the dedup filter looks for the real space UUID → always misses → seeds again and again.
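A condensed sketch of that broken interaction (not the verbatim store code; `db` is the app database and `DEFAULT_HOME_APPS` the default app list):
```ts
// Condensed, hypothetical sketch; the real code spans initialize() and onActiveSpaceChanged.
async function ensureSeedScene(): Promise<void> {
  const now = new Date().toISOString();
  await db.workbenchScenes.add({
    id: crypto.randomUUID(), // random id, and no spaceId → the hook stamps '_personal:<userId>'
    name: 'Home',
    openApps: DEFAULT_HOME_APPS,
    order: 0,
    createdAt: now,
    updatedAt: now,
  });
}

async function onActiveSpaceChanged(space: { id: string }): Promise<void> {
  const rows = await db.workbenchScenes.toArray();
  // Filters by the real space UUID; sentinel-stamped seeds never match,
  // so the check always concludes "no Home scene yet" and seeds again.
  const hasHome = rows.some((r) => !r.deletedAt && r.spaceId === space.id && r.name === 'Home');
  if (!hasHome) await ensureSeedScene();
}
```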
### 3. Idempotence built on random UUIDs
Every seed gets a `crypto.randomUUID()`. That makes the "is it already there?" checks depend on content comparisons instead of primary-key constraints. Every race produces a new row because the database has nothing to block on.
### Race mechanics
`+layout.svelte:611` starts `loadActiveSpace().then(reconcileSentinels)` in parallel with `workbenchScenesStore.initialize()` in `+page.svelte:69`. `reconcileSentinels` rewrites sentinel rows to the real personal-Space UUID **once** per boot. Mid-session seeds (written after the reconcile) stay on the sentinel and only get reconciled on the NEXT boot. Result: every session adds duplicates.
Brand/Family/Team Spaces amplify the effect: every switch into one of them finds no scene under the Brand UUID → seeds → but via sentinel stamping the seed lands in the personal Space. The personal workbench fills up on every switch into another Space.
## Best-practice solution in four layers
Instead of only patching the symptom (passing `spaceId` through), all of the underlying footguns are addressed so that the whole bug class becomes structurally impossible.
### Layer D-soft: Clean up existing duplicates
**Goal:** reduce all already-accumulated Home duplicates to one survivor row per Space without touching user-customised scenes.
- New file `data/scope/dedup-workbench-scenes.ts` exports `dedupHomeScenes(): Promise<number>`.
- Logic:
  1. Read all non-tombstoned rows, group by `(spaceId, name)`.
  2. Only groups with `name === 'Home'` AND `length > 1` AND no row with `description` / `wallpaper` / `viewingAsAgentId` / `scopeTagIds` (user customisations).
  3. Survivor pick: most `openApps`, then most recent `updatedAt`.
  4. Merge: carry all of the losers' `openApps` (deduped by `appId`) into the survivor; nothing is lost.
  5. Soft-delete the losers (`deletedAt = now`) so mana-sync propagates the cleanup to other devices.
- Call sites (idempotent; a second run is a no-op):
  - **Dexie v48 upgrade** in `database.ts`. Runs exactly once per device on the schema bump.
  - **`+layout.svelte` `handleAuthReady`**, directly after `reconcileSentinels()`. Catches the edge case where sentinel rows move into the UUID group after the reconcile and form new duplicates there.
- Tests: `data/scope/dedup-workbench-scenes.test.ts` covers: identical duplicates → 1 survivor; the openApps merge dedupes by `appId`; different Spaces stay separate; user-customised scenes (custom `description`, `wallpaper`, `viewingAsAgentId`) stay untouched; non-`Home` names stay untouched.
### Layer B + C: Central per-Space seeder registry with deterministic IDs
**Goal:** replace all race-prone paths with *one* idempotent seeding entry point.
- New module `data/scope/per-space-seeds.ts` (a sketch of a possible implementation follows this list):
```ts
type Seeder = (spaceId: string) => Promise<void>;
const seeders = new Map<string, Seeder>();
export function registerSpaceSeed(name: string, fn: Seeder): void;
export async function runSpaceSeeds(spaceId: string): Promise<void>;
```
- Caller hook: `setActiveSpace()` in `active-space.svelte.ts` calls a single `void runSpaceSeeds(space.id)` after `notifyHandlers(space)`.
- The workbench module registers itself via a side-effect import (existing pattern, as in `seed-registry.ts`):
```ts
registerSpaceSeed('workbench-home', async (spaceId) => {
const id = `seed-home-${spaceId}`;
await db.workbenchScenes.put({ id, spaceId, name: 'Home', openApps: DEFAULT_HOME_APPS, ... });
});
```
- The deterministic ID `seed-home-${spaceId}` makes the seed natively idempotent: a second run overwrites a bit-for-bit identical row, so no duplicate is possible. Race conditions are structurally ruled out.
- Remove from `workbench-scenes.svelte.ts`:
  - Lines 293-296: the `count === 0` block in `initialize()`.
  - Lines 305-326: the `onActiveSpaceChanged` handler (only the seed block; the LS read stays).
  - The `ensureSeedScene()` function (no longer needed).
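A minimal sketch of what the registry module could look like under the interface above (illustrative only; error handling, ordering and the shipped naming may differ):
```ts
// data/scope/per-space-seeds.ts; illustrative sketch, not the shipped implementation.
type Seeder = (spaceId: string) => Promise<void>;

const seeders = new Map<string, Seeder>();

export function registerSpaceSeed(name: string, fn: Seeder): void {
  // Last registration wins; modules register once via side-effect import.
  seeders.set(name, fn);
}

export async function runSpaceSeeds(spaceId: string): Promise<void> {
  for (const [name, seed] of seeders) {
    try {
      await seed(spaceId);
    } catch (err) {
      // One failing seeder must not block the Space switch or the other seeders.
      console.warn(`[per-space-seeds] seeder '${name}' failed for space ${spaceId}`, err);
    }
  }
}
```
Because the workbench seeder writes with a deterministic id via `put`, re-running `runSpaceSeeds` on every Space switch is harmless; reruns overwrite the same row instead of adding new ones.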
### Layer A: Hook throws instead of stamping the sentinel
**Goal:** turn a forgotten `spaceId` into a hard fail instead of silent corruption.
- Change `database.ts:1330`: if `spaceId` is undefined/null AND the table is not in `USER_LEVEL_TABLES` (a fuller sketch of the hardened hook follows this list):
```ts
throw new Error(
`[scope] write to space-scoped table '${tableName}' without spaceId. ` +
`Set spaceId explicitly or move the table to USER_LEVEL_TABLES.`
);
```
- `reconcileSentinels` can stay (it keeps rewriting historical sentinel data, if any exists); new writes never hit the sentinel path again.
- Expected: this uncovers 2-3 silent bugs in other modules that have slipped through unnoticed since the v28 migration. That is exactly what makes this layer valuable.
- Risk mitigation: before Layer A, run an audit (`grep` + code review) of the existing `.add(` call sites across all modules. Which ones don't set `spaceId`? Fix those sites up front.
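A minimal sketch of the hardened hook, assuming hypothetical helper names; only the thrown error text comes from the plan above:
```ts
// Illustrative only; table lists and installation details are assumptions.
import Dexie from 'dexie';

const USER_LEVEL_TABLES = new Set<string>(['settings']); // hypothetical allowlist

export function installScopeGuard(db: Dexie, spaceScopedTables: string[]): void {
  for (const tableName of spaceScopedTables) {
    db.table(tableName).hook('creating', (_primKey, obj: { spaceId?: string | null }) => {
      if (obj.spaceId == null && !USER_LEVEL_TABLES.has(tableName)) {
        // Hard fail instead of silently stamping the '_personal:<userId>' sentinel.
        throw new Error(
          `[scope] write to space-scoped table '${tableName}' without spaceId. ` +
            `Set spaceId explicitly or move the table to USER_LEVEL_TABLES.`
        );
      }
    });
  }
}
```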
### Layer D-hard: Lock in the cleanup as a schema invariant
**Goal:** anchor the deterministic seed-ID contract in code as the firmly expected state.
- 1-2 days after Layers B+C+A. Soak time so that all devices have seen the deduped state via sync.
- Dexie v49 (or higher) migration: rename all `workbenchScenes` with `name === 'Home'` and without the `seed-home-` ID prefix to `seed-home-${spaceId}`. If that conflicts with an existing deterministic survivor: delete the old row. (See the sketch after this list.)
- Code assumption: from here on, queries may use `db.workbenchScenes.get(\`seed-home-\${spaceId}\`)` directly, without a by-name filter fallback.
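A minimal sketch of how such a v49 upgrade could look (hypothetical; the actual migration ships later and may differ, e.g. in how it treats sync):
```ts
// Illustrative sketch of the rename described above. Dexie primary keys can't be
// changed in place, so legacy rows are re-added under the deterministic id and the
// old row removed (the plan says "delete"; whether that should instead be a soft
// delete for sync purposes is a follow-up decision).
db.version(49).upgrade(async (tx) => {
  const table = tx.table('workbenchScenes');
  const rows = await table.toArray();
  for (const row of rows) {
    if (row.deletedAt || row.name !== 'Home') continue;
    if (typeof row.spaceId !== 'string' || !row.spaceId) continue;
    if (String(row.id).startsWith('seed-home-')) continue;
    const targetId = `seed-home-${row.spaceId}`;
    if (await table.get(targetId)) {
      // A deterministic survivor already exists → drop the legacy row.
      await table.delete(row.id);
    } else {
      await table.add({ ...row, id: targetId });
      await table.delete(row.id);
    }
  }
});
```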
## Order of work
1. ✅ **Layer D-soft**: cleans up your concrete IndexedDB immediately and blocks nothing. Low risk. **THIS PR.**
2. **Layer B + C**: fixes the bug structurally. Its own PR, after a D-soft soak (1 day).
3. **Layer A**: the deep clean. Its own PR, after the audit plus fixing the forgotten call sites.
4. **Layer D-hard**: lock in the code assumption. Its own PR.
## Success criteria
- After D-soft: the user sees exactly one `Home` scene per Space, with all openApps merged. Other custom scenes unchanged. Sync propagates the cleanup to all devices.
- After B+C: login → no new duplicates, regardless of race ordering. Switching to a foreign Space creates the Home scene **there**, not in the personal Space.
- After A: every unintended `add()` without a `spaceId` fails with a clear error message.
- After D-hard: deterministic seed IDs are the only way "Home" is looked up in the DB.
## Risks & mitigations
- **D-soft soft-deletes get pulled via sync** → other devices suddenly see fewer scenes. **Intended**; dedup is exactly the point. Sync handles soft-deletes via `deletedAt` correctly.
- **D-soft false positive: the user manually created two legitimate "Home" scenes** → the heuristic ("no description/wallpaper/agent/scope") excludes customised rows; only pure default duplicates get merged. Edge case: the user deliberately created two identical "Home" scenes, both with default apps. Very unlikely, and the consequence (a merge into one row with the union of the apps) is benign.
- **B+C lands before Layer A**: the workbench path fixes the bug, but other modules could still silently stamp the wrong scope. Acceptable, because B+C closes the user-visible bug.
- **A: existing tests that write without `spaceId`** → audit step before A; `vitest run` surfaces them.
## Out of scope
- Server-truthed scene creation (mana-sync seeding directly into PG on Space create). Breaks local-first for a single use case; not the right tradeoff for the data model.
- Unifying with the workbench-templates apply pattern (a similar seed-handler registry already exists in `apply-template.ts`). Interesting, but not part of this plan.