Adds an opaque JSON `actor` column alongside the existing field_timestamps
so cross-device consumers can distinguish `user` / `ai` / `system` writes. The
server never parses the shape — it just stores and re-emits the blob the
webapp stamped in its Dexie hook.
- `sync/types.go` — Change.Actor as json.RawMessage with omitempty; nil
for pre-actor clients so the wire format stays backward-compatible
(see the sketch after this list)
- `store/postgres.go`
- Migrate: CREATE TABLE includes `actor JSONB` for fresh DBs;
ALTER TABLE ADD COLUMN IF NOT EXISTS actor JSONB for existing ones
(idempotent, safe to re-run)
- RecordChange signature takes json.RawMessage; pgx writes nil as NULL
- All three SELECT paths (GetChangesSince, GetAllChangesSince,
StreamAllUserChanges) now return actor and Scan it into ChangeRow.Actor
- ChangeRow.Actor added with doc noting "missing = user" consumer rule
- `sync/handler.go` — Change.Actor threaded through HandleSync →
RecordChange, and populated on both changeFromRow (pull/POST replies)
and convertChanges (SSE stream)
- Tests: roundtrip of an AI-actor payload + omitempty verification for
pre-actor clients. All existing tests still pass.
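A minimal sketch of the wire type described above; fields other than Actor are illustrative, not copied from the service:

```go
import "encoding/json"

// Sketch only: fields besides Actor are placeholders.
type Change struct {
	Collection string `json:"collection"`
	DocID      string `json:"docId"`
	// Actor is stored and re-emitted verbatim, never parsed server-side.
	// A nil RawMessage is dropped entirely by omitempty, so pre-actor
	// clients keep seeing the exact wire format they already speak, and
	// pgx writes the nil as SQL NULL on insert.
	Actor json.RawMessage `json:"actor,omitempty"`
}
```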
Webapp types still need `actor?: Actor` on SyncChange + PendingChange to
match the wire, and applyServerChanges needs to stamp __lastActor /
__fieldActors from incoming changes for Workbench attribution on other
devices — both tracked as separate follow-ups.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- DATA_LAYER_AUDIT.md: new section 8 covering the export/import flow
end-to-end — architecture diagram, .mana format, protocol-stability
commitments we locked in pre-launch (eventId + schemaVersion + op
vocab + tombstones-forever), encryption-boundary argument, file
map, and the remaining backup backlog (M4b, M5, signature,
resumable download, dedup table).
- services/mana-sync/CLAUDE.md:
  - /backup/export row in the API table, with an explicit note that it
    sits outside the billing gate
  - new Backup / Restore section with a format sketch and the split
    between writer.go (pure) and handler.go (shim)
  - test-coverage line now mentions the backup cases
  - project-structure tree lists backup/*.go
  - Security section notes that RLS still applies to the export path
No code changes.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Refactor: HTTP handler becomes a thin shim over a pure WriteBackup(w,
userID, createdAt, iter) function. RowIterator abstracts the store, so
tests feed synthetic ChangeRow slices and production feeds
StreamAllUserChanges. Zero behavior change in production — same bytes
on the wire.
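A sketch of that seam; ChangeRow's definition and WriteBackup's body are elided, and the helper names are illustrative:

```go
// RowIterator abstracts the row source: tests back it with a slice of
// ChangeRow values, production backs it with StreamAllUserChanges, and
// WriteBackup(w, userID, createdAt, iter) stays pure either way.
type RowIterator interface {
	Next() (ChangeRow, bool, error) // next row, ok, error
}

// sliceIterator is the test-side implementation: replay a fixed slice.
type sliceIterator struct {
	rows []ChangeRow
	i    int
}

func (s *sliceIterator) Next() (ChangeRow, bool, error) {
	if s.i >= len(s.rows) {
		return ChangeRow{}, false, nil
	}
	row := s.rows[s.i]
	s.i++
	return row, true, nil
}
```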
Tests (all pass):
- TestWriteBackup_Roundtrip: three rows across two apps, assert zip has
2 entries, events.jsonl has 3 JSON lines in order, insert omits
fieldTimestamps, update surfaces them, manifest apps are sorted,
eventsSha256 equals a recomputed sha of the decompressed body.
- TestWriteBackup_EmptyUser: empty userID refused up-front.
- TestWriteBackup_NoRows: zero-row export still produces a valid zip
with an empty events.jsonl and a manifest with eventCount=0 and a
non-empty sha (sha of empty input).
- TestWriteBackup_DefaultsSchemaVersionZeroRowsToOne: legacy rows with
schema_version=0 clamp to 1 so the manifest never claims a protocol
version that never existed.
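The shape those tests take against the pure writer, sketched; fixtures and most assertions are elided:

```go
import (
	"archive/zip"
	"bytes"
	"testing"
	"time"
)

func TestWriteBackup_Roundtrip_Sketch(t *testing.T) {
	rows := []ChangeRow{ /* three rows across two apps, elided */ }
	var buf bytes.Buffer
	if err := WriteBackup(&buf, "user-1", time.Now(), &sliceIterator{rows: rows}); err != nil {
		t.Fatal(err)
	}
	zr, err := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()))
	if err != nil {
		t.Fatal(err)
	}
	if len(zr.File) != 2 { // events.jsonl + the manifest
		t.Fatalf("want 2 zip entries, got %d", len(zr.File))
	}
}
```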
Paired with the vitest zip-parser suite on the TS side, this closes the
Go-writes / JS-reads round-trip without needing a live mana-sync.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Recovering three files dropped when a parallel terminal reset past the
original M1 commit:
- cmd/server/main.go: register GET /backup/export outside billingMiddleware
- lib/api/services/backup.ts: browser-side downloadBackup() helper
- settings/my-data/+page.svelte: "Backup & Wiederherstellung" ("Backup & Restore") section
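A sketch of the recovered wiring, assuming Go 1.22 ServeMux patterns; middleware and handler names are illustrative:

```go
import "net/http"

// Hypothetical wiring; only the placement outside the billing gate is
// asserted by this commit.
func routes(mux *http.ServeMux) {
	// Export stays reachable when credits run out: authenticated, but
	// deliberately registered outside billingMiddleware.
	mux.Handle("GET /backup/export", authMiddleware(http.HandlerFunc(backupExport)))

	// Sync endpoints keep the billing gate.
	mux.Handle("POST /sync/{appId}", authMiddleware(billingMiddleware(http.HandlerFunc(handleSync))))
}
```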
Pairs with the earlier backup handler + schema_version work already on
main (79996f946). With this commit the endpoint is actually reachable
end-to-end and the download button works.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- sync_changes gains schema_version column (default 1, idempotent ADD)
- Change/Changeset carry schemaVersion; server refuses > MaxSupported
- server->client changes now carry eventId + schemaVersion so the
restore path can dedup via eventId and route through a migration
chain keyed on schemaVersion
- backup JSONL gains schemaVersion per line
Pre-M2 clients (which omit the field) are treated as v1 for compatibility.
This is the stability contract we commit to before launch: once v1
events are in the wild, all future builds must replay them forward.
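A sketch of the acceptance rule this implies; names are illustrative:

```go
import "fmt"

const MaxSupportedSchemaVersion = 1

func normalizeSchemaVersion(v int) (int, error) {
	if v == 0 {
		return 1, nil // pre-M2 client: field omitted, treat as v1
	}
	if v > MaxSupportedSchemaVersion {
		// A newer client is speaking a protocol this build cannot
		// replay; refuse rather than store events we would misread.
		return 0, fmt.Errorf("unsupported schemaVersion %d (max %d)", v, MaxSupportedSchemaVersion)
	}
	return v, nil
}
```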
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Dockerfile only copied services/mana-sync, but go.mod has a replace
directive pointing to ../../packages/shared-go, which must be inside the
build context. Switch the build context to the repo root and copy both
packages.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add MANA_CREDITS_URL and MANA_SERVICE_KEY to configuration table
- Document the billing gate on sync endpoints (402 behavior, 5-minute cache, fail-open; see the sketch after this list)
- Add billing/check.go to project structure
- Add stream endpoint to API table
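A sketch of the documented gate behavior; the struct and the credits call are assumptions, while the five-minute TTL and fail-open rule come from the docs above:

```go
import (
	"context"
	"sync"
	"time"
)

type cacheEntry struct {
	allowed bool
	until   time.Time
}

// Gate sketches the shape of billing/check.go, not its actual code.
type Gate struct {
	mu      sync.Mutex
	entries map[string]cacheEntry
	query   func(ctx context.Context, userID string) (bool, error) // calls MANA_CREDITS_URL, auth via MANA_SERVICE_KEY
}

func (g *Gate) Allowed(ctx context.Context, userID string) bool {
	g.mu.Lock()
	e, ok := g.entries[userID]
	g.mu.Unlock()
	if ok && time.Now().Before(e.until) {
		return e.allowed // cached verdicts live for 5 minutes
	}
	allowed, err := g.query(ctx, userID)
	if err != nil {
		return true // fail-open: a credits outage must never block sync
	}
	g.mu.Lock()
	g.entries[userID] = cacheEntry{allowed: allowed, until: time.Now().Add(5 * time.Minute)}
	g.mu.Unlock()
	return allowed // false → the handler answers 402 Payment Required
}
```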
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Defense-in-depth on top of the existing application-level WHERE clauses:
- Migrate() now ENABLE + FORCE row level security on sync_changes and
installs a policy that gates rows on current_setting('app.current_user_id').
FORCE makes the policy apply to the table owner too, so the application
role used by mana-sync cannot bypass it regardless of grants.
- New withUser(ctx, userID, fn) helper opens a transaction and calls
set_config('app.current_user_id', userID, true) before running fn (see
the sketch after this list).
Empty userIDs are rejected up-front so an unauthenticated request can
never reach the database with an empty RLS scope (which would match
every row).
- RecordChange / GetChangesSince / GetAllChangesSince all run inside
withUser. WITH CHECK on the policy double-validates the user_id column
on insert against the active session, so a future code path that
forgets the WHERE clause cannot leak data.
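A minimal sketch of the withUser pattern, assuming pgx v5; only the set_config call and the empty-userID guard are asserted by this commit:

```go
import (
	"context"
	"errors"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

type Store struct{ pool *pgxpool.Pool }

func (s *Store) withUser(ctx context.Context, userID string, fn func(pgx.Tx) error) error {
	if userID == "" {
		// An empty scope would match every row under the policy, so an
		// unauthenticated request is refused before touching Postgres.
		return errors.New("withUser: empty userID")
	}
	tx, err := s.pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // no-op once committed

	// true = transaction-local: the setting cannot leak across pooled
	// connections, and the RLS policy reads it via current_setting().
	if _, err := tx.Exec(ctx, `SELECT set_config('app.current_user_id', $1, true)`, userID); err != nil {
		return err
	}
	if err := fn(tx); err != nil {
		return err
	}
	return tx.Commit(ctx)
}
```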
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bug 1: NotifyUser() early-returned when no WebSocket clients existed,
skipping SSE subscriber notifications entirely. Fixed by restructuring
to check WS clients and SSE subscribers independently.
Bug 2: SSE stream cursor defaulted to client's `since` parameter when
no initial data existed. If `since` was in the future (or very recent),
live updates had created_at < cursor and were silently filtered out.
Fixed by defaulting cursor to now() when no initial data is returned.
Bug 3: NotifyUser used original sseSubs slice instead of sseSubsCopy
after releasing the read lock (race condition).
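A sketch of the restructured NotifyUser with stand-in types; field names are illustrative:

```go
import "sync"

// Minimal stand-ins so the sketch is self-contained; real types live in hub.go.
type Notification struct{ /* payload elided */ }
type Client struct{ /* websocket conn elided */ }

func (c *Client) send(n Notification) { /* write to the socket, elided */ }

type Hub struct {
	mu      sync.RWMutex
	clients map[string][]*Client           // WebSocket clients per user
	sseSubs map[string][]chan Notification // SSE subscribers per user
}

func (h *Hub) NotifyUser(userID string, n Notification) {
	h.mu.RLock()
	wsCopy := append([]*Client(nil), h.clients[userID]...)
	sseCopy := append([]chan Notification(nil), h.sseSubs[userID]...)
	h.mu.RUnlock()

	// Bug 1 fix: WS and SSE paths are independent; no early return
	// when there are no WebSocket clients.
	for _, c := range wsCopy {
		c.send(n)
	}
	// Bug 3 fix: fan out over the copy taken under the lock, never the
	// live slice.
	for _, ch := range sseCopy {
		select {
		case ch <- n:
		default: // slow subscriber: drop instead of blocking the hub
		}
	}
}
```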
Verified E2E: Push from client A → SSE stream on client B receives
live change event with correct data within ~1 second.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New endpoint GET /sync/{appId}/stream sends Server-Sent Events with
change data directly, replacing the WebSocket notification + HTTP pull
round-trip pattern.
Server (Go):
- HandleStream() in handler.go: SSE endpoint with initial sync + live streaming (see the sketch below)
- Hub.Subscribe()/Unsubscribe() in hub.go: channel-based SSE subscriber system
- Notification type for type-safe SSE events
- convertChanges() helper extracted from duplicated code
- WriteTimeout set to 0 for SSE long-lived connections
Protocol: Client connects to /sync/{appId}/stream?collections=a,b&since=...
Server sends initial changes, then streams live changes as other clients sync.
Heartbeat every 30s keeps connection alive. Push still uses POST /sync/{appId}.
WebSocket remains available as fallback (not removed).
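A sketch of the streaming loop under that protocol; the Handler wiring, userFromContext, and the Notification payload field are assumptions:

```go
import (
	"fmt"
	"net/http"
	"time"
)

func (h *Handler) HandleStream(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	appID := r.PathValue("appId")          // Go 1.22 route variable
	userID := userFromContext(r.Context()) // hypothetical auth helper

	sub := h.hub.Subscribe(userID, appID) // <-chan Notification
	defer h.hub.Unsubscribe(sub)

	// Phase 1: initial sync since `since` (elided).
	// Phase 2: live streaming with a 30s heartbeat.
	heartbeat := time.NewTicker(30 * time.Second)
	defer heartbeat.Stop()
	for {
		select {
		case <-r.Context().Done():
			return
		case <-heartbeat.C:
			fmt.Fprint(w, ": heartbeat\n\n") // SSE comment line keeps proxies alive
			flusher.Flush()
		case n := <-sub:
			fmt.Fprintf(w, "event: change\ndata: %s\n\n", n.Payload)
			flusher.Flush()
		}
	}
}
```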
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Server now returns hasMore: true when there are more than 1000 changes
pending for a collection. Client continues pulling in a loop until
hasMore is false, using the last row's timestamp as cursor.
Prevents data loss after long offline periods where >1000 changes
accumulated for a single collection.
Server changes (Go):
- GetChangesSince() accepts limit parameter
- HandlePull() fetches limit+1, trims, sets hasMore (see the sketch below)
- SyncedUntil uses last row's timestamp when paginating
Client changes (TypeScript):
- Pull loop: while (hasMore) { fetch → apply → advance cursor }
- Cursor only persisted after all pages fetched
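The server-side trim, sketched; the store call shape and row type are assumed:

```go
import (
	"context"
	"time"
)

const pageLimit = 1000

func (h *Handler) pullPage(ctx context.Context, userID, appID, coll string, since time.Time) ([]ChangeRow, bool, time.Time, error) {
	// Fetch one extra row: its presence is the hasMore signal.
	rows, err := h.store.GetChangesSince(ctx, userID, appID, coll, since, pageLimit+1)
	if err != nil {
		return nil, false, since, err
	}
	hasMore := len(rows) > pageLimit
	if hasMore {
		rows = rows[:pageLimit]
	}
	// While paginating, syncedUntil is the last returned row's timestamp,
	// not "now"; otherwise the trimmed rows would be skipped forever.
	syncedUntil := since
	if len(rows) > 0 {
		syncedUntil = rows[len(rows)-1].CreatedAt
	}
	return rows, hasMore, syncedUntil, nil
}
```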
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add unified /ws endpoint that serves all app notifications over a single connection.
The server now includes appId in the sync-available message payload so the client
knows which app to pull. Legacy /ws/{appId} endpoint remains for backward compatibility.
Backend (Go):
- hub.go: Message struct gains AppId field, NotifyUser sends to all user clients
(unified clients receive everything, legacy clients are filtered by appId; see the sketch below)
- main.go: new GET /ws route (empty appId = unified mode)
Frontend (sync.ts):
- Single connectUnifiedWs() replaces 27 per-app connectWs() calls
- Parses msg.appId from server to pull only the affected app
- Reconnect/offline logic simplified to one WS
This reduces WebSocket connections from 27 per user to 1, cutting server
connection overhead by ~96%.
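The fan-out rule, sketched with assumed field names:

```go
// Message now carries the app it refers to, so one unified socket can
// route pulls that previously needed 27 connections.
type Message struct {
	Type  string `json:"type"` // e.g. "sync-available"
	AppId string `json:"appId"`
}

// Delivery rule inside NotifyUser (sketch): a legacy /ws/{appId} client
// only receives its own app; a unified /ws client (empty appID) gets all.
func wants(clientAppID, msgAppID string) bool {
	return clientAppID == "" || clientAppID == msgAppID
}
```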
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sync integration:
- Redirect service reads links from mana-sync's sync_changes table
- Analytics service queries clicks from sync_changes
- Click tracking writes to sync_changes (visible to all clients)
- Public profile reads from sync_changes
- Server DB points to mana_sync database (not separate uload DB)
- Removed uload-database dependency from server
Stripe:
- Real Stripe checkout session creation (monthly/yearly)
- Webhook handler with signature verification
- Webhook route bypasses JWT auth
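A sketch of the verification path using stripe-go's webhook helper; the secret's env var name and the event handling are assumptions:

```go
import (
	"io"
	"net/http"
	"os"

	"github.com/stripe/stripe-go/v76/webhook"
)

var endpointSecret = os.Getenv("STRIPE_WEBHOOK_SECRET") // name assumed

// Registered without JWT middleware: Stripe authenticates itself via
// the signature header, not via our auth tokens.
func handleStripeWebhook(w http.ResponseWriter, r *http.Request) {
	payload, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "read error", http.StatusBadRequest)
		return
	}
	event, err := webhook.ConstructEvent(payload, r.Header.Get("Stripe-Signature"), endpointSecret)
	if err != nil {
		// Bad or missing signature: reject, never process.
		http.Error(w, "invalid signature", http.StatusBadRequest)
		return
	}
	switch event.Type {
	case "checkout.session.completed":
		// grant the subscription (elided)
	}
	w.WriteHeader(http.StatusOK)
}
```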
Documentation:
- Root CLAUDE.md: added uload to project table, dev commands, local-first list
- mana-sync CLAUDE.md: added uLoad, Taktik, Calc to connected apps
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bash-based integration test that verifies the full sync cycle:
1. Health check
2. Client A pushes insert
3. Client B pulls and sees the change
4. Client B pushes update (field-level)
5. Client A pulls and sees the update
6. Client A pushes delete
7. Unauthorized request rejected (401)
Requires running mana-sync + mana-auth. Run with:
./services/mana-sync/test/e2e-sync-flow.sh [TOKEN]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Implement the foundational local-first data layer for ManaCore apps:
- New @manacore/local-store package (Dexie.js IndexedDB, sync engine, Svelte 5 reactive queries)
- New mana-sync Go service (sync protocol, WebSocket push, field-level LWW conflict resolution; see the sketch below)
- Todo app migrated as pilot: stores read/write IndexedDB, guest mode with onboarding seed data
- PillNavigation: prominent login pill for unauthenticated users
- SyncIndicator component showing local/syncing/offline status
- GuestWelcomeModal on first visit for Todo app
- Removed demo-mode auth_required checks from Todo components (all writes are now local)
- CSP fix for local development (localhost:3001, localhost:3050)
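A sketch of field-level LWW as described above; document and timestamp shapes are illustrative:

```go
// mergeFields applies an incoming change field by field: each field wins
// independently when its timestamp is strictly newer than the local one.
// Last-writer-wins per field, not per document. Ties keep the local
// value here; a real implementation needs a deterministic tie-break.
func mergeFields(doc, incoming map[string]any, docTS, incomingTS map[string]int64) {
	for field, value := range incoming {
		if incomingTS[field] > docTS[field] { // a missing key reads as 0
			doc[field] = value
			docTS[field] = incomingTS[field]
		}
	}
}
```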
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>