
Supabase Exit Strategy — Migration Patterns for Standalone PostgreSQL + Cloudflare R2

Date: 2026-03-19
Author: Deep Research Agent
Tags: supabase, postgresql, cloudflare-r2, migration, infrastructure, better-auth
Products: All Moklabs projects (OctantOS, Remindr, Narrativ, Argus)
Priority: High — unblocks blocked infra issues (standalone PG deploy, R2 buckets, auth migration)


Executive Summary

Moklabs currently uses Supabase for database, auth, and storage across multiple projects. This report provides a complete exit strategy for migrating to standalone PostgreSQL (on Hostinger VPS via Coolify), Cloudflare R2 for object storage, and Better Auth for authentication. The migration is low-risk because Supabase uses standard PostgreSQL — no proprietary lock-in. Estimated monthly savings: $50-150/month depending on current Supabase tier.


1. Current State Assessment

What Supabase Provides

| Service | Replacement | Difficulty |
|---|---|---|
| PostgreSQL database | Standalone PG on Hostinger | Low (standard PG) |
| Auth (GoTrue) | Better Auth (already decided) | Medium |
| Storage (S3-compat) | Cloudflare R2 | Low (S3 API) |
| Realtime (WebSocket) | Not used heavily | N/A |
| Edge Functions | Not used | N/A |
| PostgREST API | Replaced by custom APIs (Fastify/Hono) | Already done |
| Connection pooling (Supavisor) | PgBouncer or direct connections | Low |

Risk Assessment

  • Database: Zero proprietary lock-in. Supabase = standard PostgreSQL. pg_dump exports everything.
  • Auth: Better Auth has an official Supabase migration guide. bcrypt hashes transfer without password resets.
  • Storage: R2 is S3-compatible. Migration is a bucket copy + URL update.

2. Database Migration

2.1 Export from Supabase

# Export roles (required first)
supabase db dump --db-url "$SUPABASE_DB_URL" -f roles.sql --role-only

# Export schema (without data)
supabase db dump --db-url "$SUPABASE_DB_URL" -f schema.sql

# Export data only
supabase db dump --db-url "$SUPABASE_DB_URL" -f data.sql --data-only

# Alternative: standard pg_dump (works because it's standard PG)
pg_dump -h db.PROJECT_ID.supabase.co -U postgres -d postgres \
  --no-owner --no-acl -F c -f backup.dump

2.2 Target: Standalone PostgreSQL on Hostinger

Deployment via Coolify:

Coolify supports PostgreSQL as a first-class database service with:

  • UI-based management and monitoring
  • Built-in backup scheduling
  • S3-compatible backup destinations (can back up to R2)
  • Connection string generation
  • Automatic restarts

Setup:

  1. Create PostgreSQL service in Coolify dashboard
  2. Configure backup schedule (daily to R2)
  3. Set up connection string for applications

Recommended config:

  • PostgreSQL 16+ (latest stable)
  • max_connections: 100 (sufficient for Moklabs workload)
  • shared_buffers: 25% of RAM
  • work_mem: 64MB
  • SSL enabled with Let’s Encrypt via Coolify
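
As a sketch, the bullets above translate into a postgresql.conf fragment like the following. The absolute shared_buffers value assumes an 8 GB VPS; scale it to the actual RAM on the Hostinger host:

```ini
# postgresql.conf — illustrative values, assuming an 8 GB VPS (tune to your host)
max_connections = 100
shared_buffers = 2GB     # ~25% of RAM
work_mem = 64MB
ssl = on                 # cert/key paths are managed by Coolify / Let's Encrypt
```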

2.3 Import to Standalone PG

# Restore roles
psql -h $NEW_PG_HOST -U postgres -f roles.sql

# Restore schema
psql -h $NEW_PG_HOST -U postgres -f schema.sql

# Restore data
psql -h $NEW_PG_HOST -U postgres -f data.sql

# Or from custom format dump:
pg_restore -h $NEW_PG_HOST -U postgres -d postgres \
  --no-owner --no-acl backup.dump

2.4 Drizzle ORM Considerations

Connection string change: Update DATABASE_URL from Supabase pooler URL to direct PG URL.

Key difference: when connecting through Supabase's pooler (Supavisor in transaction mode), Drizzle requires prepare: false, because transaction-mode pooling cannot support prepared statements:

import postgres from 'postgres';

// Supabase with pooler (OLD): prepared statements must be disabled
const client = postgres(process.env.DATABASE_URL, { prepare: false });

// Standalone PG (NEW): direct connection, prepared statements work
const client = postgres(process.env.DATABASE_URL);

Migrations: No changes needed. drizzle-kit generate and drizzle-kit migrate work identically against any PostgreSQL instance.

Schema: Clean up Supabase-specific schemas (auth, storage, realtime, supabase_functions) after migration. Keep only public and custom schemas.
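
A minimal sketch of that cleanup step: generating idempotent DROP statements for the leftover Supabase-managed schemas named above, to be run with psql against the new database once you have confirmed nothing in your application references them:

```typescript
// Leftover Supabase-managed schemas that serve no purpose on standalone PG.
const supabaseSchemas = ["auth", "storage", "realtime", "supabase_functions"];

// One idempotent DROP per schema; CASCADE also removes dependent objects,
// so review the list before running this against production data.
function cleanupStatements(schemas: string[]): string[] {
  return schemas.map((s) => `DROP SCHEMA IF EXISTS "${s}" CASCADE;`);
}

console.log(cleanupStatements(supabaseSchemas).join("\n"));
// First line printed: DROP SCHEMA IF EXISTS "auth" CASCADE;
```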

2.5 Connection Pooling

For standalone PG, consider PgBouncer if connection count becomes an issue:

| Scenario | Solution |
|---|---|
| < 50 concurrent connections | Direct PG connections (simplest) |
| 50-200 concurrent connections | PgBouncer in transaction mode |
| > 200 concurrent connections | PgBouncer + read replicas |

Moklabs current scale: Direct connections are sufficient. Add PgBouncer later via Coolify when needed.


3. Auth Migration (Better Auth)

3.1 Migration Strategy

Better Auth provides an official Supabase migration guide.

Key steps:

  1. Export user data from Supabase auth.users table
  2. Import into Better Auth tables (can be same or different database)
  3. Map bcrypt password hashes (transfer directly — no resets needed)
  4. Update application auth calls from @supabase/auth-helpers to better-auth client
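
A hedged sketch of steps 2-3: mapping one exported auth.users row onto the shape Better Auth's user and account tables broadly expect. The target field names here (emailVerified, providerId, and so on) are assumptions based on Better Auth's default schema; verify them against your generated Drizzle schema before running a real import:

```typescript
// Subset of a row exported from Supabase's auth.users table.
interface SupabaseUser {
  id: string;
  email: string;
  encrypted_password: string | null; // bcrypt hash, e.g. "$2a$10$..."
  email_confirmed_at: string | null;
  created_at: string;
}

// Target shapes, loosely following Better Auth's user/account tables (assumed).
interface BetterAuthUser {
  id: string;
  email: string;
  emailVerified: boolean;
  createdAt: Date;
}
interface BetterAuthAccount {
  userId: string;
  providerId: "credential";
  password: string; // bcrypt hash imported as-is
}

function mapUser(u: SupabaseUser): { user: BetterAuthUser; account: BetterAuthAccount | null } {
  return {
    user: {
      id: u.id, // keep Supabase UUIDs so foreign keys in app tables still resolve
      email: u.email,
      emailVerified: u.email_confirmed_at !== null,
      createdAt: new Date(u.created_at),
    },
    // Only users with a password get a credential account; OAuth-only users do not.
    account: u.encrypted_password
      ? { userId: u.id, providerId: "credential", password: u.encrypted_password }
      : null,
  };
}
```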

3.2 Important Caveats

  • All active sessions will be invalidated — users must re-login once
  • 2FA data: Not covered by official migration guide, requires manual mapping
  • RLS policies: Must be recreated as application-level authorization (Better Auth doesn’t use PG RLS)
  • OAuth providers: Re-register OAuth apps (Google, GitHub, etc.) with new callback URLs

3.3 Password Hash Compatibility

Supabase uses bcrypt for password hashing. Better Auth supports bcrypt natively — password hashes can be imported “as-is” without requiring users to reset passwords.

-- Export password hashes from Supabase
SELECT id, email, encrypted_password, raw_user_meta_data
FROM auth.users
WHERE encrypted_password IS NOT NULL;
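
Before importing, it may be worth sanity-checking that every exported hash actually looks like bcrypt, so any stragglers in another format can be handled manually. An illustrative filter, assuming the standard $2a$/$2b$/$2y$ bcrypt prefixes:

```typescript
// bcrypt hashes start with $2a$, $2b$, or $2y$, followed by a two-digit cost.
const BCRYPT_RE = /^\$2[aby]\$\d{2}\$/;

function isBcryptHash(hash: string): boolean {
  return BCRYPT_RE.test(hash);
}

// Partition exported hashes into importable vs needs-manual-handling.
function partitionHashes(hashes: string[]): { ok: string[]; other: string[] } {
  return {
    ok: hashes.filter(isBcryptHash),
    other: hashes.filter((h) => !isBcryptHash(h)),
  };
}
```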

3.4 Phased Approach

| Phase | Action | Risk |
|---|---|---|
| 1. Set up auth.moklabs.io | Deploy Better Auth server (Bun+Hono+Drizzle) | Low |
| 2. OIDC provider | Enable OIDC provider plugin for SSO across apps | Medium |
| 3. User import | Run migration script (bcrypt hashes transfer) | Low |
| 4. App update | Switch client SDKs from Supabase to Better Auth | Medium |
| 5. Cutover | DNS switch, invalidate Supabase sessions | High (brief downtime) |
| 6. Cleanup | Remove Supabase auth deps, delete Supabase project auth | Low |

4. Storage Migration (Cloudflare R2)

4.1 R2 Pricing

| Resource | Cost | Notes |
|---|---|---|
| Storage | $0.015/GB/month | 10 GB free |
| Class A ops (write) | $4.50/million | 1M free/month |
| Class B ops (read) | $0.36/million | 10M free/month |
| Egress | $0 | Zero egress fees — key advantage |

Moklabs estimate: at < 50 GB of storage and moderate operation counts, R2 will cost < $2/month. For comparison, Supabase Pro is $25/month/project with 100 GB of storage included.
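
That estimate can be checked with a quick back-of-the-envelope calculation using the free tiers from the table above (an illustrative helper, not an official pricing API; pricing figures as quoted in this report):

```typescript
// Cloudflare R2 pricing inputs, from the table above.
const STORAGE_PER_GB = 0.015; // USD per GB-month, first 10 GB free
const CLASS_A_PER_M = 4.5;    // USD per million writes, first 1M free
const CLASS_B_PER_M = 0.36;   // USD per million reads, first 10M free

function r2MonthlyCost(storageGb: number, classAMillions: number, classBMillions: number): number {
  const storage = Math.max(0, storageGb - 10) * STORAGE_PER_GB;
  const writes = Math.max(0, classAMillions - 1) * CLASS_A_PER_M;
  const reads = Math.max(0, classBMillions - 10) * CLASS_B_PER_M;
  return storage + writes + reads;
}

// 50 GB stored, 0.5M writes, 5M reads: all operations inside the free tier,
// so only 40 billable GB remain, ~$0.60/month.
console.log(r2MonthlyCost(50, 0.5, 5));
```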

4.2 Migration Process

Option A: Automated (recommended for large buckets)

Use Cloudflare's built-in data migration service:

  1. Go to R2 > Settings > Data Migration
  2. Provide source S3 credentials (Supabase storage uses S3-compatible API)
  3. Cloudflare copies objects automatically

Option B: Manual (for small buckets or selective migration)

# Install rclone
brew install rclone

# Configure Supabase source
rclone config create supabase s3 \
  provider=Other \
  env_auth=false \
  access_key_id=$SUPABASE_S3_KEY \
  secret_access_key=$SUPABASE_S3_SECRET \
  endpoint=$SUPABASE_STORAGE_URL

# Configure R2 destination
rclone config create r2 s3 \
  provider=Cloudflare \
  env_auth=false \
  access_key_id=$R2_ACCESS_KEY \
  secret_access_key=$R2_SECRET_KEY \
  endpoint=https://$CF_ACCOUNT_ID.r2.cloudflarestorage.com

# Sync all buckets
rclone sync supabase:bucket-name r2:bucket-name --progress

4.3 Application Changes

Since R2 is S3-compatible, the application code change is minimal:

// OLD: Supabase Storage client
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(url, key);
const { data } = await supabase.storage.from('bucket').upload('path', file);

// NEW: S3-compatible client (aws-sdk or @aws-sdk/client-s3)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: R2_ACCESS_KEY,
    secretAccessKey: R2_SECRET_KEY,
  },
});
await s3.send(new PutObjectCommand({
  Bucket: 'bucket-name',
  Key: 'path',
  Body: file,
}));

4.4 Public Access via Custom Domain

R2 bucket → Custom domain (cdn.moklabs.io) → Cloudflare CDN

Configure in Cloudflare Dashboard: R2 > Bucket > Settings > Public Access > Custom Domain.
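
With the custom domain in place, the "update file URLs in database records" step from the checklist reduces to a string rewrite. A sketch assuming Supabase public URLs of the usual form https://PROJECT.supabase.co/storage/v1/object/public/<bucket>/<path>, and cdn.moklabs.io serving the target bucket at its root:

```typescript
// Rewrite a Supabase Storage public URL to the new R2 custom-domain URL.
// Assumes cdn.moklabs.io maps to a single bucket, so the bucket segment is
// dropped; keep it if one domain fronts several buckets.
function rewriteStorageUrl(url: string, cdnHost = "cdn.moklabs.io"): string {
  const marker = "/storage/v1/object/public/";
  const idx = url.indexOf(marker);
  if (idx === -1) return url; // not a Supabase public URL, leave untouched
  const bucketAndPath = url.slice(idx + marker.length);
  const path = bucketAndPath.slice(bucketAndPath.indexOf("/") + 1);
  return `https://${cdnHost}/${path}`;
}

console.log(
  rewriteStorageUrl("https://abc123.supabase.co/storage/v1/object/public/avatars/u1.png"),
); // → https://cdn.moklabs.io/u1.png
```

Running this as a one-off script over the relevant DB columns (inside a transaction, after the rclone sync has been verified) covers the URL-breakage risk called out in section 7.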


5. Migration Checklist

Pre-Migration

  • Inventory all Supabase projects and their usage (DB size, storage, auth users)
  • Set up standalone PostgreSQL on Hostinger via Coolify
  • Create R2 buckets (one per project or shared)
  • Deploy auth.moklabs.io (Better Auth server)
  • Generate Tauri update signing keys
  • Set up automated PG backups to R2

Database Migration (per project)

  • supabase db dump — export roles, schema, data
  • Restore to standalone PG
  • Update DATABASE_URL in project config
  • Remove prepare: false from Drizzle config
  • Run smoke tests against new DB
  • Clean up Supabase-specific schemas

Auth Migration

  • Export user data from auth.users
  • Import bcrypt hashes to Better Auth
  • Re-register OAuth providers with new callback URLs
  • Update client SDKs in each app
  • Test login flow end-to-end
  • Cutover (invalidate old sessions)

Storage Migration (per bucket)

  • Sync files from Supabase Storage to R2 (rclone or Cloudflare migration)
  • Update file URLs in database records
  • Switch application code to S3 client
  • Configure public access via custom domain
  • Verify all files accessible

Post-Migration

  • Monitor standalone PG performance for 1 week
  • Verify backup schedules running
  • Delete Supabase projects (after 30-day verification period)
  • Update DNS records if needed

6. Cost Comparison

Current (Supabase)

| Tier | Cost | Included |
|---|---|---|
| Free | $0 | 500 MB DB, 1 GB storage, 50K auth users |
| Pro | $25/month/project | 8 GB DB, 100 GB storage, 100K auth users |
| Team | $599/month | SOC2, priority support |

With 4-8 projects, Supabase Pro costs $100-200/month.

Target (Self-Hosted)

| Service | Cost | Notes |
|---|---|---|
| Hostinger VPS | Already paid | Shared with other services |
| PostgreSQL | $0 | Runs on existing VPS via Coolify |
| Cloudflare R2 | ~$2/month | < 50 GB, generous free tier |
| Better Auth | $0 | Self-hosted, open-source |
| Coolify | $0 | Self-hosted, open-source |
| Total | ~$2/month | |

Savings: $98-198/month ($1,176-2,376/year).


7. Risk Mitigation

| Risk | Mitigation |
|---|---|
| Data loss during migration | Full pg_dump backup before starting; keep Supabase running 30 days after |
| Auth session invalidation | Schedule cutover during low-traffic window; notify users in advance |
| Storage URL breakage | Run migration script to update all file URLs in DB after storage move |
| Performance regression | Monitor query latency for 1 week; PG config tuning checklist |
| Connection limits | Start with direct connections; add PgBouncer if needed |
| Backup failure | Daily automated backups to R2; test restore procedure monthly |

8. Migration Order

Based on project dependencies and launch priority:

  1. Jarvis (auth server first) — Deploy Better Auth at auth.moklabs.io
  2. OctantOS (most active development) — Migrate DB and storage
  3. Argus (GTM #1) — Migrate before launch
  4. Remindr — Migrate DB, storage minimal
  5. Narrativ — Lower priority, migrate last

Each migration should be atomic per project: DB → Auth → Storage → Verify → Next.

