Supabase Exit Strategy — Migration Patterns for Standalone PostgreSQL + Cloudflare R2
Date: 2026-03-19
Author: Deep Research Agent
Tags: supabase, postgresql, cloudflare-r2, migration, infrastructure, better-auth
Products: All Moklabs projects (OctantOS, Remindr, Narrativ, Argus)
Priority: High — unblocks blocked infra issues (standalone PG deploy, R2 buckets, auth migration)
Executive Summary
Moklabs currently uses Supabase for database, auth, and storage across multiple projects. This report provides a complete exit strategy for migrating to standalone PostgreSQL (on Hostinger VPS via Coolify), Cloudflare R2 for object storage, and Better Auth for authentication. The migration is low-risk because Supabase uses standard PostgreSQL — no proprietary lock-in. Estimated monthly savings: $98-198/month depending on current Supabase tier (see Section 6).
1. Current State Assessment
What Supabase Provides
| Service | Replacement | Difficulty |
|---|---|---|
| PostgreSQL database | Standalone PG on Hostinger | Low (standard PG) |
| Auth (GoTrue) | Better Auth (already decided) | Medium |
| Storage (S3-compat) | Cloudflare R2 | Low (S3 API) |
| Realtime (WebSocket) | Not used heavily | N/A |
| Edge Functions | Not used | N/A |
| PostgREST API | Replaced by custom APIs (Fastify/Hono) | Already done |
| Connection pooling (Supavisor) | PgBouncer or direct connections | Low |
Risk Assessment
- Database: Zero proprietary lock-in. Supabase = standard PostgreSQL; `pg_dump` exports everything.
- Auth: Better Auth has an official Supabase migration guide. bcrypt hashes transfer without password resets.
- Storage: R2 is S3-compatible. Migration is a bucket copy + URL update.
2. Database Migration
2.1 Export from Supabase
```bash
# Export roles (required first)
supabase db dump --db-url "$SUPABASE_DB_URL" -f roles.sql --role-only

# Export schema (without data)
supabase db dump --db-url "$SUPABASE_DB_URL" -f schema.sql

# Export data only
supabase db dump --db-url "$SUPABASE_DB_URL" -f data.sql --data-only

# Alternative: standard pg_dump (works because it's standard PG)
pg_dump -h db.PROJECT_ID.supabase.co -U postgres -d postgres \
  --no-owner --no-acl -F c -f backup.dump
```
2.2 Target: Standalone PostgreSQL on Hostinger
Deployment via Coolify:
Coolify supports PostgreSQL as a first-class database service with:
- UI-based management and monitoring
- Built-in backup scheduling
- S3-compatible backup destinations (can back up to R2)
- Connection string generation
- Automatic restarts
Setup:
- Create PostgreSQL service in Coolify dashboard
- Configure backup schedule (daily to R2)
- Set up connection string for applications
Recommended config:
- PostgreSQL 16+ (latest stable)
- `max_connections`: 100 (sufficient for Moklabs workload)
- `shared_buffers`: 25% of RAM
- `work_mem`: 64MB
- SSL enabled with Let’s Encrypt via Coolify
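As a sketch, the recommended settings map onto a `postgresql.conf` fragment like the one below. The concrete values assume a 4 GB VPS and are illustrative starting points, not measured tuning:

```ini
# postgresql.conf overrides (illustrative; assumes a 4 GB VPS)
max_connections = 100
shared_buffers = 1GB       # ~25% of RAM
work_mem = 64MB
ssl = on                   # cert paths are managed by Coolify / Let's Encrypt
```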
2.3 Import to Standalone PG
```bash
# Restore roles
psql -h $NEW_PG_HOST -U postgres -f roles.sql

# Restore schema
psql -h $NEW_PG_HOST -U postgres -f schema.sql

# Restore data
psql -h $NEW_PG_HOST -U postgres -f data.sql

# Or from custom format dump:
pg_restore -h $NEW_PG_HOST -U postgres -d postgres \
  --no-owner --no-acl backup.dump
```
2.4 Drizzle ORM Considerations
Connection string change: Update DATABASE_URL from Supabase pooler URL to direct PG URL.
Key difference: When connecting through Supabase's pooler in transaction mode (Supavisor, previously PgBouncer), Drizzle's postgres.js driver requires `prepare: false`, because transaction-mode pooling does not support prepared statements:

```typescript
// Supabase with transaction-mode pooler (OLD)
const client = postgres(process.env.DATABASE_URL, { prepare: false });

// Standalone PG (NEW) — no need for prepare: false
const client = postgres(process.env.DATABASE_URL);
```
Migrations: No changes needed. `drizzle-kit generate` and `drizzle-kit migrate` work identically against any PostgreSQL instance.
Schema: Clean up Supabase-specific schemas (`auth`, `storage`, `realtime`, `supabase_functions`) after migration. Keep only `public` and custom schemas.
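A sketch of that cleanup step. Run it only after the import is verified, since `CASCADE` drops dependent objects as well:

```sql
-- Drop Supabase-managed schemas after migration; keep public and custom schemas.
DROP SCHEMA IF EXISTS auth CASCADE;
DROP SCHEMA IF EXISTS storage CASCADE;
DROP SCHEMA IF EXISTS realtime CASCADE;
DROP SCHEMA IF EXISTS supabase_functions CASCADE;
```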
2.5 Connection Pooling
For standalone PG, consider PgBouncer if connection count becomes an issue:
| Scenario | Solution |
|---|---|
| < 50 concurrent connections | Direct PG connections (simplest) |
| 50-200 concurrent connections | PgBouncer in transaction mode |
| > 200 concurrent connections | PgBouncer + read replicas |
Moklabs current scale: Direct connections are sufficient. Add PgBouncer later via Coolify when needed.
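If pooling does become necessary, a minimal PgBouncer config matching the transaction-mode rows above would look roughly like this (database name, ports, and pool sizes are illustrative starting points):

```ini
; pgbouncer.ini sketch — only needed once connection counts grow
[databases]
app = host=127.0.0.1 port=5432 dbname=postgres

[pgbouncer]
listen_port = 6432
pool_mode = transaction      ; matches the 50-200 connection tier above
max_client_conn = 200
default_pool_size = 20
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```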
3. Auth Migration (Better Auth)
3.1 Migration Strategy
Better Auth provides an official Supabase migration guide.
Key steps:
- Export user data from the Supabase `auth.users` table
- Import into Better Auth tables (can be same or different database)
- Map bcrypt password hashes (transfer directly — no resets needed)
- Update application auth calls from `@supabase/auth-helpers` to the `better-auth` client
3.2 Important Caveats
- All active sessions will be invalidated — users must re-login once
- 2FA data: Not covered by official migration guide, requires manual mapping
- RLS policies: Must be recreated as application-level authorization (Better Auth doesn’t use PG RLS)
- OAuth providers: Re-register OAuth apps (Google, GitHub, etc.) with new callback URLs
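To make the RLS point concrete: a common Supabase policy such as `USING (user_id = auth.uid())` becomes an explicit ownership check in the API layer. A minimal sketch, with illustrative names:

```python
def authorize_row_access(row: dict, session_user_id: str) -> bool:
    """App-level equivalent of the RLS policy USING (user_id = auth.uid())."""
    return row.get("user_id") == session_user_id


def fetch_note(note: dict, session_user_id: str) -> dict:
    """Illustrative API-layer guard: raise rather than silently filter,
    so missing checks surface as errors in testing."""
    if not authorize_row_access(note, session_user_id):
        raise PermissionError("forbidden")
    return note
```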
3.3 Password Hash Compatibility
Supabase uses bcrypt for password hashing. Better Auth supports bcrypt natively — password hashes can be imported “as-is” without requiring users to reset passwords.
```sql
-- Export password hashes from Supabase
SELECT id, email, encrypted_password, raw_user_meta_data
FROM auth.users
WHERE encrypted_password IS NOT NULL;
```
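Before importing, it is cheap to sanity-check that every exported hash really is standard bcrypt (`$2a$`/`$2b$` prefix, two-digit cost, 22-character salt, 31-character digest in bcrypt's base64 alphabet). A stdlib-only sketch:

```python
import re

# bcrypt modular crypt format: $2a$<cost>$<22-char salt><31-char digest>
BCRYPT_RE = re.compile(r"^\$2[aby]\$(\d{2})\$[./A-Za-z0-9]{53}$")


def is_portable_bcrypt_hash(h: str) -> bool:
    """Cheap check to run over exported hashes before import; catches
    rows hashed with anything other than standard bcrypt."""
    m = BCRYPT_RE.match(h)
    return m is not None and 4 <= int(m.group(1)) <= 31
```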
3.4 Phased Approach
| Phase | Action | Risk |
|---|---|---|
| 1. Set up auth.moklabs.io | Deploy Better Auth server (Bun+Hono+Drizzle) | Low |
| 2. OIDC Provider | Enable OIDC provider plugin for SSO across apps | Medium |
| 3. User import | Run migration script (bcrypt hashes transfer) | Low |
| 4. App update | Switch client SDKs from Supabase to Better Auth | Medium |
| 5. Cutover | DNS switch, invalidate Supabase sessions | High (brief downtime) |
| 6. Cleanup | Remove Supabase auth deps, delete Supabase project auth | Low |
4. Storage Migration (Cloudflare R2)
4.1 R2 Pricing
| Resource | Cost | Notes |
|---|---|---|
| Storage | $0.015/GB/month | 10 GB free |
| Class A ops (write) | $4.50/million | 1M free/month |
| Class B ops (read) | $0.36/million | 10M free/month |
| Egress | $0 | Zero egress fees — key advantage |
Moklabs estimate: at under 50 GB of storage and moderate operations, R2 will cost under $2/month. For comparison, Supabase Pro charges $25/month with 100 GB of storage included.
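The estimate can be sanity-checked against the pricing table above. The usage numbers in the example are assumptions, not measurements:

```python
def r2_monthly_cost(storage_gb: float, class_a_millions: float,
                    class_b_millions: float) -> float:
    """Apply R2's free tiers, then the per-unit prices from the table."""
    storage = max(0.0, storage_gb - 10) * 0.015        # 10 GB free
    class_a = max(0.0, class_a_millions - 1) * 4.50    # 1M writes free/month
    class_b = max(0.0, class_b_millions - 10) * 0.36   # 10M reads free/month
    return round(storage + class_a + class_b, 2)


# 50 GB stored, 0.5M writes, 5M reads: only 40 GB of storage is billed
print(r2_monthly_cost(50, 0.5, 5))  # → 0.6
```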
4.2 Migration Process
Option A: Automated (recommended for large buckets)
Use Cloudflare’s built-in data migration service:
- Go to R2 > Settings > Data Migration
- Provide source S3 credentials (Supabase storage uses S3-compatible API)
- Cloudflare copies objects automatically
Option B: Manual (for small buckets or selective migration)
```bash
# Install rclone
brew install rclone

# Configure Supabase source
rclone config create supabase s3 \
  provider=Other \
  env_auth=false \
  access_key_id=$SUPABASE_S3_KEY \
  secret_access_key=$SUPABASE_S3_SECRET \
  endpoint=$SUPABASE_STORAGE_URL

# Configure R2 destination
rclone config create r2 s3 \
  provider=Cloudflare \
  env_auth=false \
  access_key_id=$R2_ACCESS_KEY \
  secret_access_key=$R2_SECRET_KEY \
  endpoint=https://$CF_ACCOUNT_ID.r2.cloudflarestorage.com

# Sync all buckets
rclone sync supabase:bucket-name r2:bucket-name --progress
```
4.3 Application Changes
Since R2 is S3-compatible, the application code change is minimal:
```typescript
// OLD: Supabase Storage client
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(url, key);
const { data } = await supabase.storage.from('bucket').upload('path', file);

// NEW: S3-compatible client (aws-sdk or @aws-sdk/client-s3)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: R2_ACCESS_KEY,
    secretAccessKey: R2_SECRET_KEY,
  },
});

await s3.send(new PutObjectCommand({
  Bucket: 'bucket-name',
  Key: 'path',
  Body: file,
}));
```
4.4 Public Access via Custom Domain
R2 bucket → Custom domain (cdn.moklabs.io) → Cloudflare CDN
Configure in Cloudflare Dashboard: R2 > Bucket > Settings > Public Access > Custom Domain.
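Once the custom domain is live, the Supabase public URLs stored in database records can be rewritten in bulk. A sketch of the transform; it assumes one bucket per custom domain (so the bucket segment is dropped) and the `cdn.moklabs.io` domain from above, while the Supabase side follows the standard `/storage/v1/object/public/<bucket>/<path>` layout:

```python
import re

# Matches Supabase public object URLs:
#   https://<project>.supabase.co/storage/v1/object/public/<bucket>/<path>
SUPABASE_URL_RE = re.compile(
    r"https://[a-z0-9]+\.supabase\.co/storage/v1/object/public/"
    r"(?P<bucket>[^/]+)/(?P<path>\S+)"
)


def rewrite_storage_url(url: str, cdn_base: str = "https://cdn.moklabs.io") -> str:
    """Rewrite a Supabase public URL to the R2 custom domain; leave
    anything that is not a Supabase storage URL untouched."""
    m = SUPABASE_URL_RE.match(url)
    if not m:
        return url
    return f"{cdn_base}/{m.group('path')}"
```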
5. Migration Checklist
Pre-Migration
- Inventory all Supabase projects and their usage (DB size, storage, auth users)
- Set up standalone PostgreSQL on Hostinger via Coolify
- Create R2 buckets (one per project or shared)
- Deploy auth.moklabs.io (Better Auth server)
- Generate Tauri update signing keys
- Set up automated PG backups to R2
Database Migration (per project)
- `supabase db dump` — export roles, schema, data
- Restore to standalone PG
- Update `DATABASE_URL` in project config
- Remove `prepare: false` from Drizzle config
- Run smoke tests against new DB
- Clean up Supabase-specific schemas
Auth Migration
- Export user data from `auth.users`
- Import bcrypt hashes to Better Auth
- Re-register OAuth providers with new callback URLs
- Update client SDKs in each app
- Test login flow end-to-end
- Cutover (invalidate old sessions)
Storage Migration (per bucket)
- Sync files from Supabase Storage to R2 (rclone or Cloudflare migration)
- Update file URLs in database records
- Switch application code to S3 client
- Configure public access via custom domain
- Verify all files accessible
Post-Migration
- Monitor standalone PG performance for 1 week
- Verify backup schedules running
- Delete Supabase projects (after 30-day verification period)
- Update DNS records if needed
6. Cost Comparison
Current (Supabase)
| Tier | Cost | Included |
|---|---|---|
| Free | $0 | 500 MB DB, 1 GB storage, 50K auth users |
| Pro | $25/month/project | 8 GB DB, 100 GB storage, 100K auth users |
| Team | $599/month | SOC2, priority support |
With 4-8 projects, Supabase Pro costs $100-200/month.
Target (Self-Hosted)
| Service | Cost | Notes |
|---|---|---|
| Hostinger VPS | Already paid | Shared with other services |
| PostgreSQL | $0 | Runs on existing VPS via Coolify |
| Cloudflare R2 | ~$2/month | < 50 GB, generous free tier |
| Better Auth | $0 | Self-hosted, open-source |
| Coolify | $0 | Self-hosted, open-source |
| Total | ~$2/month | |
Savings: $98-198/month ($1,176-2,376/year).
7. Risk Mitigation
| Risk | Mitigation |
|---|---|
| Data loss during migration | Full pg_dump backup before starting; keep Supabase running 30 days after |
| Auth session invalidation | Schedule cutover during low-traffic window; notify users in advance |
| Storage URL breakage | Run migration script to update all file URLs in DB after storage move |
| Performance regression | Monitor query latency for 1 week; PG config tuning checklist |
| Connection limits | Start with direct connections; add PgBouncer if needed |
| Backup failure | Daily automated backups to R2; test restore procedure monthly |
8. Recommended Migration Order
Based on project dependencies and launch priority:
- Jarvis (auth server first) — Deploy Better Auth at auth.moklabs.io
- OctantOS (most active development) — Migrate DB and storage
- Argus (GTM #1) — Migrate before launch
- Remindr — Migrate DB, storage minimal
- Narrativ — Lower priority, migrate last
Each migration should be atomic per project: DB → Auth → Storage → Verify → Next.
Sources
- Supabase: Transferring from Cloud to Self-Host
- Better Auth: Supabase Migration Guide
- Better Auth: Supabase Auth to PlanetScale Migration
- Migrating from Supabase Storage to Cloudflare R2
- Saved $300/Year by Migrating to R2
- Cloudflare R2 Pricing
- R2 Pricing Calculator
- Supabase vs R2 Comparison 2026
- Drizzle ORM with Supabase
- Coolify Cloud vs Self-Hosted
- Managed PostgreSQL vs Self-Hosted
- Supabase Database Migrations