How to Migrate Between SaaS Boilerplates in 2026
TL;DR
Most SaaS migrations aren't full rewrites — they're selective transplants. The core application logic (your features, your business rules, your data model) stays. What changes is the scaffolding: auth, billing wiring, routing patterns, component library. The key insight: migrate in layers, not all at once. Start with the new boilerplate running in production with zero app features, then transplant your features one by one while keeping the old version live. This is the strangler fig pattern applied to boilerplate migration.
Key Takeaways
- Most common migration triggers: an abandoned boilerplate, the need for multi-tenancy, or a required framework upgrade
- Strangler fig pattern: run new and old side by side, migrate feature by feature
- What to keep: your database schema (port to new ORM), your business logic, your components
- What to replace: auth wiring, billing integration, routing patterns
- Hardest migrations: any auth provider change (password hashes usually can't be exported)
- Timeline estimate: 2-4 weeks for a simple SaaS, 2-3 months for complex multi-tenant products
Why Teams Migrate Boilerplates
The most common migration triggers:
1. Boilerplate abandoned by creator
→ Author stopped maintaining; stuck on old Next.js version
2. Outgrew the original choice
→ Started with ShipFast (B2C), now need multi-tenancy (B2B)
→ Started with free boilerplate, need features only paid versions have
3. Framework upgrade required
→ Pages Router → App Router migration
→ Next.js 13 → 15 breaking changes
4. Bad architectural choice upstream
→ Chose MongoDB, now need SQL for complex queries
→ Auth provider deprecated
5. Security vulnerability in boilerplate dependency
→ Can't patch without pulling in incompatible changes
The Migration Framework
Phase 1: Audit What You Have
Before starting, inventory exactly what you've built on top of the boilerplate:
Custom code audit checklist:
[ ] List all pages you've added (not in boilerplate)
[ ] List all API routes you've added or modified
[ ] List all database models you've added
[ ] List all UI components you've built
[ ] List all third-party integrations you've added
[ ] Note any boilerplate files you've significantly modified
[ ] List all environment variables and what they control
# Quick inventory of custom files vs boilerplate files:
# Compare your repo to the original boilerplate commit:
git diff <original-boilerplate-commit> HEAD --name-only
# Files modified from original (your custom work):
# app/dashboard/analytics/page.tsx ← custom page
# app/api/reports/route.ts ← custom API
# components/charts/RevenueChart.tsx ← custom component
# Files untouched (pure boilerplate):
# app/api/auth/[...nextauth]/route.ts ← boilerplate auth
# app/pricing/page.tsx ← boilerplate pricing
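A quick triage of that diff output can be scripted; here is a minimal sketch (the path prefixes are assumptions, adjust them to your repo) that partitions the output of `git diff --name-only` into likely-custom and likely-boilerplate buckets:

```typescript
// Partition changed file paths into "likely custom" vs "likely boilerplate".
// The prefix list is an assumption -- adapt it to your starter's layout.
const BOILERPLATE_PREFIXES = [
  'app/api/auth/',  // auth wiring shipped with the starter
  'app/pricing/',   // stock pricing page
  'components/ui/', // stock component library
];

export function partitionDiff(paths: string[]): {
  custom: string[];
  boilerplate: string[];
} {
  const custom: string[] = [];
  const boilerplate: string[] = [];
  for (const p of paths) {
    if (BOILERPLATE_PREFIXES.some((prefix) => p.startsWith(prefix))) {
      boilerplate.push(p);
    } else {
      custom.push(p);
    }
  }
  return { custom, boilerplate };
}
```

Pipe the `git diff` output into this and you have a first draft of the audit checklist above.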
Phase 2: Set Up the New Boilerplate in Production First
Critical mistake: migrating locally and then trying to deploy. Do it in reverse:
# 1. Clone the new boilerplate:
npx create-t3-app@latest my-saas-v2
# or
git clone https://github.com/new-boilerplate/template my-saas-v2
# 2. Configure the new boilerplate (auth, billing, DB connection)
# 3. Deploy to Vercel at a new subdomain: v2.yourdomain.com
# 4. Run in parallel with production while you migrate features
The parallel deployment approach:
yourdomain.com → Old boilerplate (live, taking users)
v2.yourdomain.com → New boilerplate (your team developing here)
Over 4 weeks:
Week 1: Port auth layer
Week 2: Port billing
Week 3: Port core features
Week 4: Beta test on v2.yourdomain.com → migrate users → cut over
Phase 3: Migrate in This Order
The order matters. Get the foundation right before adding features:
1. Database / Data Model ← Port your schema first
2. Auth ← Validate session pattern
3. Billing ← Money must work
4. Core features ← Your actual product
5. Admin / supporting pages ← Last, least risky
Layer-by-Layer Migration Guide
Database Migration
Most migrations keep the same database — you're just changing the ORM:
// From Prisma to Drizzle (keeping Postgres):
// OLD (Prisma):
// prisma/schema.prisma
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  plan      String   @default("free")
  createdAt DateTime @default(now())
}
// NEW (Drizzle) — same database, same tables:
// db/schema.ts
import { pgTable, varchar, text, timestamp } from 'drizzle-orm/pg-core';
export const users = pgTable('users', {
  // Note: new rows get random UUIDs; existing cuid IDs are left untouched
  id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
  email: varchar('email', { length: 255 }).unique().notNull(),
  name: varchar('name', { length: 255 }),
  plan: varchar('plan', { length: 50 }).default('free').notNull(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});
// Drizzle can introspect your existing Postgres schema:
// npx drizzle-kit introspect  (introspect:pg on older drizzle-kit versions)
// → Generates a Drizzle schema from your existing tables
// No data migration needed — just ORM layer changes
If you're also migrating databases (MongoDB → Postgres):
// Export from MongoDB:
const users = await mongoDb.collection('users').find({}).toArray();
const subscriptions = await mongoDb.collection('subscriptions').find({}).toArray();
// Transform to SQL-friendly format:
const sqlUsers = users.map((u) => ({
  id: u._id.toString(),
  email: u.email,
  name: u.name,
  plan: u.plan ?? 'free',
  createdAt: u.createdAt ?? new Date(),
}));
// Insert into Postgres:
await db.insert(pgUsers).values(sqlUsers).onConflictDoNothing();
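Before cutting over, verify the copy actually landed; a small sketch of a table-by-table parity check (the counts would come from a `SELECT count(*)` per table; the check itself is kept pure here so it is easy to test):

```typescript
// Compare row counts between source and target after a data copy.
// Returns a list of human-readable mismatches; empty means parity.
export function verifyCounts(
  source: Record<string, number>,
  target: Record<string, number>,
): string[] {
  const mismatches: string[] = [];
  for (const [table, expected] of Object.entries(source)) {
    const actual = target[table] ?? 0;
    if (actual !== expected) {
      mismatches.push(`${table}: expected ${expected}, got ${actual}`);
    }
  }
  return mismatches;
}
```

Row counts are only a first check; spot-check a sample of rows for field-level fidelity as well.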
Auth Migration
Easiest case: swapping the session integration (users simply re-authenticate once)
// Migrating from NextAuth to Clerk (same users, new integration):
// NextAuth sessions → Clerk sessions
// During migration, support BOTH:
// middleware.ts
import { clerkMiddleware } from '@clerk/nextjs/server';

export default clerkMiddleware(async (auth, req) => {
  // Check for a Clerk session:
  const { userId } = await auth();
  if (userId) return; // Clerk user, allow through

  // Fallback: user may still have an old NextAuth session.
  // Redirect them to re-authenticate with Clerk:
  const url = new URL('/re-auth', req.url);
  url.searchParams.set('redirect', req.nextUrl.pathname);
  return Response.redirect(url);
});
Hardest case: moving off a hosted provider like Clerk or Auth0 (passwords involved)
Password hashes are stored inside the provider's infrastructure, and hosted providers generally won't export them. Your options:
Option A: Force re-authentication
→ Email all users: "We're upgrading our platform, please log in"
→ New login with Google/GitHub (most users use OAuth anyway)
→ Reset password flow for email/password users
Option B: Gradual migration
→ Keep old auth for existing users during transition period
→ New users go to new auth
→ Set a 3-month deadline, after which everyone must re-authenticate
Option C: Use WorkOS as a bridge
→ WorkOS can handle SSO from multiple identity providers
→ Enterprise users migrate without disruption
Billing Migration
This is the most dangerous layer — money must not break.
// Migrating Stripe integration (ShipFast → Supastarter):
// The key: Stripe customer IDs and subscription IDs stay the same
// Step 1: Copy your Stripe metadata from old DB to new DB:
const oldSubscriptions = await oldDb.subscription.findMany();
for (const sub of oldSubscriptions) {
  await newDb.subscription.upsert({
    where: { stripeSubscriptionId: sub.stripeSubscriptionId },
    create: {
      userId: sub.userId, // need to map to the new user ID format
      stripeCustomerId: sub.stripeCustomerId,
      stripeSubscriptionId: sub.stripeSubscriptionId,
      stripePriceId: sub.stripePriceId,
      status: sub.status,
      currentPeriodEnd: sub.currentPeriodEnd,
    },
    update: {},
  });
}
// Step 2: Point Stripe webhook to new endpoint:
// In Stripe Dashboard → Developers → Webhooks
// Update endpoint URL from old app to new app
// Add the same events: subscription.*, customer.*, invoice.*
// Step 3: Test with Stripe CLI:
stripe listen --forward-to localhost:3001/api/webhooks/stripe
stripe trigger customer.subscription.updated
Feature Migration (The Core Work)
// Pattern: extract feature from old boilerplate, adapt to new patterns
// OLD (ShipFast, Pages Router):
// pages/api/reports/generate.ts
import { getServerSession } from 'next-auth/next';
import { authOptions } from '@/lib/auth'; // wherever your NextAuth options live

export default async function handler(req, res) {
  const session = await getServerSession(req, res, authOptions);
  if (!session) return res.status(401).json({ error: 'Unauthorized' });
  const report = await generateReport(session.user.id);
  return res.json(report);
}
// NEW (Supastarter, App Router):
// app/api/reports/generate/route.ts
import { createClient } from '@/utils/supabase/server';
export async function POST() {
  const supabase = await createClient();
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) return new Response('Unauthorized', { status: 401 });
  const report = await generateReport(user.id);
  return Response.json(report);
}
// The core business logic (generateReport) stays IDENTICAL.
// Only the auth pattern changed.
Common Migration Paths
ShipFast → Supastarter (B2C → B2B)
Trigger: Your SaaS grew and enterprise customers need team accounts.
Main changes:
- Auth: Supabase Auth stays (both use it); auth code largely compatible
- Database: Add organization/member tables; update resource ownership
- Billing: Add per-seat billing logic; Stripe stays
- Routes: Convert Pages Router to App Router if ShipFast version was old
Time estimate: 2-3 weeks for a medium-sized SaaS
// The core migration: resource ownership change
// OLD (user-owned):
const projects = await db.project.findMany({ where: { userId } });
// NEW (org-owned):
const projects = await db.project.findMany({
  where: { organizationId: activeOrganizationId },
});
// Data migration script:
// For each user → create organization → move their resources to org
async function migrateToOrgs() {
  const users = await db.user.findMany({ include: { projects: true } });
  for (const user of users) {
    // Create a personal organization for each solo user:
    const org = await db.organization.create({
      data: {
        name: `${user.name}'s workspace`,
        slug: user.email.split('@')[0],
        members: { create: { userId: user.id, role: 'owner' } },
      },
    });
    // Move their resources:
    await db.project.updateMany({
      where: { userId: user.id },
      data: { organizationId: org.id },
    });
  }
}
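One caveat in the script above: deriving the slug from the email local part collides when two users share it. A sketch of a collision-safe variant (the helper name and the `taken` uniqueness check are hypothetical; in practice `taken` would query the organizations table):

```typescript
// Generate a unique slug, appending a numeric suffix on collision.
// `taken` stands in for a uniqueness check against the organizations table.
export function uniqueSlug(base: string, taken: Set<string>): string {
  const slug = base.toLowerCase().replace(/[^a-z0-9]+/g, '-');
  if (!taken.has(slug)) return slug;
  let n = 2;
  while (taken.has(`${slug}-${n}`)) n++;
  return `${slug}-${n}`;
}
```

In the migration loop you would call this with the set of slugs created so far, so the second `alex@...` user gets `alex-2` instead of a unique-constraint failure halfway through the run.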
Any Boilerplate → T3 Stack (Framework Upgrade)
Trigger: Your boilerplate was abandoned on Next.js 13 Pages Router.
Main changes:
- Routing: pages/ → app/ directory
- Data fetching: getServerSideProps → Server Components + Server Actions
- Auth: likely changing (old boilerplates often use older NextAuth patterns)
// Pages Router → App Router patterns:
// OLD (getServerSideProps):
export async function getServerSideProps(context) {
  const session = await getServerSession(context.req, context.res, authOptions);
  if (!session) {
    return { redirect: { destination: '/login', permanent: false } };
  }
  const data = await fetchDashboardData(session.user.id);
  return { props: { data } };
}
// NEW (Server Component + redirect):
import { redirect } from 'next/navigation';
import { auth } from '@/auth'; // your NextAuth v5 (Auth.js) config

export default async function DashboardPage() {
  const session = await auth();
  if (!session) redirect('/login');
  const data = await fetchDashboardData(session.user.id);
  return <Dashboard data={data} />;
}
Migration Timeline Template
For a medium-complexity SaaS (auth + billing + 5-10 features):
Week 1: Foundation
Mon-Tue: Set up new boilerplate, deploy to v2.yourdomain.com
Wed-Thu: Database schema migration + data copy scripts
Fri: Auth layer migration + testing
Week 2: Money Layer
Mon-Wed: Billing migration (Stripe webhooks, subscription sync)
Thu-Fri: Test all billing flows (subscribe, upgrade, cancel, webhook)
Week 3: Core Features
Each day: migrate 1-2 features
Testing after each feature
Keep the old version running; add a feature-parity check
Week 4: Polish + Cutover
Mon-Tue: Admin pages, settings, secondary features
Wed: Beta users on v2 (10% traffic)
Thu: 50% traffic to v2
Fri: Full cutover (DNS swap + old app in maintenance mode)
Week 5: Cleanup
Mon: Decommission old app
Tue: Remove migration-specific compatibility code
Wed: Final testing, close migration tickets
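The 10% to 50% to 100% ramp in week 4 should be sticky per user, so the same person always lands on the same version between visits. A deterministic bucketing sketch, assuming you can key on a stable user ID at the routing layer (the hash function here is a simple illustrative one, not a recommendation):

```typescript
// Deterministically map a user ID to a bucket in [0, 100), then route
// to v2 if the bucket falls under the current rollout percentage.
// The same ID always yields the same bucket, so users don't flip-flop.
function bucketOf(userId: string): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) | 0;
  }
  return Math.abs(hash) % 100;
}

export function routeToV2(userId: string, rolloutPercent: number): boolean {
  return bucketOf(userId) < rolloutPercent;
}
```

Raising the rollout from 10 to 50 only ever moves users from v1 to v2, never back, which keeps the migration experience one-directional.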
When NOT to Migrate
Sometimes migrating is the wrong call:
Don't migrate if:
→ You've customized >60% of the boilerplate
(You've already done the work — migration gains little)
→ The migration would take longer than 3 months
(Your product is now the boilerplate. Build on top, don't restart.)
→ Your tech debt is in your features, not the boilerplate
(Migrating scaffolding won't fix business logic problems)
→ You're changing because of hype, not need
(Think Bun is faster than Node.js for your use case? Measure first.)
Do migrate if:
→ Your boilerplate has an unfixable security vulnerability
→ Your auth provider is shutting down
→ You cannot add needed features (multi-tenancy) without months of work
→ The original boilerplate framework version is EOL
Avoiding the Trap in the First Place
The best migration is the one you never need. Most developers who end up migrating boilerplates made their original selection under time pressure — they grabbed the first result that looked reasonable and started shipping. When you slow down and evaluate fit first, the odds of a costly migration drop dramatically.
The key questions to ask before committing to a boilerplate:
Does your product need multi-tenancy from day one? If you're building B2B SaaS with team accounts, start with a boilerplate that has organization support built in — Makerkit and Supastarter are the standard choices here. Retrofitting org support onto a user-centric boilerplate like ShipFast is exactly the pattern that triggers the migrations described in this guide. See our comparison of the top boilerplate options for 2026 to understand which ones are architected for B2B vs B2C.
What auth pattern does your product need long-term? Changing auth providers mid-flight is the single most painful migration type because passwords can't be exported. If you're considering the Clerk ecosystem from the start, begin there and avoid the NextAuth-to-Clerk migration later. Our detailed comparison of Better Auth vs Clerk vs NextAuth explains the tradeoffs clearly.
What's the deployment model? Some boilerplates are opinionated about Vercel and break if you try to self-host or move to Railway, Render, or Fly.io. Check the deployment section of our production deployment guide before committing to a starter.
The Hidden Complexity of Auth Migrations
Auth migrations deserve their own expanded discussion because they're categorically harder than any other migration. When you migrate a database, you copy data. When you migrate a billing integration, Stripe customer IDs survive. When you migrate auth, you face a hard constraint: password hashes live inside the hosted provider and generally cannot be exported.
This means you usually cannot silently move existing password users from one provider's hash store to another's. You have three options, each with significant UX cost:
The forced re-authentication approach works for apps where most users sign in via OAuth (Google, GitHub). Send an email explaining the upgrade, deprecate old credentials on a timeline, and let users re-establish their session through OAuth. For products where Google/GitHub OAuth is the primary sign-in method, this is often painless — 80% of users just click "Continue with Google" again and they're in.
The parallel auth period works for apps with a substantial email-and-password user base. You run two auth systems simultaneously for a defined window (typically 60-90 days). New users go to the new system. Existing users hit the old system on first login, which re-authenticates them and silently migrates their session to the new provider. At the end of the window, force remaining holdouts through a password reset. This is operationally complex but minimizes disruption to active users.
The gradual migration approach works for enterprise clients where any disruption is contractually problematic. Each tenant migrates on their own schedule. SSO-enabled tenants (SAML/OIDC) migrate last because their identity lives upstream of your auth provider anyway — they just need their IdP config re-pointed.
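The parallel-auth window can be sketched as a single login path that tries the new system first, falls back to the old one, and migrates the user on a successful old-system login. The stores and verify functions below are stand-ins, not real provider APIs, and a real implementation would store a freshly computed hash, never the plaintext password:

```typescript
// Lazy auth migration: verify against the new system first; if the user
// only exists in the old system, verify there, then create them in the
// new system so subsequent logins skip the fallback entirely.
// Both "systems" are in-memory stand-ins for illustration only.
type Verify = (email: string, password: string) => boolean;

export function makeLogin(newUsers: Map<string, string>, oldVerify: Verify) {
  return function login(
    email: string,
    password: string,
  ): 'new' | 'migrated' | 'denied' {
    if (newUsers.get(email) === password) return 'new';
    if (oldVerify(email, password)) {
      newUsers.set(email, password); // migrate on first successful old login
      return 'migrated';
    }
    return 'denied';
  };
}
```

The appeal of this pattern is that active users migrate themselves without noticing; only dormant accounts remain for the end-of-window password reset.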
What Survives a Migration
One of the most valuable mental shifts for developers planning a boilerplate migration: your product is not the boilerplate. The boilerplate is a launch accelerator. The product is the unique business logic you've built on top.
In any well-structured migration, these components carry over unchanged:
Business logic lives in service files, utility functions, and data transformation layers. The function that calculates a user's subscription status based on their plan and usage, the algorithm that ranks search results, the logic that generates a report — none of this touches the boilerplate. It travels across with a copy-paste.
UI components you've built are almost always portable. A <DataTable> component you wrote for your analytics dashboard doesn't care whether the underlying boilerplate uses ShipFast or Supastarter. Tailwind-based components are particularly portable because they have no runtime dependencies beyond the class names.
Database schemas and data are completely independent of the boilerplate, as long as you stay in the same database type (or are willing to write a migration script). Even changing ORMs from Prisma to Drizzle is non-destructive — the SQL tables are identical, only the query layer changes.
API route business logic survives. The endpoint that creates a new project, sends a notification, or processes a webhook payload does the same thing regardless of whether it lives in pages/api/ or app/api/. The auth check pattern changes; the business logic doesn't.
What does NOT survive cleanly: boilerplate-specific abstractions. If you've built your entire app around ShipFast's getServerActionUser() helper, every Server Action needs to be updated. If you've leaned on Supastarter's organization context hooks, those patterns are specific to that boilerplate. The more you've built against boilerplate-specific APIs, the more migration work you have.
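The defensive pattern against boilerplate-specific abstractions is a thin adapter: route every call to a starter helper through one module you own, so a migration rewrites one file instead of every call site. A sketch with hypothetical names; the `providerLookup` argument stands in for whatever helper your current boilerplate exposes:

```typescript
// auth-adapter.ts: the only file allowed to touch boilerplate auth helpers.
// Feature code depends on this interface, never on the starter directly.
export interface CurrentUser {
  id: string;
  email: string;
}

export type GetCurrentUser = () => Promise<CurrentUser | null>;

// Today this wraps the old boilerplate's helper; after migration,
// only this one binding changes, while feature code is untouched.
export function makeGetCurrentUser(
  providerLookup: () => Promise<{ id: string; email: string } | null>,
): GetCurrentUser {
  return async () => {
    const user = await providerLookup();
    return user ? { id: user.id, email: user.email } : null;
  };
}
```

It is cheap insurance: the adapter is a few dozen lines, and it converts "update every Server Action" into "update one module".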
Data Migration Scripts: The Practical Reality
Most teams discover their "just copy the database" plan is more complex in practice. A few real scenarios:
If you're migrating between Supabase projects (e.g., moving from a development Supabase instance to a new boilerplate's Supabase project), use pg_dump and pg_restore for the data, then run the new boilerplate's schema migrations on top. The auth tables live in Supabase's auth schema, which you don't control — you'll need to use Supabase's admin API to migrate users.
If you're migrating from Prisma to Drizzle while keeping the same Postgres database, the migration is zero-data-movement. Run drizzle-kit introspect to generate Drizzle schema from your existing tables, update import paths throughout the codebase, and you're done. The database is untouched.
If you're migrating from MongoDB to PostgreSQL (common when founders discovered they needed relational queries), plan for data transformation work. MongoDB documents with nested arrays need to be flattened into relational tables. For production data, test your migration script on a full copy of the production dump before running it against production. See our guide on zero-downtime database migrations for the cutover procedure.
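That flattening step is mechanical but easy to get wrong around foreign keys; a sketch that splits a user document with an embedded projects array into relational rows (the field names are illustrative, not a prescribed schema):

```typescript
// Flatten a MongoDB-style user document with an embedded projects array
// into two relational row sets: one users row, plus N projects rows that
// reference it by foreign key.
interface UserDoc {
  _id: string;
  email: string;
  projects?: { name: string; createdAt?: string }[];
}

export function flattenUser(doc: UserDoc): {
  userRow: { id: string; email: string };
  projectRows: { userId: string; name: string; createdAt: string | null }[];
} {
  return {
    userRow: { id: doc._id, email: doc.email },
    projectRows: (doc.projects ?? []).map((p) => ({
      userId: doc._id, // foreign key back to the users row
      name: p.name,
      createdAt: p.createdAt ?? null, // normalize missing fields explicitly
    })),
  };
}
```

Run a function like this over the full dump, insert parents before children so foreign keys resolve, and keep the normalization of missing fields explicit rather than relying on column defaults.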
Free and Open Source Boilerplate Migration Considerations
Migrations from free boilerplates have one important difference from paid ones: there's no vendor support to help. If you're migrating from an open-source starter like OpenSaaS, SaaSgear, or any of the best free open-source SaaS boilerplates, you're relying entirely on community docs and your own audit.
This matters because free boilerplates often have less documentation around internal patterns. When you're trying to understand how the email service is wired, or what the subscription status lifecycle looks like, you may need to read the source rather than consult a migration guide.
The audit step in Phase 1 becomes even more important here. Spend an afternoon mapping every boilerplate touch point before writing a single line of migration code. You'll often discover integrations you forgot the boilerplate was handling — email delivery, webhook processing, cron jobs — that need explicit migration plans.
Find the right boilerplate for your next stage of growth at StarterPick.