How to Add Feature Flags to Your SaaS Starter (2026)
TL;DR
Feature flags let you ship code without shipping features. For SaaS, they serve three purposes: plan-based gating (Pro only), progressive rollout (10% → 100%), and A/B testing. You can build a lightweight flags system in half a day or adopt a managed service like Unleash or Growthbook. This guide covers both paths.
Why Feature Flags Matter for SaaS Products
Feature flags decouple deployment from release. Without them, shipping a new feature means pushing it to 100% of users simultaneously, risks, bugs, and unexpected edge cases included. With flags, you can deploy code on Tuesday, release the feature to 5% of users on Friday, watch metrics for 48 hours, then expand to 100%.
For SaaS specifically, flags serve three distinct purposes that are worth separating conceptually:
Plan gating (most common): Feature X is only available to Pro plan subscribers. This is a static configuration check — the same user always gets the same answer. planHasFeature('free', 'advanced_analytics') → false. This is technically a feature flag but doesn't need progressive rollout or targeting logic. A simple lookup table in code is sufficient.
Progressive rollout: A new feature is deployed but enabled only for 10% of users initially. This is the classic canary release pattern, and the same mechanism underpins A/B testing. The flag is dynamic: the same user always gets the same answer (deterministically, based on their ID), but different users get different answers. This requires rollout-percentage logic.
Kill switch: A flag that's normally enabled but can be instantly disabled if something goes wrong. "We can turn it off in 30 seconds without a deploy" is enormously valuable for high-risk features or during incidents.
Building all three into a single implementation up front is tempting, but it's usually over-engineering. Start with plan gating (hardcoded), add rollout flags when you have a specific need, and adopt a managed service only when the in-house system's complexity exceeds the cost of the service.
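The kill-switch pattern in particular needs almost no infrastructure. A minimal in-memory sketch (names are illustrative; in practice the switch would be backed by a database row or environment variable so it can be flipped without a deploy):

```typescript
// Hypothetical kill-switch registry: switches default ON and are only
// flipped OFF during an incident.
const killSwitches: Record<string, boolean> = {
  new_billing_pipeline: true, // healthy: feature is live
};

function isLive(name: string): boolean {
  // Unknown switches are treated as live; only an explicit `false` kills.
  return killSwitches[name] !== false;
}

// During an incident: flip the switch (with a DB-backed store, no deploy needed).
killSwitches['new_billing_pipeline'] = false;
```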
Option 1: Lightweight DIY Flags (Recommended to Start)
For most SaaS apps, feature flags are just a database table and a helper function:
```prisma
// prisma/schema.prisma
model FeatureFlag {
  id         String   @id @default(cuid())
  name       String   @unique
  enabled    Boolean  @default(false)
  enabledFor String[] // user IDs or organization IDs
  rolloutPct Int      @default(0) // 0-100 percentage rollout
  createdAt  DateTime @default(now())
  updatedAt  DateTime @updatedAt
}
```
```typescript
// lib/flags.ts
import { prisma } from './prisma';

export async function isFeatureEnabled(
  flagName: string,
  userId?: string
): Promise<boolean> {
  const flag = await prisma.featureFlag.findUnique({
    where: { name: flagName },
  });

  if (!flag) return false;
  if (!flag.enabled) return false;

  // Specific user override
  if (userId && flag.enabledFor.includes(userId)) return true;

  // Percentage rollout (deterministic per user)
  if (flag.rolloutPct > 0 && userId) {
    const hash = simpleHash(userId + flagName);
    return hash % 100 < flag.rolloutPct;
  }

  // No userId to bucket: only fully-rolled-out flags count as enabled
  return flag.rolloutPct === 100;
}

function simpleHash(str: string): number {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    const char = str.charCodeAt(i);
    hash = ((hash << 5) - hash) + char;
    hash = hash & hash; // constrain to 32-bit integer range
  }
  return Math.abs(hash);
}
```
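One property of this design worth calling out: the hash-based rollout is deterministic, so a user never flips in and out of a feature between requests, and widening the percentage only ever adds users. A standalone sketch of that behavior, using the same `simpleHash` as above:

```typescript
// Same hash as in lib/flags.ts, shown standalone to illustrate determinism.
function simpleHash(str: string): number {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash) + str.charCodeAt(i);
    hash = hash & hash; // constrain to 32-bit integer range
  }
  return Math.abs(hash);
}

function inRollout(userId: string, flagName: string, rolloutPct: number): boolean {
  return simpleHash(userId + flagName) % 100 < rolloutPct;
}

// Repeated calls always agree, so a 10% -> 30% widening only adds users:
const first = inRollout('user_123', 'new_dashboard', 30);
const second = inRollout('user_123', 'new_dashboard', 30);
// first === second, on every request
```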
Plan-Based Feature Gating
The most common SaaS use case — gate features by subscription:
```typescript
// lib/plan-features.ts
const PLAN_FEATURES: Record<string, string[]> = {
  free: ['dashboard', 'basic_analytics'],
  pro: ['dashboard', 'basic_analytics', 'advanced_analytics', 'api_access', 'team_members'],
  enterprise: ['*'], // All features
};

export function planHasFeature(planName: string, feature: string): boolean {
  const features = PLAN_FEATURES[planName] ?? [];
  return features.includes('*') || features.includes(feature);
}
```

```typescript
// In your API routes
export async function requireFeature(userId: string, feature: string) {
  const user = await prisma.user.findUnique({
    where: { id: userId },
    include: { subscription: true },
  });

  const plan = user?.subscription?.priceId
    ? getPlanFromPriceId(user.subscription.priceId)
    : 'free';

  if (!planHasFeature(plan, feature)) {
    throw new Error(`Feature '${feature}' requires ${getRequiredPlan(feature)} plan`);
  }
}
```
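The route above references a `getRequiredPlan` helper that isn't shown. One plausible implementation derives it from the same `PLAN_FEATURES` table; the cheapest-first plan ordering here is an assumption, not part of the original code:

```typescript
const PLAN_FEATURES: Record<string, string[]> = {
  free: ['dashboard', 'basic_analytics'],
  pro: ['dashboard', 'basic_analytics', 'advanced_analytics', 'api_access', 'team_members'],
  enterprise: ['*'], // All features
};

// Cheapest-first ordering (assumed); the first plan containing the feature wins.
const PLAN_ORDER = ['free', 'pro', 'enterprise'];

function getRequiredPlan(feature: string): string {
  for (const plan of PLAN_ORDER) {
    const features = PLAN_FEATURES[plan] ?? [];
    if (features.includes('*') || features.includes(feature)) return plan;
  }
  return 'enterprise'; // unknown feature: assume the top plan
}

// getRequiredPlan('advanced_analytics') returns 'pro';
// getRequiredPlan('dashboard') returns 'free'
```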
Using Flags in Next.js
Server Components
```tsx
// app/dashboard/analytics/page.tsx
import { isFeatureEnabled } from '@/lib/flags';
import { getServerSession } from 'next-auth';

export default async function AnalyticsPage() {
  const session = await getServerSession();
  const hasAdvancedAnalytics = await isFeatureEnabled(
    'advanced_analytics',
    session?.user.id
  );

  return (
    <div>
      <BasicAnalytics />
      {hasAdvancedAnalytics && <AdvancedAnalytics />}
    </div>
  );
}
```
Client Components with Context
```tsx
// providers/FlagsProvider.tsx
'use client';
import { createContext, useContext } from 'react';

type FlagsContextType = Record<string, boolean>;

const FlagsContext = createContext<FlagsContextType>({});

export function FlagsProvider({
  flags,
  children,
}: {
  flags: FlagsContextType;
  children: React.ReactNode;
}) {
  return <FlagsContext.Provider value={flags}>{children}</FlagsContext.Provider>;
}

export function useFlag(name: string): boolean {
  const flags = useContext(FlagsContext);
  return flags[name] ?? false;
}
```

```tsx
// Load flags in root layout and pass to the provider
// app/layout.tsx (inside the async RootLayout component)
// getUserFlags returns a Record<string, boolean>; see the per-request
// cached helper under Performance Considerations below.
const session = await getServerSession();
const flags = session?.user.id
  ? await getUserFlags(session.user.id)
  : {};
```
Performance Considerations
Feature flags add latency if implemented naively. A flag check that queries the database on every request, for every component that checks a flag, adds up.
Cache flag reads at the request level: The first flag check for a user in a request should query the database; subsequent checks should use cached results. Use React's cache() for this in Server Components:
```typescript
// lib/flags.ts — cached per request
import { cache } from 'react';

// This runs once per request, even if called multiple times
const getUserFlagsFromDB = cache(async (userId: string) => {
  const flags = await prisma.featureFlag.findMany();
  return flags.reduce((acc, flag) => {
    // Check if this user has this flag enabled
    const enabled =
      flag.enabled &&
      (flag.rolloutPct === 100 ||
        flag.enabledFor.includes(userId) ||
        (flag.rolloutPct > 0 && simpleHash(userId + flag.name) % 100 < flag.rolloutPct));
    acc[flag.name] = enabled;
    return acc;
  }, {} as Record<string, boolean>);
});

export async function isFeatureEnabled(flagName: string, userId?: string): Promise<boolean> {
  if (!userId) return false;
  const flags = await getUserFlagsFromDB(userId);
  return flags[flagName] ?? false;
}
```
Cache frequently-read flags globally: For flags that rarely change (plan gating features, permanent feature launches), add a short TTL cache at the application level using unstable_cache:
```typescript
import { unstable_cache } from 'next/cache';

// Revalidates every 5 minutes — acceptable staleness for feature flags
export const getGlobalFlags = unstable_cache(
  async () => prisma.featureFlag.findMany({ where: { enabled: true } }),
  ['global-flags'],
  { revalidate: 300 }
);
```
For most SaaS products with under 50,000 DAU, the per-request cache is sufficient. The global cache becomes valuable when flag checks happen in layouts that serve every page request.
Option 2: Managed Flag Services
When you need targeting rules, analytics, and team collaboration:
| Service | Free Tier | Highlights |
|---|---|---|
| Growthbook | Self-host free | A/B testing built-in, open source |
| Unleash | Self-host free | Enterprise feature parity |
| Flagsmith | 50K requests/mo | Simple API, good SDKs |
| LaunchDarkly | 1 seat free | Best-in-class, expensive at scale |
Growthbook (Self-hosted, Free)
```typescript
// lib/growthbook.ts
import { GrowthBook } from '@growthbook/growthbook';

export function createGrowthBook(userId: string, attributes: Record<string, unknown>) {
  const gb = new GrowthBook({
    apiHost: process.env.GROWTHBOOK_API_HOST!,
    clientKey: process.env.GROWTHBOOK_CLIENT_KEY!,
    attributes: {
      id: userId,
      ...attributes,
    },
  });
  return gb;
}

// Usage
const gb = createGrowthBook(user.id, {
  plan: user.subscription?.plan,
  country: user.country,
});
await gb.loadFeatures();
const showNewDashboard = gb.isOn('new_dashboard');
```
Admin UI for Flag Management
A simple admin page to toggle flags without code deploys:
```tsx
// app/admin/flags/page.tsx
export default async function FlagsAdminPage() {
  const flags = await prisma.featureFlag.findMany({
    orderBy: { name: 'asc' },
  });

  return (
    <div>
      <h1>Feature Flags</h1>
      {flags.map(flag => (
        <div key={flag.id} className="flex items-center justify-between py-3 border-b">
          <div>
            <p className="font-medium">{flag.name}</p>
            <p className="text-sm text-gray-500">Rollout: {flag.rolloutPct}%</p>
          </div>
          <form action={toggleFlag}>
            <input type="hidden" name="id" value={flag.id} />
            <input type="hidden" name="enabled" value={String(!flag.enabled)} />
            <button type="submit" className={flag.enabled ? 'bg-green-500' : 'bg-gray-300'}>
              {flag.enabled ? 'ON' : 'OFF'}
            </button>
          </form>
        </div>
      ))}
    </div>
  );
}
```
Monitoring Flag Impact
When you enable a flag for 10% of users, you need to know whether anything broke. Feature flags should be paired with monitoring, not just trust. The minimum monitoring setup for a new flag rollout:
Track error rates segmented by flag status. If new_dashboard is enabled for 10% of users, and that cohort shows 3x the error rate of the control group, that's a signal to roll back immediately. Most monitoring services (Datadog, PostHog, Sentry) support custom properties on events — attach the flag name and status to every error report while the flag is in rollout.
For Server Components, pass flag context to your error boundary:
```tsx
// app/dashboard/error.tsx
'use client';
import { useEffect } from 'react';
import * as Sentry from '@sentry/nextjs';

export default function DashboardError({
  error,
  reset,
}: {
  error: Error;
  reset: () => void;
}) {
  useEffect(() => {
    // Log flag context with the error for segmentation
    Sentry.captureException(error, {
      tags: { component: 'dashboard' },
      // Flag context would be passed via a context provider
    });
  }, [error]);

  return <button onClick={reset}>Try again</button>;
}
```
A flag rollout with active error monitoring is a controlled release. A flag rollout without monitoring is hope.
Flag Lifecycle Management
Feature flags accumulate over time. A flag that was used for a 3-week rollout in 2025 is still in your codebase in 2026, checked on every relevant request, and creating conditional code paths that make refactoring harder. Flag debt is real.
The discipline for managing flag lifecycle:
Create flags with expiry intent: When you create a flag, document its purpose and whether it's permanent (plan gating) or temporary (rollout, A/B test). Rollout flags should be removed from code once the rollout reaches 100% and has been stable for 2 weeks.
Flag audits quarterly: Every 3 months, review the flag list. Any flag that's been at 100% rollout for more than a month should be removed from code and deleted from the database. Any flag that's been at 0% (disabled) for more than a month should be evaluated: is it still needed?
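The quarterly audit is easy to script. A sketch of the stale-flag filter as a pure function (the Prisma fetch is assumed to happen elsewhere; field names follow the `FeatureFlag` schema from Option 1):

```typescript
interface FlagRow {
  name: string;
  enabled: boolean;
  rolloutPct: number;
  updatedAt: Date;
}

// Flags untouched for 30+ days that are either fully launched or disabled
// are candidates for removal from code and database.
function findStaleFlags(flags: FlagRow[], now: Date): FlagRow[] {
  const cutoff = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
  return flags.filter(
    (f) => f.updatedAt < cutoff && (f.rolloutPct === 100 || !f.enabled)
  );
}
```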
One-way flag convention: Temporary rollout flags should only ever increase rollout percentage, never decrease. If you need to disable a feature, use a kill switch flag (feature_x_enabled default true, can be set to false). Rollout flags that go back and forth confuse users.
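The one-way convention can also be enforced mechanically: the admin toggle action runs a guard before persisting. A sketch (the `purpose` values mirror the flag schema shown later in this guide):

```typescript
type FlagPurpose = 'rollout' | 'kill_switch' | 'plan_gate' | 'experiment';

// Reject updates that move a rollout flag backwards; kill switches may go
// either way, since disabling is their whole job.
function validateRolloutChange(
  purpose: FlagPurpose,
  oldPct: number,
  newPct: number
): { ok: boolean; reason?: string } {
  if (purpose === 'rollout' && newPct < oldPct) {
    return {
      ok: false,
      reason: 'Rollout flags only move forward; add a kill switch to disable the feature.',
    };
  }
  return { ok: true };
}
```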
A/B Testing with Feature Flags
A rollout flag gradually enables a feature for everyone; an A/B test permanently splits users into groups and measures outcomes per group. The mechanics overlap, but the requirements differ.
For a true A/B test, you need:
1. A deterministic assignment (the same user always gets the same variant): the simpleHash approach above handles this
2. Outcome measurement: which group converted, churned, or performed the target action
3. Statistical significance: enough data before declaring a winner
The flags implementation above handles (1). For (2) and (3), either integrate PostHog's A/B testing feature (it handles the statistics automatically) or use Growthbook which is purpose-built for experiment tracking.
Don't build your own A/B test analysis. Determining statistical significance from raw conversion data requires proper variance calculations that are easy to get wrong. PostHog and Growthbook both handle this correctly.
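Deterministic assignment generalizes from two groups to N variants. A sketch reusing the same hash (the experiment and variant names are illustrative):

```typescript
// Same hash as in lib/flags.ts.
function simpleHash(str: string): number {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash) + str.charCodeAt(i);
    hash = hash & hash; // constrain to 32-bit integer range
  }
  return Math.abs(hash);
}

// Same user + experiment always lands in the same variant bucket.
function assignVariant(userId: string, experiment: string, variants: string[]): string {
  return variants[simpleHash(userId + experiment) % variants.length];
}

const variant = assignVariant('user_123', 'pricing_page_test', ['control', 'variant_a']);
// Record `variant` as an event property so PostHog/Growthbook can segment outcomes.
```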
Testing Code with Feature Flags
Code gated by feature flags needs testing in both the enabled and disabled state. The pattern for test environments:
```typescript
// test/helpers/flags.ts
import * as flagsModule from '@/lib/flags';

export function withFeatureEnabled(flagName: string) {
  beforeEach(() => {
    // Override the flag check for this test suite
    jest.spyOn(flagsModule, 'isFeatureEnabled').mockImplementation(
      async (name) => name === flagName
    );
  });

  afterEach(() => {
    jest.restoreAllMocks();
  });
}
```

```typescript
// Usage in tests:
describe('AdvancedAnalytics', () => {
  describe('when advanced_analytics flag is enabled', () => {
    withFeatureEnabled('advanced_analytics');

    it('shows the advanced analytics component', async () => {
      // test
    });
  });
});
```
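An alternative to module mocking is dependency injection: components take a flag reader as a parameter, and tests pass a stub. A minimal sketch (the `FlagReader` type is hypothetical, not part of the code above):

```typescript
type FlagReader = (name: string) => boolean;

// Build a stub that enables exactly the listed flags.
function makeStubFlags(enabled: string[]): FlagReader {
  return (name) => enabled.includes(name);
}

const flags = makeStubFlags(['advanced_analytics']);
// flags('advanced_analytics') returns true; flags('api_access') returns false
```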
Organizational Workflows for Feature Flags
As your team grows, feature flags become a coordination mechanism, not just a technical tool. An engineer enabling a flag affects every user in every flagged cohort — that requires some process.
Flag ownership: Every flag should have a named owner — the engineer or PM responsible for deciding when it rolls out and when it's cleaned up. Ownerless flags accumulate indefinitely. Add an owner field to your flag schema and make it required:
```prisma
model FeatureFlag {
  id         String    @id @default(cuid())
  name       String    @unique
  enabled    Boolean   @default(false)
  rolloutPct Int       @default(0)
  owner      String    // Email of responsible party
  expiresAt  DateTime? // When this flag should be removed from code
  purpose    String    // 'rollout' | 'kill_switch' | 'plan_gate' | 'experiment'
  createdAt  DateTime  @default(now())
  updatedAt  DateTime  @updatedAt
}
```
Approval for production rollout: For products with enterprise customers, requiring a second approver before enabling a flag for 100% production rollout prevents accidents. This can be as lightweight as a Slack message to the team channel with a 15-minute waiting period before enabling a high-impact flag.
Flag change audit log: Track every flag state change with who changed it and when. When a production incident is caused by a flag being enabled, the audit log is essential for the post-mortem. Append-only log table:
```prisma
model FeatureFlagAuditLog {
  id        String   @id @default(cuid())
  flagId    String
  changedBy String   // User email
  oldValue  Json     // { enabled, rolloutPct } before change
  newValue  Json     // { enabled, rolloutPct } after change
  reason    String?  // Optional: why was this changed
  createdAt DateTime @default(now())
}
```
For teams where multiple people touch feature flags, this audit trail pays for itself the first time you need to answer "who enabled this and why?"
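Building the log entry can stay pure, and no-op updates can be skipped so they don't add noise. A sketch of that shape (persisting the entry through Prisma is assumed to happen afterwards):

```typescript
interface FlagState {
  enabled: boolean;
  rolloutPct: number;
}

// Returns null when the update changes nothing, so no noise lands in the log.
function buildAuditEntry(
  flagId: string,
  changedBy: string,
  oldValue: FlagState,
  newValue: FlagState,
  reason?: string
) {
  const changed =
    oldValue.enabled !== newValue.enabled ||
    oldValue.rolloutPct !== newValue.rolloutPct;
  if (!changed) return null;
  return { flagId, changedBy, oldValue, newValue, reason: reason ?? null };
}
```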
Time Budget
| Component | Duration |
|---|---|
| DB schema + migration | 0.5 hour |
| isFeatureEnabled helper | 1 hour |
| Plan-based gating | 1 hour |
| Server/client component usage | 1 hour |
| Admin toggle UI | 2 hours |
| Total | ~1 day |
Feature Flags in Edge Middleware
For flags that need to be evaluated at the CDN edge — before the request reaches your Next.js server — standard database-backed flags don't work (no database access at the edge). The pattern for edge-compatible flags uses environment variables or edge-compatible KV stores:
```typescript
// middleware.ts — edge-compatible flag check
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Flags parsed from an env var, e.g. EDGE_FLAGS=new_onboarding:true,redesigned_pricing:false
// Note: changing env vars on Vercel requires a redeploy; use an edge KV store
// (e.g. Vercel Edge Config) if you need true no-deploy changes.
function getEdgeFlags(): Record<string, boolean> {
  const raw = process.env.EDGE_FLAGS ?? '';
  return Object.fromEntries(
    raw.split(',').filter(Boolean).map(pair => {
      const [key, value] = pair.split(':');
      return [key, value === 'true'];
    })
  );
}

export function middleware(request: NextRequest) {
  const flags = getEdgeFlags();

  // A/B test at the edge — split traffic before any server rendering
  if (flags.new_onboarding) {
    const url = request.nextUrl.clone();
    if (url.pathname === '/onboarding') {
      url.pathname = '/onboarding-v2';
      return NextResponse.rewrite(url);
    }
  }

  return NextResponse.next();
}
```
This covers a narrower set of use cases than full feature flags — primarily routing and A/B testing at the URL level. For most SaaS products, database-backed flags at the server component level are the right default. Edge flag evaluation is a specialized optimization for high-traffic marketing pages.
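The env-var parser in the middleware is pure and easy to sanity-check outside the edge runtime. The same logic, with the env read swapped for an argument:

```typescript
// Same parsing logic as getEdgeFlags, testable standalone.
function parseEdgeFlags(raw: string): Record<string, boolean> {
  return Object.fromEntries(
    raw.split(',').filter(Boolean).map((pair) => {
      const [key, value] = pair.split(':');
      return [key, value === 'true'];
    })
  );
}

const flags = parseEdgeFlags('new_onboarding:true,redesigned_pricing:false');
// flags.new_onboarding is true; flags.redesigned_pricing is false
```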
Related Resources
For the admin panel where you'll manage feature flags alongside user management and metrics, how to add an admin dashboard to your boilerplate covers the admin route protection and management UI. For plan-based feature gating in the context of usage billing (where features are limited by consumption rather than a binary on/off), usage-based billing with Stripe covers the quota management approach. For A/B testing on your marketing site and landing pages (as opposed to in-app features), SEO in SaaS boilerplates covers the analytics and conversion tracking setup.
Methodology
Implementation patterns based on LaunchDarkly's feature flag best practices documentation and Growthbook's open-source implementation. Flag lifecycle recommendations adapted from Martin Fowler's Feature Toggles article. A/B testing statistical methodology sourced from Optimizely's experimentation documentation.
Find boilerplates with feature flags built-in on StarterPick.
Check out this boilerplate
View ShipFast on StarterPick →