Docker vs Vercel Deployment for SaaS 2026

StarterPick Team

TL;DR

Vercel-only works for most indie SaaS: zero ops, instant deploys, great DX. Reach for Docker when you need persistent processes (background workers, WebSocket servers), cost control at scale, or multi-service deployments. Most teams start on Vercel and introduce Docker containers via Railway or Render when complexity demands it.

What "Vercel-only" Actually Means

Vercel deploys Next.js as serverless functions — each route/page is an isolated function invocation:

User request → Vercel Edge Network → Serverless Function → Response
                                    (spins up on demand; warm instances are reused)

Pros:

  • Zero server management
  • Scales to zero (no idle cost)
  • Automatic SSL, CDN, domains
  • Git push = deploy (zero config)

Cons:

  • No persistent state (functions die between requests)
  • No background processes
  • Hard execution timeouts (plan-dependent; roughly 60s on Pro by default, up to 300s with configuration)
  • Cold starts (50-500ms for first request after idle)
  • Serverless pricing can exceed container pricing at high volume

What Requires Docker/Containers

Some workloads don't fit the serverless model:

Background Job Queues

// This can't run on Vercel — it needs a persistent process
import { Queue, Worker } from 'bullmq';
import { Redis } from 'ioredis';

// BullMQ's blocking connections require maxRetriesPerRequest: null
const connection = new Redis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });
const emailQueue = new Queue('emails', { connection });

// Worker — runs continuously, not as a request handler
new Worker('emails', async (job) => {
  const { userId, template, data } = job.data;
  await sendEmail(userId, template, data);
}, { connection, concurrency: 10 });

console.log('Worker listening...');

BullMQ workers need a persistent Node.js process. Vercel can only enqueue jobs (via an API route); the worker itself must run in a container (Railway, Render, Fly.io, EC2).
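The enqueue half that does fit on Vercel can be sketched framework-free. This is an illustrative sketch, not code from any specific boilerplate: `QueueLike` mirrors just the `add` method of a BullMQ `Queue` so the logic can be exercised without Redis, and the payload shape is an assumption.

```typescript
// Enqueue-only logic for a Vercel API route (illustrative sketch).
// QueueLike mirrors the one BullMQ Queue method we need, so this
// function can be tested without a live Redis connection.
type QueueLike = { add: (name: string, data: unknown) => Promise<unknown> };

type EnqueueResult = { status: number; body: Record<string, unknown> };

export async function enqueueEmail(
  queue: QueueLike,
  payload: { userId?: string; template?: string },
): Promise<EnqueueResult> {
  // Validate before enqueueing — bad payloads should fail fast on Vercel,
  // not inside the worker container.
  if (!payload.userId || !payload.template) {
    return { status: 400, body: { error: 'userId and template required' } };
  }
  await queue.add('send', { userId: payload.userId, template: payload.template });
  return { status: 200, body: { status: 'queued' } };
}
```

A route handler then becomes a thin shell: `const r = await enqueueEmail(emailQueue, await req.json()); return Response.json(r.body, { status: r.status });`.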

WebSocket Servers

// WebSocket server — persistent connection, not request/response
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    // Broadcast to all connected clients
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});

Native WebSockets need a persistent server. Vercel's serverless functions can't hold long-lived connections, so for production real-time apps a dedicated container (or a managed real-time service) is the reliable path.

Long-Running Processes

// PDF generation — might take 60-120 seconds for complex documents
// Vercel Pro limit: 60 seconds
// Solution: Queue job, process in background container

// API route — queues the job (runs in < 5s on Vercel)
export async function POST(req: Request) {
  const { reportId } = await req.json();
  await pdfQueue.add('generate', { reportId });
  return Response.json({ status: 'queued', reportId });
}

// Worker container — generates the PDF (no serverless timeout)
// BullMQ worker; the queue name must match the one pdfQueue was created with
new Worker('pdf-reports', async (job) => {
  const pdf = await generateComplexReport(job.data.reportId);
  await uploadToS3(pdf, job.data.reportId);
  await notifyUser(job.data.reportId);
}, { connection });

Docker Setup for SaaS Boilerplates

A typical SaaS Docker setup:

# Dockerfile — multi-stage build
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./

FROM base AS deps
RUN npm ci --omit=dev

FROM base AS builder
RUN npm ci
COPY . .
RUN npm run build

FROM base AS runner
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json

EXPOSE 3000
CMD ["npm", "start"]

# docker-compose.yml — local development
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  worker:
    build: .
    command: node worker/index.js
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  postgres_data:
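The compose file's worker service runs `node worker/index.js`, which isn't shown above. A minimal entrypoint might look like the following — written here as TypeScript compiled to JS before the image build; the queue name, the `sendEmail` helper and its `../lib/email` location, and the `REDIS_URL` env var are assumptions carried over from the earlier examples.

```typescript
// worker/index.ts — compiled to worker/index.js before docker build.
import { Worker } from 'bullmq';
import { Redis } from 'ioredis';
import { sendEmail } from '../lib/email'; // hypothetical helper module

// BullMQ's blocking commands require maxRetriesPerRequest: null
const connection = new Redis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });

const worker = new Worker(
  'emails',
  async (job) => {
    const { userId, template, data } = job.data;
    await sendEmail(userId, template, data);
  },
  { connection, concurrency: 10 },
);

// Let in-flight jobs finish when the container is stopped or redeployed
process.on('SIGTERM', async () => {
  await worker.close();
  await connection.quit();
  process.exit(0);
});

console.log('Worker listening...');
```

The SIGTERM handler matters on Railway and Render, which send SIGTERM before replacing a container during deploys.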

The Hybrid Pattern: Vercel + Container

Most production indie SaaS uses a hybrid approach:

┌─────────────────────────────────────────────────────┐
│                   Vercel                             │
│  Next.js App (SSR + API routes for user-facing)     │
│  - Auth, dashboard, settings, checkout              │
│  - Stripe webhooks (fast, < 5s response)            │
└────────────────────────┬────────────────────────────┘
                         │ Enqueue jobs
                         ▼
┌─────────────────────────────────────────────────────┐
│              Railway / Render                        │
│  Background Workers (persistent containers)         │
│  - PDF generation                                   │
│  - Email sending queue                              │
│  - Data processing jobs                             │
│  - Scheduled tasks (cron)                           │
└────────────────────────────────────────────────────┘
// On Vercel — API route triggers background job
export async function POST(req: Request) {
  const data = await req.json();

  // Enqueue via Redis (Upstash on Vercel, or Railway Redis)
  await triggerBackgroundJob('process-user-data', data);

  // Return immediately — job runs in container
  return Response.json({ status: 'processing' });
}

Boilerplate Docker Support

Boilerplate | Dockerfile included | Docker Compose | Worker setup
--- | --- | --- | ---
T3 Stack | ❌ (community) | — | ❌ (left to you)
ShipFast | — | — | ❌ (left to you)
Supastarter | — | — | —
Makerkit | ✅ | ✅ | ✅ (worker service)
Epic Stack | ✅ | — | —
Bedrock | ✅ | ✅ | ✅ (worker service)

When to Choose What

Stay Vercel-only when:

  • No background jobs (or small jobs < 30s)
  • No WebSockets needed
  • Team has zero DevOps experience
  • Early stage, validating product

Add Docker containers (Railway/Render) when:

  • Need background job processing
  • Need WebSocket server
  • Jobs routinely exceed 30s
  • Want predictable container pricing vs serverless

Go full Docker/Kubernetes when:

  • Enterprise clients require on-premise or VPC deployment
  • Compliance requires data residency control
  • Traffic is high enough that serverless pricing exceeds container pricing
  • Need custom runtime environments

Cost Reality at Different Scales

Vercel's pricing model rewards low-traffic applications and punishes high-bandwidth serverless invocations. The free Hobby plan covers personal projects. The Pro plan ($20/month per user) adds production features. At the $20 baseline, Vercel is cheaper than any container platform for applications that don't need persistent processes.

The calculus changes around 500,000-1,000,000 monthly requests. Vercel's Pro plan includes 1 million Edge Function invocations per month, with overages at $0.60 per million. A SaaS with 10,000 daily active users making 10 API calls each generates 3 million invocations per month — roughly $1.20 in overages. That's manageable. A data-intensive application with 50 API calls per session and 10,000 DAU generates 15 million monthly invocations — $8.40/month in overages. Still reasonable.
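The overage arithmetic above is easy to encode and sanity-check. A small helper, with the Pro quota and rate from this section hardcoded (verify current pricing before relying on it):

```typescript
// Vercel Pro overage estimate: 1M invocations included, $0.60 per extra million
// (numbers from the paragraph above; check current pricing before relying on this).
export function invocationOverageUSD(monthlyInvocations: number): number {
  const includedInvocations = 1_000_000;
  const ratePerMillion = 0.6;
  const over = Math.max(0, monthlyInvocations - includedInvocations);
  const cost = (over / 1_000_000) * ratePerMillion;
  return Math.round(cost * 100) / 100; // round to cents
}
```

With the two traffic profiles above: `invocationOverageUSD(3_000_000)` returns 1.2 and `invocationOverageUSD(15_000_000)` returns 8.4.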

The price comparison flips when you run a sustained background-processing workload. Railway's Hobby plan starts at $5/month with usage-based compute billing; higher tiers (around $20/month) raise the resource ceilings. A dedicated Node.js worker handling background jobs on Railway costs roughly $5-20/month regardless of how many jobs run. The equivalent functionality on Vercel doesn't exist — you'd need an external queue service (Upstash Redis has a free tier for small queues) plus a separate worker host anyway.

The Background Job Inflection Point

Background job processing is the clearest signal that Vercel-only deployment is not enough. The test is simple: does your application need to do work that takes longer than 60 seconds, or work that should continue even if no user is actively making requests? If yes, you need a persistent process running somewhere.

The most common patterns that trigger this need: sending email sequences (welcome, onboarding, trial expiry — these fire on a schedule, not on user request), processing uploads (resizing images, generating thumbnails, transcribing audio), generating exports (PDF reports, CSV dumps of large datasets), and syncing data from external APIs (pulling records from Salesforce, updating analytics data overnight).
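Scheduled sequences like these are typically expressed as delayed jobs. A sketch, with illustrative template names and offsets (with BullMQ, the computed delay would be passed as the `delay` option to `queue.add`):

```typescript
// Onboarding sequence as delayed jobs: each template fires a fixed offset
// after signup. Offsets and names are illustrative, not a recommendation.
const SEQUENCE_OFFSET_HOURS = {
  welcome: 0,
  'onboarding-tips': 24,
  'trial-expiry': 13 * 24, // day 13 of a 14-day trial
} as const;

export function sequenceJobs(userId: string) {
  return Object.entries(SEQUENCE_OFFSET_HOURS).map(([template, hours]) => ({
    name: template,
    data: { userId, template },
    opts: { delay: hours * 60 * 60 * 1000 }, // BullMQ delay is in milliseconds
  }));
}
```

Each entry maps onto `emailQueue.add(name, data, opts)`; BullMQ's `addBulk` can enqueue the whole sequence in one call.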

Teams typically discover this need around month two or three of active building, often prompted by a user-facing timeout or a failed email sequence. The initial MVP can often handle these in foreground API routes with extended Vercel timeouts. As the application grows, the workloads grow, and foreground handling starts timing out or degrading user experience. The refactor to a queue-and-worker pattern while also building product features is painful. Starting with the hybrid architecture early — even if the worker is initially idle — reduces this pain.

Railway and Render as Container Hosts

Railway and Render are the two most common container platforms for Next.js SaaS teams who need persistent processes without the overhead of managing Kubernetes or raw EC2 instances. Both support Docker deployments from a Dockerfile or from a Git repository directly. Both provide PostgreSQL and Redis as managed services alongside container deployments.

Railway's developer experience is closer to Vercel's: connect a GitHub repository, define services, and deploy. The pricing is usage-based, which can be unpredictable for high-traffic applications but is often cheaper than fixed-tier pricing at low traffic. Railway's free tier was retired in 2023, but the Hobby plan at $5/month provides a reasonable starting point.

Render's pricing is tier-based rather than usage-based, which makes cost estimation easier. Render's free tier for web services spins down after 15 minutes of inactivity (cold starts of 30-60 seconds), which makes it inappropriate for production APIs but acceptable for background workers that run periodically rather than serving real-time requests.

For teams committed to Vercel for their Next.js frontend, Railway is the most natural addition for background workers. The Upstash Redis integration makes queue configuration straightforward — Upstash provides a serverless Redis endpoint that both the Vercel API routes (for enqueuing) and the Railway worker (for dequeuing) can access.
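One way to wire this up is a single shared module (a hypothetical `lib/queue.ts`) imported by both the Vercel routes and the Railway worker, with the same `REDIS_URL` (the Upstash endpoint) set in both platforms' environment settings:

```typescript
// lib/queue.ts — shared by Vercel (enqueue side) and Railway (worker side).
// REDIS_URL points at the same Upstash instance in both environments.
// Queue names here are illustrative.
import { Queue } from 'bullmq';
import { Redis } from 'ioredis';

// maxRetriesPerRequest: null is required by BullMQ's blocking commands
export const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,
});

export const emailQueue = new Queue('emails', { connection });
export const pdfQueue = new Queue('pdf-reports', { connection });
```

Because both sides import the same module, queue names and payload shapes can't drift between the enqueuer and the worker.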

Making the Decision

The decision tree for most SaaS teams in 2026: start on Vercel only, add Railway for workers when the first sustained background job requirement appears, and resist moving away from Vercel for the Next.js application itself unless you're dealing with extreme scale, specific latency requirements, or strict compliance constraints that serverless platforms can't satisfy.

The boilerplates that include Docker support (Makerkit, Epic Stack, Bedrock) do so for good reason — their users tend to build more complex products that need background processing from early stages. If you're evaluating a boilerplate for a product you expect to have background processing requirements, check specifically whether the Docker support includes a worker service configuration in the docker-compose file, not just a Dockerfile for the web service. Makerkit and Bedrock both include pre-configured worker services; ShipFast and T3 Stack leave that integration to you.

Migrating from Vercel-Only to Hybrid

When a product outgrows Vercel-only deployment, the migration is typically straightforward but requires planning. The key is extracting background processing logic into a separate module that can run in either a serverless context (for short jobs) or a persistent worker process (for longer jobs).

The most practical migration path starts by introducing a job queue before moving to containers. Upstash Redis provides a serverless-compatible Redis endpoint that works from both Vercel API routes and Railway containers. Teams often spend one sprint adding BullMQ job definitions and switching from direct function calls to job enqueueing, then a second sprint deploying the actual worker container on Railway. The two sprints can overlap — the worker can be deployed to Railway before the application fully uses it.

A common mistake during migration is treating the worker as a separate codebase. The worker consumes the same domain logic as the API routes — it just runs asynchronously. Keeping the worker in the same repository as the Next.js application simplifies code sharing, avoids managing two separate deployment pipelines, and ensures environment variables stay synchronized. Both Railway and Render support monorepo deployments where one repository defines multiple services with separate build commands.
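In practice, a single repository serving two processes often reduces to one package.json with two entry points (script names and the compiled worker path are illustrative); Vercel runs the Next.js scripts while Railway's service is pointed at the worker script via a custom start command:

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "worker": "node dist/worker/index.js"
  }
}
```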

For teams using TypeScript, sharing types and Prisma clients between the Next.js app and the worker is straightforward when they live in the same repo. For teams using a Turborepo structure (common in T3-turbo, Midday, and some Makerkit setups), the worker becomes another app in the monorepo. This is the architecture that scales cleanest as the background processing footprint grows. The Vercel plus Railway combination has become a well-documented deployment pattern in 2025, with community-maintained guides and shared environment variable conventions that reduce the integration overhead that early adopters encountered when the pattern was less established.

See the best boilerplates for Docker deployment and the Vercel vs Railway vs Render cost comparison for more detailed platform comparisons. The background jobs guide for SaaS covers the specific queue implementations in detail.

Both deployment approaches have matured significantly. The decision reduces to team expertise and operational requirements: Vercel for minimal ops overhead, Docker for maximum portability and control.
