Inngest vs BullMQ vs Trigger.dev for SaaS 2026

·StarterPick Team

Background Jobs: Required Infrastructure for Production SaaS

Every production SaaS needs background processing: sending emails asynchronously, processing uploaded files, syncing with external APIs, generating reports, running scheduled tasks. These operations cannot happen in the request-response cycle — they take too long and failures would break the user experience.

In 2026, three options dominate for Next.js SaaS:

  • Inngest — serverless background jobs with durable functions
  • BullMQ — Redis-backed job queues (requires Redis infrastructure)
  • Trigger.dev — managed background jobs with real-time observability

TL;DR

  • Inngest: Use for serverless-native Next.js apps (Vercel). No Redis needed. Generous free tier.
  • BullMQ: Use when you have Redis and need maximum performance or complex job topologies.
  • Trigger.dev: Use for long-running AI jobs, workflows with many steps, and production observability.

Key Takeaways

  • Inngest runs on any serverless platform — zero infrastructure to manage
  • BullMQ requires Redis (Upstash or self-hosted) — more infrastructure but maximum performance
  • Trigger.dev is purpose-built for AI/LLM workflows that run for minutes or hours
  • Inngest free tier: 50K function runs/month — sufficient for most SaaS in early stages
  • Trigger.dev is used by Midday v1 — the open-source boilerplate runs it in production
  • Scheduled jobs (cron) are supported by all three

Inngest: Serverless Background Jobs

Inngest is the background job system that requires no infrastructure — it runs as a handler in your Next.js API route.

npm install inngest

Setup

// inngest/client.ts
import { Inngest } from 'inngest';

export const inngest = new Inngest({ id: 'my-saas' });

// inngest/functions.ts
import { inngest } from './client';
// Embedding helpers from the Vercel AI SDK (used in step 2 below):
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
// `db`, `extractTextFromPdf`, and `sendEmail` are app-specific helpers.

// A durable background function:
export const processDocument = inngest.createFunction(
  { id: 'process-document', name: 'Process Document Upload' },
  { event: 'document/uploaded' },
  async ({ event, step }) => {
    const { documentId, userId } = event.data;

    // Step 1: Extract text
    const text = await step.run('extract-text', async () => {
      const doc = await db.document.findUnique({ where: { id: documentId } });
      return extractTextFromPdf(doc.url);
    });

    // Step 2: Generate embeddings (if this fails, retries from here)
    const embedding = await step.run('generate-embedding', async () => {
      const { embedding } = await embed({
        model: openai.embedding('text-embedding-3-small'),
        value: text,
      });
      return embedding;
    });

    // Step 3: Store in database
    await step.run('store-embedding', async () => {
      await db.document.update({
        where: { id: documentId },
        data: { embedding, processedAt: new Date() },
      });
    });

    // Step 4: Notify user
    await step.run('notify-user', async () => {
      await sendEmail({
        to: userId,
        subject: 'Document processed',
        body: 'Your document is ready for search.',
      });
    });

    return { success: true };
  }
);

// Scheduled function (cron):
export const dailyDigest = inngest.createFunction(
  { id: 'daily-digest' },
  { cron: '0 9 * * *' },  // 9am daily
  async ({ step }) => {
    const users = await step.run('get-active-users', async () => {
      return db.user.findMany({ where: { emailDigest: true } });
    });

    await step.run('send-digests', async () => {
      await Promise.all(users.map(u => sendDigestEmail(u)));
    });
  }
);

// app/api/inngest/route.ts
import { serve } from 'inngest/next';
import { inngest } from '@/inngest/client';
import { processDocument, dailyDigest } from '@/inngest/functions';

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [processDocument, dailyDigest],
});

// Triggering a function from your app:
await inngest.send({
  name: 'document/uploaded',
  data: { documentId, userId },
});

BullMQ: Redis-Backed Performance

BullMQ is the modern job queue built on Redis. It is the fastest option and supports complex job patterns (priorities, rate limiting, job dependencies).

npm install bullmq ioredis

// lib/queue.ts
import { Queue, Worker } from 'bullmq';
import { Redis } from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });

// Define queues:
export const emailQueue = new Queue('email', { connection });
export const documentQueue = new Queue('document-processing', { connection });

// Define workers (run these in a separate, long-lived worker process — not in a serverless function):
export const emailWorker = new Worker(
  'email',
  async (job) => {
    const { to, subject, body } = job.data;
    await sendEmail({ to, subject, body });
  },
  {
    connection,
    concurrency: 10,  // Process 10 jobs simultaneously
  }
);

export const documentWorker = new Worker(
  'document-processing',
  async (job) => {
    const { documentId } = job.data;
    await processDocumentJob(documentId);
  },
  {
    connection,
    concurrency: 5,
  }
);

// Error handling:
documentWorker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err.message);
  // Send to Sentry, etc.
});

// Adding jobs to the queue:
// From any API route or server action:
await emailQueue.add(
  'welcome-email',
  { to: user.email, subject: 'Welcome!', body: welcomeBody },
  {
    delay: 0,
    attempts: 3,
    backoff: { type: 'exponential', delay: 2000 },
  }
);

// Scheduled job (requires separate cron setup):
await documentQueue.add(
  'process-pending',
  { batchSize: 50 },
  {
    repeat: { pattern: '*/5 * * * *' },  // Every 5 minutes
  }
);

BullMQ requirement: You need Redis. Use Upstash Redis (free tier: 10K commands/day; $0.20/100K after) or self-host.
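Because BullMQ workers need a persistent Node.js process, they are typically run from a dedicated entry point deployed to Railway, Fly.io, or a VPS. A minimal sketch of such an entry point (file name, queue name, and env variable are illustrative, following the setup above):

```typescript
// worker.ts — run as a persistent process (`node dist/worker.js`),
// separate from your Next.js deployment.
import { Worker } from 'bullmq';
import { Redis } from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null, // required by BullMQ workers
});

const worker = new Worker(
  'email',
  async (job) => {
    console.log(`Processing ${job.name} (${job.id})`);
    // ...send the email here
  },
  { connection, concurrency: 10 }
);

// Graceful shutdown: let in-flight jobs finish before the process exits.
process.on('SIGTERM', async () => {
  await worker.close();
  await connection.quit();
  process.exit(0);
});
```

Hosting providers send SIGTERM on redeploys, so the shutdown handler prevents jobs from being killed mid-execution (they would otherwise be retried after the stalled-job timeout).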


Trigger.dev: Long-Running AI Jobs

Trigger.dev is built for AI workflows that run for minutes or hours — perfect for LLM pipelines, multi-step automations, and anything that hits third-party APIs.

npm install @trigger.dev/sdk

// trigger/process-document.ts
import { task, logger } from '@trigger.dev/sdk/v3';

export const processDocumentTask = task({
  id: 'process-document',
  maxDuration: 300,  // 5 minutes max
  retry: {
    maxAttempts: 3,
    factor: 2,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 30000,
  },
  run: async (payload: { documentId: string; userId: string }) => {
    logger.info('Processing document', { documentId: payload.documentId });

    // Step 1: Download document
    const doc = await db.document.findUnique({ where: { id: payload.documentId } });
    const text = await downloadAndExtract(doc.url);

    logger.info('Text extracted', { length: text.length });

    // Step 2: AI analysis (may take 30-60 seconds for large docs)
    const analysis = await analyzeWithAI(text);

    // Step 3: Store results
    await db.document.update({
      where: { id: payload.documentId },
      data: { analysis, processedAt: new Date() },
    });

    // Step 4: Notify user
    await sendEmail({
      to: payload.userId,
      subject: 'Document analysis complete',
    });

    return { success: true, analysisLength: JSON.stringify(analysis).length };
  },
});

// Triggering the task from an API route:
import { processDocumentTask } from '@/trigger/process-document';

export async function POST(req: Request) {
  const { documentId } = await req.json();
  const session = await auth();

  const handle = await processDocumentTask.trigger({
    documentId,
    userId: session.user.id,
  });

  return Response.json({ runId: handle.id });
}

Trigger.dev provides a real-time dashboard showing each step, its duration, inputs, outputs, and any errors. This observability is Trigger.dev's key differentiator.


Comparison Table

| Feature | Inngest | BullMQ | Trigger.dev |
| --- | --- | --- | --- |
| Infrastructure | None (serverless) | Redis required | None (managed) |
| Serverless compatible | Yes | Partial | Yes |
| Max job duration | 15 min (Vercel) | Unlimited | Hours |
| Dashboard | Yes | Third-party | Yes (excellent) |
| Steps/checkpoints | Yes | No | Yes |
| Cron scheduling | Yes | Yes | Yes |
| Free tier | 50K runs/month | Redis cost only | 10K runs/month |
| Paid tier | $50/mo | Redis cost | $50/mo |
| Best for | Serverless, simple | Performance, complex | AI/LLM, long-running |

Which Boilerplates Use What?

| Boilerplate | Background Jobs |
| --- | --- |
| ShipFast | None (add manually) |
| OpenSaaS (Wasp) | Built-in (PgBoss) |
| Makerkit | Inngest (plugin) |
| Midday v1 | Trigger.dev |
| T3 Stack | None (add manually) |

Decision Guide

Choose Inngest if:
  → Deploying to Vercel serverless
  → Don't want Redis infrastructure
  → Simple to medium job complexity
  → Generous free tier is sufficient

Choose BullMQ if:
  → Already have Redis (Upstash works)
  → Need maximum throughput
  → Complex job topologies (dependencies, priorities)
  → Long-running workers in a separate process

Choose Trigger.dev if:
  → AI/LLM workflows (long-running)
  → Need excellent observability/debugging
  → Multi-step workflows with retry per step
  → Using Midday v1 (it's pre-configured)

Methodology

Based on publicly available documentation from Inngest, BullMQ, and Trigger.dev, and boilerplate analysis as of March 2026.


Common Mistakes in Background Job Architecture

Background job systems introduce a category of bugs that are hard to reproduce and slow to diagnose. The most common patterns in SaaS codebases:

The idempotency problem: a job fails halfway through and retries from the beginning — re-running the first half a second time. If "send welcome email + create Stripe customer" is one job, a failure after the email sends but before the Stripe call means the retry sends a second welcome email. The solution is designing jobs to be idempotent — safe to run multiple times. For Inngest, this means using step.run() for each operation individually, since Inngest's step system checkpoints between steps and retries only from the last failed step. For BullMQ, this means checking whether an operation was already completed before performing it.
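The check-before-perform pattern can be sketched as follows. All names here (StepLedger, onboardUser, the effect callbacks) are illustrative; in production the ledger would be a database table keyed by job ID and step name, not an in-memory set:

```typescript
// A ledger recording which steps of which jobs have completed.
// Stands in for a `job_steps` database table.
class StepLedger {
  private done = new Set<string>();
  alreadyDone(jobId: string, step: string): boolean {
    return this.done.has(`${jobId}:${step}`);
  }
  markDone(jobId: string, step: string): void {
    this.done.add(`${jobId}:${step}`);
  }
}

// A job body that is safe to retry: each side effect is guarded by a
// ledger check, so a retry after a mid-job crash skips completed work.
async function onboardUser(
  jobId: string,
  ledger: StepLedger,
  effects: {
    sendWelcomeEmail: () => Promise<void>;
    createStripeCustomer: () => Promise<void>;
  }
): Promise<void> {
  if (!ledger.alreadyDone(jobId, 'welcome-email')) {
    await effects.sendWelcomeEmail();
    ledger.markDone(jobId, 'welcome-email');
  }
  if (!ledger.alreadyDone(jobId, 'stripe-customer')) {
    await effects.createStripeCustomer();
    ledger.markDone(jobId, 'stripe-customer');
  }
}
```

If the Stripe call throws on the first run, the retry skips the already-recorded email step and only retries the Stripe step — exactly what Inngest's step.run() gives you automatically.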

The fanout problem: a job that creates N child jobs is a common pattern for batch processing. If the parent job runs successfully but 30% of the child jobs fail, you need visibility into which specific items failed without re-running the whole batch. Trigger.dev's per-step observability handles this naturally. For Inngest and BullMQ, implement explicit tracking of individual item success/failure in your database, not just job-level status.
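A minimal sketch of per-item tracking (names are illustrative; in production the results would be written to a database table so failed items can be re-queued individually):

```typescript
type ItemResult =
  | { id: string; status: 'ok' }
  | { id: string; status: 'failed'; error: string };

// Run the batch, recording a per-item outcome instead of a single
// job-level pass/fail. Promise.allSettled never rejects, so one bad
// item cannot mask the results of the others.
async function processBatch(
  ids: string[],
  processItem: (id: string) => Promise<void>
): Promise<ItemResult[]> {
  const settled = await Promise.allSettled(ids.map((id) => processItem(id)));
  return settled.map((result, i) =>
    result.status === 'fulfilled'
      ? { id: ids[i], status: 'ok' as const }
      : { id: ids[i], status: 'failed' as const, error: String(result.reason) }
  );
}

// The items that still need work are exactly the failed ones:
function failedIds(results: ItemResult[]): string[] {
  return results.filter((r) => r.status === 'failed').map((r) => r.id);
}
```

A retry pass then calls processBatch(failedIds(results), processItem) rather than re-running the whole batch.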

The long-running job timeout problem: Vercel serverless functions have a 60-second default timeout (300 seconds on the Pro plan). Any background job that runs longer than this window will be killed mid-execution. Inngest handles this by breaking jobs into steps and resuming across function invocations — each step completes within the timeout, and the overall job can run for hours. BullMQ workers run in a persistent process and have no timeout limitation. If you're on Vercel and need jobs longer than 5 minutes, Inngest or Trigger.dev are the only viable options without spinning up a separate long-running process.

Observability and Debugging in Production

All three tools differ significantly in their observability story — how easy it is to understand what happened when something goes wrong.

Inngest's dashboard shows function runs with their events, steps, and retry history. When a job fails, you can see exactly which step failed, the input that caused the failure, and the error. Inngest also provides a local development server (Dev Server) that lets you test functions locally with a visual interface before deploying — a significant debugging advantage over BullMQ's more manual local testing.

BullMQ has no built-in dashboard — you add Bull Board or Arena as a third-party dashboard. These dashboards show queue depths, job states (waiting, active, completed, failed), and job data. They work but require separate deployment and configuration. The Redis-native approach means you can also inspect jobs directly with Redis CLI, which can be faster for debugging specific issues.
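Wiring up Bull Board is a small Express setup. A sketch, assuming the queues from lib/queue.ts above (the base path and port are illustrative; the @bull-board package names are real):

```typescript
// dashboard.ts — run alongside your workers on the persistent server.
import express from 'express';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { emailQueue, documentQueue } from './lib/queue';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

// Register each BullMQ queue with the dashboard:
createBullBoard({
  queues: [new BullMQAdapter(emailQueue), new BullMQAdapter(documentQueue)],
  serverAdapter,
});

const app = express();
// Put real authentication in front of this route in production.
app.use('/admin/queues', serverAdapter.getRouter());
app.listen(3001);
```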

Trigger.dev's observability is the most detailed of the three: every function run shows a timeline of each step with its duration, input payload, output payload, and any errors. For AI/LLM jobs where each step might be an API call to OpenAI or Anthropic, seeing the exact prompt and response for a failed step is enormously valuable. This observability is Trigger.dev's clearest differentiator.

Scheduling and Cron Job Patterns

All three tools support scheduled jobs (cron), but the implementation patterns and reliability characteristics differ significantly.

Inngest's cron support uses the { cron: '0 9 * * *' } event trigger syntax — the same POSIX cron format used by most cron tools. Inngest executes scheduled functions from its infrastructure, meaning your cron schedule doesn't depend on your Vercel deployment being "warmed up" or a separate process being running. The function receives no event payload (no data is passed to a cron-triggered function), so you fetch the data you need inside the function steps. The Inngest dashboard shows the last run time, next run time, and history of cron job executions — useful for verifying that scheduled jobs are actually running as expected.

BullMQ's repeat/cron feature adds scheduled jobs to the queue: queue.add('job-name', payload, { repeat: { pattern: '0 9 * * *' } }). This works reliably when the worker process is running, but requires that your worker process be always on. For Vercel serverless deployments, this means adding a separate persistent server (Railway, Fly.io, or a VPS) specifically to host BullMQ workers — cron jobs can't run in serverless functions without a persistent process to maintain the schedule.

Trigger.dev scheduled tasks use a different model: you define a schedules.task that Trigger.dev's infrastructure invokes on the configured schedule. This eliminates the need for a persistent process and gives you the same full-run observability for scheduled jobs as for event-triggered jobs. Trigger.dev's scheduling is particularly well-suited for jobs that need to run for more than a few minutes, since the managed infrastructure handles the timeout concerns that affect Vercel-hosted cron solutions.
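A declarative scheduled task with the v3 SDK looks like this sketch (the task body is illustrative; the schedules.task shape follows Trigger.dev's v3 API):

```typescript
// trigger/daily-digest.ts
import { schedules } from '@trigger.dev/sdk/v3';

export const dailyDigest = schedules.task({
  id: 'daily-digest',
  cron: '0 9 * * *', // 09:00 UTC daily, invoked by Trigger.dev's infrastructure
  run: async (payload) => {
    // The payload carries schedule metadata (e.g. the scheduled timestamp),
    // not application data — fetch what you need inside the task.
    console.log('Digest run scheduled for', payload.timestamp);
    // ...fetch users and send digest emails here
  },
});
```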

For most early-stage SaaS on Vercel, Inngest's cron support is the simplest path: no additional infrastructure, runs from the same Next.js deployment, and has adequate observability for monitoring scheduled jobs. If you grow to the point where cron jobs need to run on dedicated persistent infrastructure, BullMQ's feature set becomes more competitive. The practical rule: choose your background job tool based on your deployment infrastructure, not on anticipated future requirements. Inngest for serverless-first, BullMQ when you have or want a persistent process, Trigger.dev when observability and long-running AI workflows are primary concerns.


Which Background Job Tool Belongs in Your SaaS Boilerplate Stack

The decision between Inngest, BullMQ, and Trigger.dev is largely determined by two variables: your deployment target and the complexity of your job workflows.

If you're deploying to Vercel (serverless): BullMQ is the wrong choice unless you add a separate persistent process. Vercel functions terminate after the request completes, which means BullMQ workers — which require a persistent Node.js process — cannot run on Vercel directly. You would need Railway, Fly.io, or a separate VPS specifically to host the worker. For most early-stage SaaS teams, adding and maintaining that infrastructure isn't worth the trade-off. Inngest and Trigger.dev are both designed for serverless environments: they handle the persistence and retry logic from their own infrastructure, and your Next.js functions just call into them.

If you have a persistent server (Node.js on Railway, Fly.io, Render): All three tools work well. BullMQ becomes a serious option because you can run long-lived workers without serverless timeout concerns. The Redis dependency is manageable — Upstash Redis provides a managed Redis-compatible store without a self-managed server. At this point, the decision shifts to observability and workflow complexity: BullMQ for maximum control and Redis-native integration, Inngest for developer experience and local testing tooling, Trigger.dev for AI workloads with detailed per-step observability.

If you're building AI-heavy workflows: Trigger.dev's per-step execution model directly addresses the most common AI job failure mode: LLM API calls that timeout, return errors, or need to be retried with different inputs. Running a RAG pipeline (document ingestion → chunking → embedding → vector storage → index update) as a Trigger.dev function gives you visibility into exactly which step failed and what the LLM returned, rather than just "job failed at step 3." For SaaS products where AI pipelines are core functionality, this observability is worth the added dependency.

For boilerplate-first SaaS projects: The simplest path is to choose Inngest, because it's the most likely to already be configured in your starting template. ShipFast, Makerkit, and most Next.js boilerplates that include background job infrastructure use Inngest. Starting with a known-working configuration reduces setup friction significantly — getting a background job tool running from scratch, with correct retry logic, dead letter queues, and idempotency, takes 3-6 hours even with good documentation. When it's already in the boilerplate, you start writing job logic immediately.

Reserve the decision to evaluate BullMQ or Trigger.dev for when your specific requirements (persistent process, AI observability) make the switch clearly justified. The switching cost between tools is meaningful — job definitions, retry logic, and observability configurations are all tool-specific — so choose the background job tool that matches your deployment architecture from the start, and plan to stay with it unless a concrete requirement forces a change. The cost of an unnecessary migration is typically 1-2 weeks of engineering time that could be building product features instead.



Building a SaaS with background jobs? StarterPick helps you find boilerplates pre-configured with the right job infrastructure for your needs.

The background job tool you choose becomes part of your production operations — monitoring, alerting, and debugging workflows all depend on what observability the tool provides. Choose based on your deployment architecture first, and your observability requirements second.

See background jobs in context of the full SaaS stack: Ideal tech stack for SaaS in 2026.

Find boilerplates pre-configured with job infrastructure: Best SaaS boilerplates 2026.

Compare AI-ready boilerplates that use Trigger.dev for LLM workflows: Best boilerplates for AI wrapper SaaS 2026.

Find the right background job infrastructure in your stack: Next.js SaaS tech stack guide 2026.

The SaaS Boilerplate Matrix (Free PDF)

20+ SaaS starters compared: pricing, tech stack, auth, payments, and what you actually ship with. Updated monthly. Used by 150+ founders.

Join 150+ SaaS founders. Unsubscribe in one click.