
File Upload in SaaS Boilerplates 2026

StarterPick Team

TL;DR

UploadThing for most boilerplates: TypeScript-first, integrates seamlessly with Next.js, generous free tier. Cloudflare R2 for cost-sensitive apps at scale (zero egress fees). Cloudinary when image transformation (resize, crop, format conversion) is a core product feature. AWS S3 only when deep AWS integration or compliance requires it.

The Upload Landscape

| Service | Free Tier | Paid | Type-Safe | Image Transform | Egress |
|---|---|---|---|---|---|
| UploadThing | 2GB/month | $10/mo | ✅ TypeScript | ❌ | Included |
| Cloudflare R2 | 10GB/month | $0.015/GB | Via SDK | ❌ | Free |
| Cloudinary | 25GB storage | $89/mo | Partial | ✅ Excellent | Included |
| AWS S3 | 5GB/12mo | $0.023/GB | Via SDK | ❌ | $0.09/GB |
| Supabase Storage | 1GB | $25/mo | Via SDK | ❌ | Included |

UploadThing: The Developer-First Choice

UploadThing is built specifically for Next.js. The API is TypeScript-native, the SDK handles chunked uploads, and the configuration is minimal.

// lib/uploadthing.ts — define upload routes
import { createUploadthing, type FileRouter } from 'uploadthing/next';
import { auth } from '@clerk/nextjs/server';
import { prisma } from '@/lib/prisma'; // your Prisma client instance

const f = createUploadthing();

export const ourFileRouter = {
  // Profile avatar — authenticated, max 4MB image
  profileAvatar: f({ image: { maxFileSize: '4MB' } })
    .middleware(async () => {
      const { userId } = await auth();
      if (!userId) throw new Error('Unauthorized');
      return { userId };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      // Update user avatar in database
      await prisma.user.update({
        where: { id: metadata.userId },
        data: { avatarUrl: file.url },
      });
      return { url: file.url };
    }),

  // Document upload — PDF, max 16MB
  document: f({ pdf: { maxFileSize: '16MB' } })
    .middleware(async () => {
      const { userId } = await auth();
      if (!userId) throw new Error('Unauthorized');
      return { userId };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      await prisma.document.create({
        data: { userId: metadata.userId, url: file.url, name: file.name }
      });
    }),
} satisfies FileRouter;
// app/api/uploadthing/route.ts
import { createRouteHandler } from 'uploadthing/next';
import { ourFileRouter } from '@/lib/uploadthing';

export const { GET, POST } = createRouteHandler({ router: ourFileRouter });
// Client component — upload button
'use client';
import { UploadButton } from '@uploadthing/react';
import { toast } from 'sonner'; // or your toast library of choice

export function AvatarUpload({ onUpload }: { onUpload: (url: string) => void }) {
  return (
    <UploadButton
      endpoint="profileAvatar"
      onClientUploadComplete={(res) => onUpload(res[0].url)}
      onUploadError={(error) => toast.error(error.message)}
    />
  );
}

That's the entire implementation. Auth, chunked upload, database update, file type validation — all configured in ~30 lines.


Cloudflare R2: Cost-Optimized Storage

R2 is S3-compatible with zero egress fees — the main cost differentiator at scale.

// lib/r2.ts
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Generate presigned URL for direct browser upload
export async function getUploadUrl(key: string, contentType: string) {
  const command = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME!,
    Key: key,
    ContentType: contentType,
  });

  return getSignedUrl(r2, command, { expiresIn: 300 });  // 5 minutes
}

// Generate presigned URL for download
export async function getDownloadUrl(key: string) {
  const command = new GetObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME!,
    Key: key,
  });

  return getSignedUrl(r2, command, { expiresIn: 3600 });  // 1 hour
}
// app/api/upload/route.ts — generate upload URL
import { auth } from '@clerk/nextjs/server';
import { getUploadUrl } from '@/lib/r2';

export async function POST(req: Request) {
  const { userId } = await auth();
  if (!userId) return new Response('Unauthorized', { status: 401 });

  const { fileName, contentType } = await req.json();
  const key = `uploads/${userId}/${Date.now()}-${fileName}`;
  const uploadUrl = await getUploadUrl(key, contentType);

  return Response.json({ uploadUrl, key });
}
// Client — upload directly to R2 from browser
async function uploadFile(file: File) {
  const { uploadUrl, key } = await fetch('/api/upload', {
    method: 'POST',
    body: JSON.stringify({ fileName: file.name, contentType: file.type }),
  }).then(r => r.json());

  await fetch(uploadUrl, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type },
  });

  // Notify backend that upload is complete
  await fetch('/api/upload/complete', {
    method: 'POST',
    body: JSON.stringify({ key, fileName: file.name }),
  });
}
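One detail the route above glosses over: user-supplied file names can contain path separators or other characters that break object keys. A minimal sanitizer — a hypothetical helper, not part of any SDK — is worth running before key construction:

```typescript
// Hypothetical helper: strip path components and unsafe characters
// from a user-supplied file name before embedding it in an object key.
export function sanitizeFileName(name: string): string {
  const base = name.split(/[/\\]/).pop() ?? 'file'; // drop any path components
  return base
    .replace(/[^a-zA-Z0-9._-]/g, '_') // allow a conservative character set
    .replace(/_{2,}/g, '_')           // collapse runs of underscores
    .slice(0, 200);                   // keep keys well under key-length limits
}
```

Applied in the API route, the key becomes `uploads/${userId}/${Date.now()}-${sanitizeFileName(fileName)}`.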

R2 pricing: $0.015/GB storage, $0 egress. At 100GB of storage ($1.50/month on R2 vs $2.30 on S3), the storage gap is modest — but with heavy download traffic it widens fast, since every 100GB downloaded costs $9 on S3 and nothing on R2.


Cloudinary: When Transformations Matter

Cloudinary excels when you need server-side image transformations:

import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});

// Upload and generate transformations
const result = await cloudinary.uploader.upload(filePath, {
  folder: 'product-images',
  public_id: `product-${productId}`,
  transformation: [
    { width: 800, height: 800, crop: 'fill' },  // Square crop
    { quality: 'auto', fetch_format: 'auto' },   // Auto WebP/AVIF
  ],
});

// URL-based transformations — no pre-processing needed
const thumbnailUrl = cloudinary.url(result.public_id, {
  width: 150,
  height: 150,
  crop: 'thumb',
  gravity: 'face',  // Smart face detection for avatars
});

const heroUrl = cloudinary.url(result.public_id, {
  width: 1200,
  quality: 80,
  fetch_format: 'auto',
});

Choose Cloudinary when:

  • Products selling physical goods (need multiple image sizes, crops)
  • Avatar/profile photos (smart cropping, face detection)
  • Content platforms with user-generated images
  • Need real-time image transformations via URL parameters

Boilerplate Upload Defaults

| Boilerplate | File Upload | Default Provider |
|---|---|---|
| ShipFast | ✅ | UploadThing |
| Supastarter | ✅ | Supabase Storage |
| Makerkit | ✅ | Supabase / Firebase Storage |
| T3 Stack | ❌ (add yourself) | Community: UploadThing |
| Open SaaS | ✅ | AWS S3 |

Decision Guide

Starting a new Next.js SaaS?
  → UploadThing (least setup, TypeScript-native)

High-volume storage where egress costs matter?
  → Cloudflare R2 (zero egress fees)

Need image transformations (resize, crop, format)?
  → Cloudinary

Already on Supabase and want everything in one place?
  → Supabase Storage

Need compliance (SOC2, HIPAA) or enterprise features?
  → AWS S3 (most mature compliance certifications)


UploadThing vs Supabase Storage: The Full-Stack Choice

When your boilerplate already uses Supabase for the database and auth, adding Supabase Storage avoids a separate service dependency. The comparison:

Supabase Storage: Built into your existing Supabase project, same dashboard, same Row Level Security policies as your database. The TypeScript client uses the same pattern as your database client.

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Upload file — authenticated via Supabase session
const { data, error } = await supabase.storage
  .from('avatars')
  .upload(`${userId}/avatar.jpg`, file, {
    cacheControl: '3600',
    upsert: true,
  });

// Get public URL
const { data: { publicUrl } } = supabase.storage
  .from('avatars')
  .getPublicUrl(`${userId}/avatar.jpg`);

Supabase Storage is the right choice when you're already on Supabase and want to minimize service count. The free tier (1GB) is limited; growth-stage products will need the $25/month plan.

UploadThing: Better TypeScript DX for Next.js specifically, handles chunked uploads automatically, file type validation is declarative, and the dashboard is cleaner for debugging upload issues. The $10/month paid tier includes 10GB.

For new Next.js projects not already on Supabase: UploadThing. For Supabase-based projects: Supabase Storage.


Virus Scanning and Content Moderation

User-uploaded files introduce a security and compliance risk: malicious file uploads and inappropriate content. For most early-stage SaaS, this risk is acceptable — but for products where users upload content that other users see (community platforms, marketplaces), it matters.

Virus scanning: Cloudflare R2 and UploadThing don't include automatic virus scanning. AWS S3 can be connected to scanners like ClamAV or commercial services. For regulated industries (healthcare document uploads, financial file storage), virus scanning may be a compliance requirement.

Content moderation: Cloudinary's Content Moderation add-on can automatically flag or block images that contain adult content, violence, or other policy violations. This is relevant for user-generated content platforms.

File type validation: every service covered here supports server-side file type validation. Never trust client-side MIME type declarations — validate the actual file bytes server-side. UploadThing's middleware validates file types before the upload completes.
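As a concrete sketch of byte-level validation (the signature table and function names here are illustrative, not from any particular library), checking a file's magic number against its declared MIME type looks like this:

```typescript
// Standard magic numbers for a few common upload formats.
const SIGNATURES: Record<string, number[]> = {
  'image/png': [0x89, 0x50, 0x4e, 0x47],
  'image/jpeg': [0xff, 0xd8, 0xff],
  'application/pdf': [0x25, 0x50, 0x44, 0x46], // "%PDF"
};

// Returns true only if the file's leading bytes match the declared type.
export function matchesDeclaredType(bytes: Uint8Array, declared: string): boolean {
  const sig = SIGNATURES[declared];
  if (!sig) return false; // unknown type — reject by default
  return sig.every((byte, i) => bytes[i] === byte);
}
```

Run this against the first bytes of the uploaded object before accepting it, regardless of what `Content-Type` the client sent.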

For a standard B2B SaaS where users upload their own business documents (invoices, reports, screenshots), these concerns are minimal. For consumer platforms with user-generated content, build in content moderation from the start rather than retrofitting it.


Organizing File Storage at Scale

Upload key structure determines how well your storage scales and how easy it is to manage files:

// ❌ Flat structure — hard to manage at scale
const key = `avatar-${userId}-${Date.now()}.jpg`;
// Results in millions of files in one "directory"

// ✅ Structured keys by resource type and owner
const avatar = `users/${userId}/avatar.jpg`;              // User avatar (1 per user)
const document = `orgs/${orgId}/docs/${docId}/${fileName}`;  // Org document
const asset = `projects/${projectId}/assets/${assetId}`;    // Project asset

// Benefits:
// - Easy to list all files for a user (prefix query: users/${userId}/)
// - Easy to delete all files when account is deleted
// - Clear ownership in audit logs
// - Cost attribution by prefix possible

The structured key approach also makes lifecycle policies tractable — delete all files for users who deleted their account, or move old documents to cheaper storage tiers, using prefix-based policies.
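A small module that centralizes key construction keeps these prefixes consistent across the codebase — a sketch, with illustrative names:

```typescript
// Single source of truth for object key layout.
// Prefix helpers mirror the key builders, so list/delete
// operations always target exactly the keys that were written.
export const storageKeys = {
  userAvatar: (userId: string) => `users/${userId}/avatar.jpg`,
  userPrefix: (userId: string) => `users/${userId}/`, // for prefix list/delete
  orgDoc: (orgId: string, docId: string, fileName: string) =>
    `orgs/${orgId}/docs/${docId}/${fileName}`,
  orgPrefix: (orgId: string) => `orgs/${orgId}/`,
};
```

Because every key goes through one module, a prefix-scoped deletion or lifecycle rule can never miss files written under an ad-hoc key shape.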


Handling Large Files and Chunked Uploads

Standard browser uploads break on files over 100MB. The user's connection drops, the server times out, and the partial upload is lost. For products that accept video, high-resolution images, or large documents, chunked upload support is not optional.

UploadThing handles chunked uploads automatically. Files above a threshold are split client-side and reassembled on the server. Your application code doesn't change — the SDK manages the chunking transparently. The maximum supported file size is 2GB, which covers most SaaS use cases.

For R2 and S3, multipart upload must be explicitly implemented. The S3 multipart upload API requires initiating an upload, uploading each part, and completing the upload in separate API calls. Client libraries like Uppy handle this automatically but add frontend bundle size.

// R2 multipart upload — required for files > 100MB
// (sketch: assumes the R2 client from lib/r2.ts is exported, and runs
// wherever the file bytes are available)
import { CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand } from '@aws-sdk/client-s3';
import { r2 } from '@/lib/r2';

async function uploadLargeFile(file: File, key: string) {
  // 1. Initiate
  const { UploadId } = await r2.send(new CreateMultipartUploadCommand({
    Bucket: process.env.R2_BUCKET_NAME!,
    Key: key,
  }));

  // 2. Upload parts (5MB minimum per part)
  const chunkSize = 5 * 1024 * 1024;
  const parts = [];
  for (let i = 0; i < Math.ceil(file.size / chunkSize); i++) {
    const chunk = file.slice(i * chunkSize, (i + 1) * chunkSize);
    const { ETag } = await r2.send(new UploadPartCommand({
      Bucket: process.env.R2_BUCKET_NAME!,
      Key: key, UploadId, PartNumber: i + 1, Body: chunk,
    }));
    parts.push({ PartNumber: i + 1, ETag });
  }

  // 3. Complete
  await r2.send(new CompleteMultipartUploadCommand({
    Bucket: process.env.R2_BUCKET_NAME!,
    Key: key, UploadId,
    MultipartUpload: { Parts: parts },
  }));
}

For most SaaS products, UploadThing's automatic chunking is worth the dependency. Implementing multipart uploads for R2 is a non-trivial amount of code that needs to be tested across browser environments, file sizes, and network conditions.


Image Optimization Before Storage

Storing full-resolution user uploads verbatim wastes storage and bandwidth. A user uploading a 12MP phone photo (8-15MB) for an avatar shouldn't be stored and served at full resolution — a 400x400 JPEG at 80% quality (30-60KB) is the right delivery format.

For UploadThing, image processing happens after upload. The onUploadComplete callback receives the uploaded file URL and is the right place to trigger a resize job. The sharp library handles the actual compression:

// Process uploaded avatar asynchronously
.onUploadComplete(async ({ metadata, file }) => {
  // Trigger background job to resize
  await inngest.send({
    name: 'image/resize',
    data: { userId: metadata.userId, url: file.url, type: 'avatar' },
  });
  return { url: file.url };
})

Cloudinary handles this automatically through URL parameters — no post-processing job needed. This is the core Cloudinary advantage for image-heavy products: transformations are on-demand through URL construction, eliminating the resize pipeline entirely.

For R2 and S3, the pattern is: upload original → trigger Cloudflare Worker or Lambda to resize → store resized versions → serve resized URLs. More infrastructure, but full control over the transformation pipeline.


Cost Comparison at Scale

File storage costs compound significantly with user growth. At 10,000 users averaging 50MB of uploads each, that's 500GB of storage.

| Service | 500GB storage/month | 100GB egress/month | Total |
|---|---|---|---|
| UploadThing Pro | Included at $10/mo | Included | $10 |
| Cloudflare R2 | $7.50 | $0 (free egress) | $7.50 |
| AWS S3 | $11.50 | $9.00 | $20.50 |
| Cloudinary | ~$89/mo (25GB plan) | Included | $89+ |
| Supabase Storage | $25/mo (Pro) | Included | $25 |

At 500GB, UploadThing and Cloudflare R2 are essentially equivalent in cost. R2 pulls ahead at higher volumes due to zero egress fees. AWS S3's egress costs ($0.09/GB) become significant at download-heavy workloads. Cloudinary's transformation features come at a cost premium that's justified only when you actively use transformations.

The egress cost difference is the most important factor for products where users frequently download or view stored files. A document storage SaaS where users open files daily has much higher egress-to-storage ratios than a profile avatar use case. For high-egress products, R2 can be 5-10x cheaper than S3 at the same storage volume.
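Those ratios are easy to model. A back-of-envelope sketch using the list prices quoted above (storage $0.015/GB on R2 and $0.023/GB on S3; egress free on R2, $0.09/GB on S3):

```typescript
// Monthly cost = storage GB × storage rate + egress GB × egress rate
function monthlyCost(storageGB: number, egressGB: number, perGBStorage: number, perGBEgress: number): number {
  return storageGB * perGBStorage + egressGB * perGBEgress;
}

const r2Cost = (storageGB: number, egressGB: number) => monthlyCost(storageGB, egressGB, 0.015, 0);
const s3Cost = (storageGB: number, egressGB: number) => monthlyCost(storageGB, egressGB, 0.023, 0.09);

// A document-storage SaaS with 500GB stored and 500GB downloaded per month:
// R2 comes to roughly $7.50, S3 to roughly $56.50 — about 7.5x more.
```

As the egress-to-storage ratio grows, the multiple grows with it, which is where the 5-10x figure comes from.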


File Cleanup and Account Deletion

Files outliving users is a subtle but costly problem. When a user deletes their account, their database rows are deleted, but their S3 or R2 files remain. Over months, deleted accounts accumulate storage costs for files that will never be accessed again.

The fix requires connecting file deletion to account deletion:

// When user deletes account — clean up files
// (assumes the r2 client from lib/r2.ts and your prisma instance are in scope)
import { DeleteObjectsCommand } from '@aws-sdk/client-s3';

async function deleteUserAccount(userId: string) {
  // 1. List all files for this user
  const userFiles = await prisma.file.findMany({
    where: { userId },
    select: { storageKey: true },
  });

  // 2. Delete from storage
  if (userFiles.length > 0) {
    await r2.send(new DeleteObjectsCommand({
      Bucket: process.env.R2_BUCKET_NAME!,
      Delete: {
        Objects: userFiles.map(f => ({ Key: f.storageKey })),
      },
    }));
  }

  // 3. Delete user record (cascades to file records)
  await prisma.user.delete({ where: { id: userId } });
}

For UploadThing, file deletion uses the utapi.deleteFiles() method with the file keys, so you do need to track keys in your database. For R2 and S3, the structured key pattern described above makes cleanup simpler still — listing every object under a user's prefix is a single API call rather than a database lookup per file.

Storage lifecycle policies on R2 and S3 can also auto-delete objects after a configurable age, providing a safety net for files that slip through application-level cleanup.
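As a sketch of what such a policy looks like (the rule shape follows the S3 lifecycle API; the `exports/` prefix is a hypothetical example):

```typescript
// Lifecycle rule: expire objects under a temporary prefix after 90 days.
// This object shape is what PutBucketLifecycleConfigurationCommand from
// @aws-sdk/client-s3 accepts; R2 offers equivalent rules via its dashboard.
export const lifecycleRules = [
  {
    ID: 'expire-stale-exports',
    Filter: { Prefix: 'exports/' }, // only temporary export files
    Status: 'Enabled',
    Expiration: { Days: 90 },
  },
];
```

Scoping the rule to a prefix is what makes the structured key layout pay off: permanent files under `users/` or `orgs/` are never touched.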

Building file cleanup into account deletion from the start is significantly easier than retrofitting it later. When you have 10,000 users averaging 50MB each, a missing cleanup step means 500GB of orphaned storage accumulating indefinitely. Model file storage as a resource with an explicit owner from day one — the structured key pattern and the cleanup procedure above make this straightforward to implement alongside the initial upload integration.

Cloudflare R2's pricing stability and zero egress fees have made it the default recommendation for cost-conscious SaaS builders in 2026, particularly for products where users download files frequently — the egress cost difference versus S3 can be significant at moderate traffic volumes.

The upload infrastructure choice compounds over time: migrating from UploadThing to S3 after launch requires updating every upload endpoint, storage reference, and signed URL implementation. Make the right choice for your scale and cost profile before shipping, not after user data is already stored.


Find boilerplates with file upload solutions in our best open-source SaaS boilerplates guide.

See our guide to multi-tenancy patterns for isolating file storage by organization.

Compare full-stack TypeScript boilerplates with built-in file upload in our full-stack TypeScript boilerplates guide.
