
Add File Upload to Your SaaS Boilerplate 2026

By the StarterPick Team

TL;DR

UploadThing is the fastest path to file uploads in Next.js in 2026. It handles the S3 infrastructure, resumable uploads, and provides React components. For more control or existing AWS infrastructure, direct S3 with presigned URLs is the alternative. Setup time: UploadThing takes 1-2 hours; direct S3 takes half a day.


Choosing Your Upload Strategy

File upload in Next.js has three viable approaches in 2026, each with different tradeoffs:

Option 1: UploadThing — a managed file upload service built specifically for Next.js. It handles infrastructure (S3-backed), provides React hooks and components, and costs $10/month for 10GB storage. The fastest path from zero to working uploads.

Option 2: Direct S3 with presigned URLs — generate a short-lived upload URL server-side, then have the client upload directly to S3 without routing through your server. Most cost-effective at scale; requires AWS account setup.

Option 3: Server-side proxy — file goes through your server to S3. Simplest implementation but bad for performance (doubles the bandwidth cost, increases latency, ties up server resources). Avoid this pattern in production.

For most SaaS products in 2026, UploadThing is the right default unless you already have AWS infrastructure or need to control costs at high volume. The $10/month cost is worth the saved engineering time.


Boilerplates with File Upload Built-In

Before building from scratch, check your boilerplate:

| Boilerplate | File Upload | Storage | File Management UI |
|---|---|---|---|
| Makerkit | ✅ | Supabase Storage | ✅ Basic |
| Supastarter | ✅ | Supabase Storage | — |
| Open SaaS | — | — | — |
| ShipFast | ❌ Build it | — | — |
| T3 Stack | ❌ Build it | — | — |

If you're on Makerkit or Supastarter, you're configuring file type/size limits rather than building. If you're on ShipFast or T3, this guide has the complete implementation.


Option 1: UploadThing

Install the packages:

npm install uploadthing @uploadthing/react

Server Configuration

The file router defines what types of files are accepted and what happens after upload:

// app/api/uploadthing/core.ts
import { createUploadthing, type FileRouter } from 'uploadthing/next';
import { getServerSession } from 'next-auth';
import { prisma } from '@/lib/prisma'; // shared Prisma client used in the callbacks below

const f = createUploadthing();

export const ourFileRouter = {
  // Profile picture: image only, 4MB max
  profilePicture: f({ image: { maxFileSize: '4MB' } })
    .middleware(async () => {
      const session = await getServerSession();
      if (!session) throw new Error('Unauthorized');
      return { userId: session.user.id };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      await prisma.user.update({
        where: { id: metadata.userId },
        data: { image: file.url },
      });
      return { url: file.url };
    }),

  // Document upload: PDF/Word, 16MB max
  document: f({
    pdf: { maxFileSize: '16MB' },
    'application/msword': { maxFileSize: '16MB' },
    'application/vnd.openxmlformats-officedocument.wordprocessingml.document': { maxFileSize: '16MB' },
  })
    .middleware(async () => {
      const session = await getServerSession();
      if (!session) throw new Error('Unauthorized');
      return { userId: session.user.id, orgId: session.organization?.id };
    })
    .onUploadComplete(async ({ metadata, file }) => {
      await prisma.document.create({
        data: {
          userId: metadata.userId,
          organizationId: metadata.orgId,
          name: file.name,
          url: file.url,
          size: file.size,
        },
      });
      return { fileId: file.key };
    }),
} satisfies FileRouter;

export type OurFileRouter = typeof ourFileRouter;
// app/api/uploadthing/route.ts
import { createRouteHandler } from 'uploadthing/next';
import { ourFileRouter } from './core';

export const { GET, POST } = createRouteHandler({ router: ourFileRouter });

Client Upload Component

// components/ProfilePictureUpload.tsx
'use client';
import { UploadButton } from '@uploadthing/react';
import type { OurFileRouter } from '@/app/api/uploadthing/core';
import { useRouter } from 'next/navigation';

export function ProfilePictureUpload() {
  const router = useRouter();

  return (
    <UploadButton<OurFileRouter, 'profilePicture'>
      endpoint="profilePicture"
      onClientUploadComplete={(res) => {
        router.refresh(); // Refresh server component to show new image
      }}
      onUploadError={(error) => {
        alert(`Upload failed: ${error.message}`);
      }}
      appearance={{
        button: 'bg-indigo-600 hover:bg-indigo-700 text-white text-sm px-4 py-2 rounded-lg',
        allowedContent: 'text-gray-400 text-xs',
      }}
    />
  );
}

UploadThing's React components are pre-built and handle the upload progress UI, error states, and drag-and-drop. You can use UploadButton (button trigger), UploadDropzone (drag-and-drop area), or the useUploadThing hook for fully custom UI.
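For fully custom UI, the useUploadThing hook exposes the upload machinery without any pre-built markup. A minimal sketch, assuming the ourFileRouter defined above (the 'document' endpoint name comes from that router):

```typescript
// components/CustomDocumentUpload.tsx — hook-based upload, custom UI.
'use client';
import { generateReactHelpers } from '@uploadthing/react';
import type { OurFileRouter } from '@/app/api/uploadthing/core';

const { useUploadThing } = generateReactHelpers<OurFileRouter>();

export function CustomDocumentUpload() {
  const { startUpload, isUploading } = useUploadThing('document', {
    onClientUploadComplete: () => console.log('Upload complete'),
    onUploadError: (err) => console.error(err.message),
  });

  return (
    <input
      type="file"
      disabled={isUploading}
      onChange={(e) => {
        const files = Array.from(e.target.files ?? []);
        if (files.length > 0) void startUpload(files);
      }}
    />
  );
}
```

The hook gives you isUploading for your own loading states, so you can match the upload control to your design system instead of theming the stock components.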


Option 2: Direct S3 with Presigned URLs

For full control over storage, use presigned URLs to upload directly to S3:

// lib/s3.ts
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function getUploadUrl(
  key: string,
  contentType: string
): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: key,
    ContentType: contentType,
  });
  return getSignedUrl(s3, command, { expiresIn: 600 }); // 10 min expiry
}

export async function getDownloadUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: key,
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // 1 hour
}
// app/api/upload/presign/route.ts
import { getServerSession } from 'next-auth';
import { getUploadUrl } from '@/lib/s3';

export async function POST(req: Request) {
  const session = await getServerSession();
  if (!session) return new Response('Unauthorized', { status: 401 });

  const { filename, contentType, size } = await req.json();

  // Validate
  const MAX_SIZE = 10 * 1024 * 1024; // 10MB
  const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];

  if (size > MAX_SIZE) return new Response('File too large', { status: 400 });
  if (!ALLOWED_TYPES.includes(contentType)) return new Response('Invalid type', { status: 400 });

  // Generate unique key
  const ext = filename.split('.').pop();
  const key = `uploads/${session.user.id}/${Date.now()}-${Math.random().toString(36).slice(2)}.${ext}`;

  const uploadUrl = await getUploadUrl(key, contentType);

  return Response.json({ uploadUrl, key });
}
// Client-side: upload to S3 directly
async function uploadFile(file: File) {
  // 1. Get presigned URL from your API
  const { uploadUrl, key } = await fetch('/api/upload/presign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
      size: file.size,
    }),
  }).then(r => r.json());

  // 2. Upload directly to S3 (no server bandwidth)
  await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });

  // 3. Store the key in your DB
  await fetch('/api/files', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key, name: file.name }),
  });

  return key;
}

Database Model

model File {
  id             String    @id @default(cuid())
  userId         String
  organizationId String?
  name           String
  storageKey     String    // UploadThing file key or S3 object key (needed for deletion)
  url            String    // UploadThing URL; for S3, generate signed URLs from storageKey
  size           Int       // Bytes
  mimeType       String
  createdAt      DateTime  @default(now())
  deletedAt      DateTime? // Soft-delete marker, referenced by the quota query below

  user         User          @relation(fields: [userId], references: [id])
  organization Organization? @relation(fields: [organizationId], references: [id])

  @@index([organizationId])
  @@index([userId])
}

File Display with Signed URLs

For private files (user documents, internal assets), generate short-lived signed URLs rather than storing the public URL directly:

// components/FileList.tsx (server component)
// Assumes a shared Prisma client plus app-level FileIcon and formatBytes helpers
import { prisma } from '@/lib/prisma';

export async function FileList({ orgId }: { orgId: string }) {
  const files = await prisma.file.findMany({
    where: { organizationId: orgId },
    orderBy: { createdAt: 'desc' },
  });

  return (
    <ul className="divide-y divide-gray-200 dark:divide-gray-700">
      {files.map(file => (
        <li key={file.id} className="flex items-center justify-between py-3">
          <div className="flex items-center gap-3">
            <FileIcon className="w-5 h-5 text-gray-400" />
            <div>
              <p className="text-sm font-medium">{file.name}</p>
              <p className="text-xs text-gray-400">{formatBytes(file.size)}</p>
            </div>
          </div>
          <a
            href={file.url}
            target="_blank"
            rel="noopener noreferrer"
            className="text-indigo-600 text-sm hover:text-indigo-700"
          >
            Download
          </a>
        </li>
      ))}
    </ul>
  );
}

For UploadThing URLs, the URL is public by default — control access through your application layer (check auth before showing the URL). For S3, use presigned download URLs to keep files private at the storage level.
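An app-layer check can be as simple as verifying ownership before returning the URL. A sketch, assuming the Prisma File model and next-auth session used throughout this article:

```typescript
// app/api/files/[id]/route.ts — only reveal the URL to an authorized user.
import { getServerSession } from 'next-auth';
import { prisma } from '@/lib/prisma';

export async function GET(
  _req: Request,
  { params }: { params: { id: string } }
) {
  const session = await getServerSession();
  if (!session) return new Response('Unauthorized', { status: 401 });

  const file = await prisma.file.findUnique({ where: { id: params.id } });
  // Return 404 (not 403) for both missing and foreign files to avoid
  // leaking which file IDs exist
  if (!file || file.userId !== session.user.id) {
    return new Response('Not found', { status: 404 });
  }
  return Response.json({ url: file.url });
}
```

Extend the ownership check to organization membership for shared files.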


File Validation and Security

Server-side validation is non-negotiable — never trust client-side file type checks:

// Server-side MIME type detection (not just file extension)
import { fileTypeFromBuffer } from 'file-type';

async function validateFile(buffer: Buffer, declaredType: string) {
  const detected = await fileTypeFromBuffer(buffer);
  
  // Reject if detected type doesn't match declared type
  if (detected?.mime !== declaredType) {
    throw new Error('File type mismatch — possible malicious upload');
  }
  
  // Block dangerous types regardless
  const BLOCKED_TYPES = ['text/html', 'text/javascript', 'application/x-php'];
  if (BLOCKED_TYPES.includes(detected.mime)) {
    throw new Error('File type not allowed');
  }
}

For image uploads specifically, always process images through Sharp to strip EXIF metadata (which can contain GPS coordinates and other private data) before storing:

import sharp from 'sharp';

async function processImage(inputBuffer: Buffer): Promise<Buffer> {
  // sharp drops EXIF/metadata by default when re-encoding — avoid calling
  // .withMetadata(), which would preserve it
  return sharp(inputBuffer)
    .rotate()           // Auto-rotate based on EXIF orientation first
    .resize(2048, 2048, { fit: 'inside', withoutEnlargement: true })
    .jpeg({ quality: 85, mozjpeg: true })
    .toBuffer();
}

Handling Multiple File Types and Use Cases

Most SaaS products need file upload in more than one context: profile pictures, document attachments, bulk import CSVs, video uploads. Each has different constraints.

Profile/avatar images: Small files (under 4MB), always images, usually displayed at multiple sizes. Process server-side with Sharp to generate multiple sizes (thumbnails, medium, original). Store three copies in S3, cache aggressively via CloudFront or Vercel's CDN.
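Generating the size variants is a small loop over Sharp. A sketch, with illustrative pixel sizes (tune them to your UI):

```typescript
import sharp from 'sharp';

// Illustrative variant sizes — not a recommendation.
const VARIANTS = { thumb: 128, medium: 512 } as const;

async function generateAvatarSizes(
  input: Buffer
): Promise<Record<string, Buffer>> {
  const out: Record<string, Buffer> = { original: input };
  for (const [name, px] of Object.entries(VARIANTS)) {
    out[name] = await sharp(input)
      .resize(px, px, { fit: 'cover' }) // square crop suits avatars
      .jpeg({ quality: 85 })
      .toBuffer();
  }
  return out; // upload each under a size-suffixed key, e.g. `${key}-thumb.jpg`
}
```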

Document attachments: PDFs, spreadsheets, Word docs. Users care about the original file, not processed versions. Store the original, generate a PDF preview for in-browser viewing. Virus scanning (ClamAV or AWS Macie) is worth considering for user-uploaded documents.

CSV/spreadsheet imports: These are data files, not display assets. Stream-parse them with Papa Parse or csv-parse rather than loading the whole file into memory. For large files, process in a background job (Inngest, BullMQ) rather than synchronously in the API route.

Video uploads: Dramatically different requirements — files are large (100MB–10GB+), need transcoding to web-compatible formats (HLS for adaptive streaming), and require a CDN for efficient delivery. Mux is the standard managed solution; AWS MediaConvert for self-hosted. UploadThing's max file size is 2GB with large file support — usable for short videos. For anything professional-grade, use Mux.

Define your file type matrix before implementation: what types are allowed per use case, max sizes, whether processing is needed, and access control requirements (public vs signed-URL-protected).
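That matrix can live in code as a single policy object, checked at every upload entry point. A sketch with illustrative values (the use-case names, types, and limits are examples, not recommendations):

```typescript
// Per-use-case upload policy: allowed MIME types, size cap, access model.
type UploadPolicy = {
  allowedTypes: string[];
  maxBytes: number;
  access: 'public' | 'signed';
};

export const UPLOAD_POLICIES: Record<string, UploadPolicy> = {
  avatar: {
    allowedTypes: ['image/jpeg', 'image/png', 'image/webp'],
    maxBytes: 4 * 1024 * 1024,
    access: 'public',
  },
  document: {
    allowedTypes: ['application/pdf'],
    maxBytes: 16 * 1024 * 1024,
    access: 'signed',
  },
  csvImport: {
    allowedTypes: ['text/csv'],
    maxBytes: 50 * 1024 * 1024,
    access: 'signed',
  },
};

export function isUploadAllowed(
  useCase: string,
  mime: string,
  size: number
): boolean {
  const policy = UPLOAD_POLICIES[useCase];
  return !!policy && policy.allowedTypes.includes(mime) && size <= policy.maxBytes;
}
```

Centralizing the policy means the UploadThing router, the presign route, and the client validation all read from one source of truth.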

Managing User Storage Quotas

If file storage is part of your billing model, you need to track storage usage per user or per organization and enforce limits.

// lib/storage.ts
export async function getUserStorageUsed(userId: string): Promise<number> {
  const result = await prisma.file.aggregate({
    where: { userId, deletedAt: null },
    _sum: { size: true },
  });
  return result._sum.size ?? 0;  // Returns bytes
}

export async function checkStorageQuota(userId: string, fileSize: number): Promise<void> {
  const plan = await getUserPlan(userId);
  const quotaBytes = PLAN_STORAGE_LIMITS[plan] * 1024 * 1024 * 1024;  // GB to bytes
  const usedBytes = await getUserStorageUsed(userId);
  
  if (usedBytes + fileSize > quotaBytes) {
    throw new Error(`Storage quota exceeded. Used: ${formatBytes(usedBytes)}, Limit: ${formatBytes(quotaBytes)}`);
  }
}

const PLAN_STORAGE_LIMITS: Record<string, number> = {
  free: 1,      // 1 GB
  starter: 10,  // 10 GB
  pro: 100,     // 100 GB
  enterprise: 1000, // 1 TB
};

Call checkStorageQuota in the upload middleware before accepting the upload. This prevents users from discovering quota limits only after uploading, which creates a frustrating UX.
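Wired into the UploadThing router, the check runs in middleware, before any bytes move. A sketch, assuming checkStorageQuota from lib/storage.ts above; the `files` middleware argument (pending file names/sizes) is available in recent uploadthing versions — verify against yours:

```typescript
// app/api/uploadthing/core.ts (excerpt) — quota-aware middleware sketch.
import { createUploadthing, type FileRouter } from 'uploadthing/next';
import { getServerSession } from 'next-auth';
import { checkStorageQuota } from '@/lib/storage';

const f = createUploadthing();

export const quotaRouter = {
  document: f({ pdf: { maxFileSize: '16MB' } })
    .middleware(async ({ files }) => {
      const session = await getServerSession();
      if (!session) throw new Error('Unauthorized');
      // Rejecting here surfaces the quota error before the upload starts
      for (const file of files) {
        await checkStorageQuota(session.user.id, file.size);
      }
      return { userId: session.user.id };
    })
    .onUploadComplete(async ({ file }) => ({ url: file.url })),
} satisfies FileRouter;
```

For the direct-S3 path, call checkStorageQuota in the presign route after parsing `size`, before generating the upload URL.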

Display storage usage in the user's settings or billing page — a progress bar showing "Used 4.2 GB of 10 GB" is standard. Proactively email users when they hit 80% and 95% of their quota.


Comparison: UploadThing vs Direct S3

| | UploadThing | Direct S3 |
|---|---|---|
| Setup time | 1–2 hours | Half a day |
| Monthly cost | $10/mo (10GB) | ~$0.023/GB |
| Resumable uploads | Built-in | Manual |
| React components | Included | Build your own |
| Vendor lock-in | Medium | None |
| Bandwidth cost | Included | Egress fees |
| File processing | Limited | Full control |
| Access control | App layer | IAM + signed URLs |

Choose UploadThing when: You want the fastest implementation, you're fine with $10/month, and you don't need to process files (resize images, scan for viruses, extract metadata).

Choose Direct S3 when: You already have AWS infrastructure, you need to process files before storing (image resizing, document conversion), your storage costs would exceed UploadThing pricing, or you need per-file IAM-level access control.


Deleting Files and Cleanup

File deletion is often an afterthought, but it's important for both cost management and GDPR/privacy compliance. When a user deletes a file, or deletes their account, you need to:

  1. Remove the database record
  2. Delete the actual file from S3/UploadThing storage

Don't just soft-delete the database record and leave the file in storage — storage costs accumulate, and GDPR "right to erasure" requests require actual deletion.

// lib/files.ts
import { UTApi } from 'uploadthing/server';
import { prisma } from '@/lib/prisma';

const utapi = new UTApi();

export async function deleteFile(fileId: string, userId: string) {
  const file = await prisma.file.findFirst({
    where: { id: fileId, userId },  // Verify ownership
  });
  
  if (!file) throw new Error('File not found or unauthorized');
  
  // Delete from UploadThing:
  await utapi.deleteFiles([file.storageKey]);
  
  // Or delete from S3:
  // await s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: file.storageKey }));
  
  // Delete DB record:
  await prisma.file.delete({ where: { id: fileId } });
}

// Cleanup on account deletion:
export async function deleteAllUserFiles(userId: string) {
  const files = await prisma.file.findMany({ where: { userId } });
  const keys = files.map(f => f.storageKey);
  
  if (keys.length > 0) {
    await utapi.deleteFiles(keys);  // Batch delete
  }
  
  await prisma.file.deleteMany({ where: { userId } });
}

Run a weekly job that identifies orphaned files — objects in storage with no corresponding database record, which can happen when the upload succeeded but the database write failed. UploadThing provides a listing API; cross-reference it with your DB to find orphans.
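The cross-reference itself is a set difference. A sketch — the pure helper is exact; the job wiring in the comment assumes the utapi and prisma setup used elsewhere in this article:

```typescript
// Keys present in storage but absent from the database are orphans.
export function findOrphanKeys(
  storageKeys: string[],
  dbKeys: string[]
): string[] {
  const known = new Set(dbKeys);
  return storageKeys.filter((key) => !known.has(key));
}

// Weekly job sketch:
//   const { files } = await utapi.listFiles();
//   const dbFiles = await prisma.file.findMany({ select: { storageKey: true } });
//   const orphans = findOrphanKeys(
//     files.map((f) => f.key),
//     dbFiles.map((f) => f.storageKey)
//   );
//   if (orphans.length > 0) await utapi.deleteFiles(orphans);
```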

Progressive Enhancement for Upload UX

The default file upload UX (click button, select file, wait) can be significantly improved with progressive enhancement:

Drag-and-drop zones: Most users prefer dragging files over clicking and navigating. UploadThing's UploadDropzone component handles this. For custom UI, use the HTML5 dragover/drop events.

Upload progress indicators: For files over 1MB, show a progress bar. UploadThing fires progress callbacks. For direct S3 uploads, fetch does not expose upload progress — use XMLHttpRequest's upload.onprogress event, or a library like axios that supports onUploadProgress.

Immediate preview: For image uploads, show a preview immediately using URL.createObjectURL(file) before the upload completes. This gives instant feedback and makes the UX feel faster.

Error recovery: When an upload fails, preserve the file selection and offer a retry button. Don't clear the input and make the user re-select the file.

Multi-file upload: If your use case allows multiple files, batch them into a single upload operation rather than sequential uploads. UploadThing supports multi-file upload natively.
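The preview and error-recovery points above can be combined in one small component. A sketch — the `upload` prop is a placeholder for whichever upload function you use (startUpload, the presign flow, etc.):

```typescript
// components/ImageUploadPreview.tsx — instant preview + retry on failure.
'use client';
import { useState } from 'react';

export function ImageUploadPreview({
  upload,
}: {
  upload: (f: File) => Promise<void>;
}) {
  const [file, setFile] = useState<File | null>(null);
  const [preview, setPreview] = useState<string | null>(null);
  const [failed, setFailed] = useState(false);

  async function start(f: File) {
    setFile(f);
    setPreview(URL.createObjectURL(f)); // instant local preview, before upload finishes
    setFailed(false);
    try {
      await upload(f);
    } catch {
      setFailed(true); // keep `file` so the user can retry without re-selecting
    }
  }

  return (
    <div>
      <input
        type="file"
        accept="image/*"
        onChange={(e) => e.target.files?.[0] && start(e.target.files[0])}
      />
      {preview && <img src={preview} alt="Preview" width={96} />}
      {failed && file && <button onClick={() => start(file)}>Retry</button>}
    </div>
  );
}
```

Call URL.revokeObjectURL on the old preview when replacing it to avoid leaking object URLs in long-lived pages.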


For managing uploaded files in an admin panel, see best boilerplates for admin dashboards. If your SaaS needs multi-tenant file isolation (each organization's files are isolated), the multi-tenancy patterns in best boilerplates for white-label SaaS apply to storage as well. For adding file upload to a specific boilerplate, how to customize ShipFast covers integrating new features into an existing ShipFast installation.


CDN and Serving Performance

Where files are stored affects how fast they load. UploadThing serves files through their CDN automatically. For direct S3, you should put CloudFront in front of your S3 bucket rather than serving directly from S3 — the latency difference for international users is significant, and CloudFront adds caching that reduces S3 GET request costs.

For images specifically, consider using Next.js's <Image> component, which handles lazy loading, size optimization (serving WebP to supported browsers), and intrinsic sizing automatically. Point the src at your UploadThing or S3 URL, and add the domain to next.config.js under images.remotePatterns.
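The remotePatterns entry looks like this sketch — the hostnames are illustrative (utfs.io is UploadThing's file host at the time of writing; substitute your actual S3 or CloudFront domain):

```javascript
// next.config.js — allow Next.js <Image> to optimize remote files.
module.exports = {
  images: {
    remotePatterns: [
      { protocol: 'https', hostname: 'utfs.io' },            // UploadThing file host
      { protocol: 'https', hostname: '*.s3.amazonaws.com' }, // or your CloudFront domain
    ],
  },
};
```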

Private files (signed URLs) expire, which means you can't cache them at the CDN layer or in the browser for long. Generate signed URLs server-side with an expiry that matches your session length (1–8 hours). Re-fetch the URL when it expires rather than storing it in client state. For very sensitive files (medical records, financial documents), use short expiry (15–60 minutes) to limit exposure if a URL is leaked.
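Re-fetching on expiry can be a tiny cache keyed by storage key. A sketch — `fetchUrl` and `now` are injected for testability; in the app, fetchUrl would call an API route that wraps getDownloadUrl from lib/s3.ts above:

```typescript
// Cache signed URLs client-side and re-fetch only after they expire.
export function createSignedUrlCache(
  fetchUrl: (key: string) => Promise<string>,
  ttlMs: number,
  now: () => number = Date.now
) {
  const cache = new Map<string, { url: string; expiresAt: number }>();

  return async function getUrl(key: string): Promise<string> {
    const hit = cache.get(key);
    if (hit && hit.expiresAt > now()) return hit.url; // still valid
    const url = await fetchUrl(key);
    cache.set(key, { url, expiresAt: now() + ttlMs });
    return url;
  };
}
```

Set ttlMs slightly below the server-side expiry so a cached URL is never handed out moments before it dies.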


Methodology

UploadThing pricing from official pricing page (April 2026). AWS S3 pricing from AWS pricing calculator. Security recommendations based on OWASP file upload vulnerability guidelines.

Find boilerplates with file upload built-in on StarterPick.
