Shipped.club Review 2026: AI SaaS Boilerplate
TL;DR
Shipped.club is a well-structured Next.js SaaS boilerplate with solid AI integration patterns. It includes Stripe, auth, and AI features (OpenAI integration, streaming) pre-built. At ~$149, it's positioned between budget and premium boilerplates. Best for founders building AI SaaS products who want a foundation with AI patterns already configured.
What You Get
Price: ~$149 (check shipped.club for current pricing)
Core features:
- Next.js 14 (App Router) + TypeScript
- Auth: NextAuth (email + OAuth)
- Payments: Stripe subscriptions + usage-based billing
- AI: OpenAI integration + streaming responses
- Email: Resend + React Email
- Database: Prisma + PostgreSQL
- UI: shadcn/ui + Tailwind
- Blog: MDX
- Admin panel: Basic
The AI Integration
Shipped.club's differentiator: pre-built AI patterns for SaaS products.
```typescript
// lib/ai.ts — OpenAI with streaming
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// deductCredits, estimateTokens, and logAIUsage are the boilerplate's
// credit-management helpers (covered in the billing section)
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Streaming AI response
export async function streamAIResponse(
  prompt: string,
  userId: string,
  options?: { maxTokens?: number; temperature?: number }
) {
  // Check and deduct credits before running
  await deductCredits(userId, estimateTokens(prompt));

  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    max_tokens: options?.maxTokens ?? 500,
    temperature: options?.temperature ?? 0.7,
    stream: true,
  });

  return OpenAIStream(response, {
    onCompletion: async (completion) => {
      // Log usage for billing reconciliation
      await logAIUsage(userId, prompt, completion);
    },
  });
}

// API route — streaming response to client
export async function POST(req: Request) {
  const { userId } = await auth(); // the boilerplate's auth helper
  const { prompt } = await req.json();
  const stream = await streamAIResponse(prompt, userId);
  return new StreamingTextResponse(stream);
}
```
```tsx
// Client — useChat hook for streaming UI
'use client';
import { useChat } from 'ai/react';

export function AIChatWidget() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
  });

  return (
    <div className="flex flex-col h-96">
      <div className="flex-1 overflow-auto p-4">
        {messages.map((msg) => (
          <div key={msg.id} className={`mb-4 ${msg.role === 'user' ? 'text-right' : 'text-left'}`}>
            <span className="inline-block bg-card rounded-lg px-3 py-2 text-sm">
              {msg.content}
            </span>
          </div>
        ))}
        {isLoading && <div className="text-muted-foreground text-sm">Thinking...</div>}
      </div>
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          className="w-full border rounded-lg px-3 py-2"
        />
      </form>
    </div>
  );
}
```
Usage-Based Billing
Shipped.club includes credit-based AI billing:
```typescript
// models/credits.ts — credit management
export async function getUserCredits(userId: string): Promise<number> {
  const user = await prisma.user.findUnique({
    where: { id: userId },
    select: { aiCredits: true },
  });
  return user?.aiCredits ?? 0;
}

export async function deductCredits(userId: string, amount: number): Promise<void> {
  // Atomic check-and-decrement: putting the balance check in the WHERE clause
  // avoids the race where two concurrent requests both pass a separate read
  // check and then both decrement
  const result = await prisma.user.updateMany({
    where: { id: userId, aiCredits: { gte: amount } },
    data: { aiCredits: { decrement: amount } },
  });
  if (result.count === 0) {
    throw new Error('Insufficient credits. Please upgrade your plan.');
  }
}

// Stripe Metered Billing for credits
export async function recordCreditUsage(
  stripeSubscriptionItemId: string,
  quantity: number
) {
  await stripe.subscriptionItems.createUsageRecord(stripeSubscriptionItemId, {
    quantity,
    timestamp: 'now',
    action: 'increment',
  });
}
```
What's Included vs Missing
Included:
- Streaming AI responses
- Credit-based billing with Stripe
- AI usage logging and analytics
Not included:
- Vector database integration (Pinecone, Weaviate)
- RAG (retrieval-augmented generation) setup
- Fine-tuning infrastructure
- Multi-model support (Claude, Gemini)
For AI SaaS requiring RAG or vector search, you'll need to add those integrations.
Comparison with ShipFast + AI SDK
ShipFast doesn't include AI out of the box, but adding the Vercel AI SDK takes < 1 day. The question is whether Shipped.club's pre-built AI patterns justify the cost vs adding AI to ShipFast manually.
For pure AI SaaS where the AI integration is the core product: Shipped.club saves time. For SaaS with AI as a secondary feature: ShipFast + Vercel AI SDK is comparable.
Who Should Buy Shipped.club
Good fit:
- Founders building AI-first SaaS products
- Products where streaming AI responses are core UX
- Teams needing usage-based AI billing from day one
- Developers who want AI patterns pre-architected
Bad fit:
- Traditional SaaS without AI features (ShipFast is more polished)
- Products needing vector search / RAG (more setup needed)
- Multi-tenant B2B SaaS (Supastarter is better)
Final Verdict
Rating: 3.5/5
Shipped.club is a solid boilerplate for AI-focused SaaS products. The pre-built streaming, credit billing, and AI usage logging are genuine time-savers for AI products. Non-AI SaaS founders are better served by ShipFast or Supastarter.
Getting Started
```bash
# After purchase — clone and configure
git clone https://your-shipped-club-repo.git my-app
cd my-app && npm install
cp .env.example .env.local

# Required:
# OPENAI_API_KEY=sk-...
# NEXTAUTH_SECRET=...
# NEXTAUTH_URL=http://localhost:3000
# DATABASE_URL=postgresql://...
# STRIPE_SECRET_KEY=sk_test_...
# STRIPE_WEBHOOK_SECRET=whsec_...
# RESEND_API_KEY=re_...

npx prisma db push
npm run dev # → localhost:3000
```
Initial setup takes 45-90 minutes — longer than ShipFast because the AI billing configuration requires creating both subscription plans and metered billing items in Stripe.
Adding Multi-Model Support
Shipped.club defaults to OpenAI. Adding Claude or Gemini requires swapping the client:
```typescript
// lib/ai-multi.ts — multi-provider support
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { OpenAIStream } from 'ai'; // adapts the OpenAI stream for the response

type Provider = 'openai' | 'claude';

export async function streamResponse(
  prompt: string,
  provider: Provider = 'openai'
) {
  if (provider === 'claude') {
    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from env
    const stream = client.messages.stream({
      model: 'claude-opus-4-6',
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    });
    return stream.toReadableStream();
  }

  // Default: OpenAI
  const client = new OpenAI();
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  });
  return OpenAIStream(response);
}
```
The credit billing system works across providers — you're billing for credits consumed, not per-provider API calls.
AI SaaS Boilerplate Comparison
| Feature | Shipped.club | ShipFast + AI SDK | Best AI SaaS Starter |
|---|---|---|---|
| Price | ~$149 | $299 + DIY | ~$99 |
| Streaming | ✅ Built-in | Add AI SDK (~1 day) | ✅ |
| Credit billing | ✅ | Build yourself | ✅ |
| Usage logging | ✅ | Build yourself | Partial |
| Vector/RAG | ❌ | ❌ | ❌ |
| Multi-model | ❌ (add) | Via AI SDK | ❌ |
| Auth | NextAuth | Clerk/NextAuth | Varies |
For pure AI SaaS, Shipped.club's pre-built streaming and credit billing justify the cost compared to starting from scratch.
When to Choose Shipped.club vs ShipFast
The decision comes down to how central AI is to your product:
Choose Shipped.club if AI is the product: Your SaaS is primarily an AI feature — a writing assistant, code reviewer, document analyzer, or similar. The streaming setup, credit system, and usage logging are the core infrastructure, not afterthoughts.
Choose ShipFast if AI is a secondary feature: Your SaaS is a project management tool, CRM, or analytics dashboard that happens to have an AI summary or assistant feature. Adding the Vercel AI SDK to ShipFast takes less than a day and gives you the streaming foundation without paying twice.
The gap: Neither includes vector search (Pinecone, Weaviate) or RAG setup. If your AI product requires retrieval-augmented generation — indexing your own content for the LLM to reference — you're building that layer yourself regardless of which boilerplate you start from.
Key Takeaways
- Shipped.club's pre-built streaming, credit billing, and usage logging save 1-2 days of setup for AI-first SaaS products
- The credit-based Stripe billing pattern (deduct credits, record usage, reconcile with metered billing) is correctly implemented and can be adapted for any AI provider
- Multi-model support (Claude, Gemini) requires adding those clients manually — the pattern is compatible but not pre-wired
- For SaaS where AI is a secondary feature, ShipFast + Vercel AI SDK is more cost-effective
- Neither Shipped.club nor ShipFast includes vector database integration — RAG-based products require additional setup regardless
The Right AI Pricing Model
Shipped.club's credit-based billing is the standard AI SaaS monetization pattern for a reason: it decouples your revenue from your API costs. Token consumption varies by user and prompt — flat subscriptions either over-charge light users or under-charge heavy users. Credits convert API cost variability into predictable revenue.
The implementation detail to get right: bill credits at purchase, not at use. When users buy 1,000 credits, they pre-pay for future usage. This means you collect revenue upfront and hold a liability (the unused credits), rather than billing retroactively. Shipped.club's credit system follows this model: it checks and deducts the balance before running the request, then logs usage on completion for reconciliation.
For AI products monetizing via subscriptions (monthly access, not per-use credits), Shipped.club's metered billing integration is the right foundation — Stripe Metered Billing tracks usage and bills monthly, with Shipped.club providing the usage recording hooks.
The hybrid model — subscription base with overage credits — is increasingly common in AI SaaS: users pay $29/month for 500 credits, then buy additional credit packs if they need more. Shipped.club's credit system supports this by separating the subscription from the credit balance. Monthly credits are replenished on billing cycle reset; purchased credits accumulate without expiry. This distinction matters for customer trust and for accurate revenue recognition.
Prompt Engineering and Template Management
Shipped.club doesn't include a prompt management system, but for AI SaaS products with multiple feature types (document summarization, code review, data extraction), managing prompts as versioned configuration is worth setting up from the start.
The simplest approach is a prompts.ts file that exports prompt templates as typed constants. When a prompt needs updating (better output quality, reduced token usage, new model compatibility), you update the constant and the change propagates to all usages. Storing prompts in a database is overkill at the start — add that complexity when you have a non-technical team member who needs to edit prompts without code changes.
For products where prompt quality directly affects user outcomes — writing assistants, code reviewers, document analyzers — consider tracking prompt versions alongside your git history. When a prompt change degrades output quality, git history lets you roll back to the previous version without guessing which change caused the regression.
Setting Up Rate Limits and Abuse Prevention
AI endpoints are expensive to abuse. A user who discovers a way to bypass your credit system and make unlimited API calls can cost you hundreds of dollars in a single night. Shipped.club's credit check provides the first layer of protection, but you also need request-level rate limiting to prevent burst abuse.
Upstash Redis rate limiting integrates with Next.js App Router in under an hour and costs cents per million requests:
```typescript
// lib/rate-limit.ts — per-user rate limiting for AI routes
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(20, '1 m'), // 20 requests/minute per user
});

// In your AI route handler
export async function POST(req: Request) {
  const { userId } = await auth();

  const { success } = await ratelimit.limit(userId);
  if (!success) {
    return Response.json({ error: 'Rate limit exceeded' }, { status: 429 });
  }

  // Check credits, then proceed with AI generation
  const hasCredits = await checkUserCredits(userId);
  if (!hasCredits) {
    return Response.json({ error: 'Insufficient credits' }, { status: 402 });
  }
  // ...
}
```
The credit system handles authorization (can this user make AI requests?). Rate limiting handles abuse (is this user making too many requests too quickly?). Both checks are necessary for production AI SaaS.
Extending Shipped.club's LLM Support
Shipped.club ships with OpenAI integration. Adding Claude (Anthropic) or Gemini (Google) follows the same streaming pattern with a different SDK:
```typescript
// lib/ai-providers.ts — multi-provider support
import Anthropic from '@anthropic-ai/sdk';
import { GoogleGenerativeAI } from '@google/generative-ai';

type Message = { role: 'user' | 'assistant'; content: string };

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const gemini = new GoogleGenerativeAI(process.env.GOOGLE_AI_KEY!);

export async function streamWithProvider(
  provider: 'openai' | 'claude' | 'gemini',
  messages: Message[]
): Promise<ReadableStream> {
  if (provider === 'claude') {
    const stream = anthropic.messages.stream({
      model: 'claude-sonnet-4-6',
      messages,
      max_tokens: 2000,
    });
    return stream.toReadableStream();
  }
  // similar for gemini...
  throw new Error(`Provider ${provider} not yet wired up`);
}
```
The credit cost per request can vary by model — charging more credits for Claude Opus than Haiku reflects actual API costs and lets you offer model choice as a premium feature. Modify Shipped.club's credit deduction to accept a creditCost parameter based on the model selected.
The multi-model architecture also provides a fallback path. When OpenAI's API experiences degraded performance or rate limiting, routing to Anthropic or Gemini preserves service availability. For AI SaaS where uptime directly affects user trust, a fallback provider is a reliability feature most boilerplates don't account for; building it into Shipped.club's provider abstraction from the start costs an afternoon and pays dividends when the primary provider has an incident. As the model landscape fragmented through 2025 and users developed preferences for specific models, multi-provider support has also shifted from a reliability optimization to a competitive product feature for developer-facing AI tools.
Compare Shipped.club with other AI SaaS boilerplates in our best AI/LLM boilerplates guide.
See our guide to adding AI features to SaaS boilerplates for integration patterns across different starters.
Browse best open-source SaaS boilerplates for the full comparison across all product types.
Check out this boilerplate
View Shipped.club on StarterPick →