Best Analytics Dashboard Boilerplates 2026
TL;DR
For SaaS analytics dashboards in 2026: Tremor + Next.js for product metrics you own, PostHog for product analytics you want to move fast on, Recharts or Nivo for custom visualization needs. The database choice matters more than the chart library — PostgreSQL + TimescaleDB handles most SaaS analytics at reasonable scale.
The Real Challenge: Data, Not Charts
Analytics dashboard development looks like a UI problem. It's actually a data problem. The chart library is 20% of the work; getting the right data, aggregated correctly, in <200ms is 80%.
Common mistakes:
- Running SELECT * with client-side aggregation — works at 10k rows, breaks at 1M
- Storing raw events in the same DB as your application data
- Using GROUP BY on unindexed columns
- No caching layer — the dashboard regenerates the full aggregation on every page load
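To make the first mistake concrete, here is the aggregation work a "SELECT * then aggregate in the client" dashboard ends up doing in the browser: a per-day GROUP BY reimplemented in JavaScript. This is an illustrative sketch (the EventRow shape is hypothetical); at 10k rows it's fine, at 1M rows it ships megabytes over the wire and blocks the main thread, which is exactly the work the database should do.

```typescript
// Hypothetical event shape; in a real app these rows should come from
// an API that already aggregated them server-side.
interface EventRow {
  createdAt: string; // ISO timestamp
  eventType: string;
}

// The client-side anti-pattern: bucket every raw row by UTC day.
// O(n) over ALL events on every dashboard render.
export function countByDay(rows: EventRow[]): Map<string, number> {
  const buckets = new Map<string, number>();
  for (const row of rows) {
    const day = row.createdAt.slice(0, 10); // "2026-03-01"
    buckets.set(day, (buckets.get(day) ?? 0) + 1);
  }
  return buckets;
}

// The same result as one database query:
//   SELECT DATE_TRUNC('day', created_at) AS day, COUNT(*) FROM events GROUP BY 1;
// which returns ~90 pre-aggregated rows instead of every raw event.
```

The fix isn't cleverer client code; it's moving the GROUP BY into the database so the client only ever receives the aggregated rows.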
This guide covers both the front-end tooling and the data patterns that make analytics dashboards actually fast.
Chart Library Options
Tremor — Best for SaaS Metrics Dashboards
Price: Free | Stars: 17k+ | Stack: React, Tailwind CSS
Tremor provides pre-built dashboard components specifically designed for analytics: area charts, bar charts, line charts, donut charts, metric cards, sparklines, and progress bars — all built on Recharts and styled with Tailwind.
import { AreaChart, BarChart, Card, Metric, Text, Title, DonutChart } from "@tremor/react";
// MRR trend chart
const mrrData = [
{ date: "Jan 2026", MRR: 4200, Subscribers: 140 },
{ date: "Feb 2026", MRR: 5100, Subscribers: 170 },
{ date: "Mar 2026", MRR: 6800, Subscribers: 227 },
];
export function MRRChart() {
return (
<Card>
<Title>Monthly Recurring Revenue</Title>
<Text>Last 90 days</Text>
<AreaChart
className="h-72 mt-4"
data={mrrData}
index="date"
categories={["MRR"]}
colors={["indigo"]}
valueFormatter={(v) => `$${v.toLocaleString()}`}
showLegend={false}
showAnimation
/>
</Card>
);
}
// Metric summary card
export function MRRSummary({ current, previous }: { current: number; previous: number }) {
const growth = previous > 0 ? ((current - previous) / previous) * 100 : 0;
return (
<Card decoration="top" decorationColor="indigo">
<Text>MRR</Text>
<Metric>${current.toLocaleString()}</Metric>
<Text className={growth >= 0 ? 'text-emerald-600' : 'text-red-600'}>
{growth >= 0 ? '↑' : '↓'} {Math.abs(growth).toFixed(1)}% vs last month
</Text>
</Card>
);
}
Tremor's design system is tuned for data — good contrast ratios, appropriate color palettes for multi-series charts, and responsive by default.
Choose Tremor when: Building an internal SaaS metrics dashboard (MRR, churn, signups) where you want polished, consistent components without designing from scratch.
Recharts — Most Flexible
Price: Free | Stars: 24k+ | Stack: React, D3
Recharts is the underlying chart engine that Tremor uses. Lower-level access means more customization:
import {
AreaChart, Area, XAxis, YAxis, CartesianGrid, Tooltip,
ResponsiveContainer, ReferenceLine
} from 'recharts';
function CustomRevenueChart({ data }: { data: RevenuePoint[] }) {
return (
<ResponsiveContainer width="100%" height={300}>
<AreaChart data={data} margin={{ top: 10, right: 30, left: 0, bottom: 0 }}>
<defs>
<linearGradient id="revenueGradient" x1="0" y1="0" x2="0" y2="1">
<stop offset="5%" stopColor="#6366f1" stopOpacity={0.3} />
<stop offset="95%" stopColor="#6366f1" stopOpacity={0} />
</linearGradient>
</defs>
<CartesianGrid strokeDasharray="3 3" stroke="#f0f0f0" />
<XAxis dataKey="date" tick={{ fontSize: 12 }} />
<YAxis tickFormatter={(v) => `$${(v/1000).toFixed(0)}k`} />
<Tooltip
formatter={(value: number) => [`$${value.toLocaleString()}`, 'Revenue']}
contentStyle={{ borderRadius: '8px', border: '1px solid #e5e7eb' }}
/>
<ReferenceLine y={10000} stroke="#f59e0b" strokeDasharray="5 5" label="Target" />
<Area
type="monotone"
dataKey="revenue"
stroke="#6366f1"
fill="url(#revenueGradient)"
strokeWidth={2}
/>
</AreaChart>
</ResponsiveContainer>
);
}
Choose Recharts when: You need custom chart types, custom tooltips, reference lines, or complex interactions that Tremor doesn't expose.
PostHog — Best for Product Analytics
Price: Free tier (1M events/month), $450/month for scale | Self-host: ✅ Available
PostHog gives you event tracking, funnels, cohort analysis, session recording, feature flags, and A/B testing. It's a full analytics platform, not just charts.
// Next.js: PostHog client setup
// app/providers.tsx
'use client';
import posthog from 'posthog-js';
import { PostHogProvider } from 'posthog-js/react';
if (typeof window !== 'undefined') {
posthog.init(process.env.NEXT_PUBLIC_POSTHOG_KEY!, {
api_host: process.env.NEXT_PUBLIC_POSTHOG_HOST || 'https://app.posthog.com',
capture_pageview: false, // Manual pageview capture for Next.js
loaded: (posthog) => {
if (process.env.NODE_ENV === 'development') posthog.debug();
},
});
}
export function PHProvider({ children }: { children: React.ReactNode }) {
return <PostHogProvider client={posthog}>{children}</PostHogProvider>;
}
// Track custom events
import { usePostHog } from 'posthog-js/react';
function UpgradeButton({ plan }: { plan: string }) {
const posthog = usePostHog();
const handleUpgrade = () => {
posthog.capture('upgrade_clicked', {
from_plan: 'free',
to_plan: plan,
page: 'pricing',
});
startCheckout(plan);
};
return <Button onClick={handleUpgrade}>Upgrade to {plan}</Button>;
}
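Because capture_pageview is disabled in the init above, route changes must be captured manually. The helper below is a hypothetical sketch that builds the $current_url property PostHog expects for a manual $pageview event; in a Next.js App Router app you would call it from a useEffect that watches usePathname() and useSearchParams().

```typescript
// Hypothetical helper: assemble the $current_url property for a manual
// $pageview capture. Kept pure (no window access) so it's easy to test.
export function buildPageviewProperties(
  origin: string,   // e.g. window.location.origin
  pathname: string, // e.g. from usePathname()
  search: string    // e.g. useSearchParams().toString()
): { $current_url: string } {
  const query = search ? `?${search}` : '';
  return { $current_url: `${origin}${pathname}${query}` };
}

// Wiring it up in a client component would look roughly like:
//   const pathname = usePathname();
//   const searchParams = useSearchParams();
//   useEffect(() => {
//     posthog.capture('$pageview', buildPageviewProperties(
//       window.location.origin, pathname, searchParams.toString()));
//   }, [pathname, searchParams]);
```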
Self-hosted PostHog: Runs on Docker, uses ClickHouse for analytics storage. The self-hosted version is free but requires ~8GB RAM minimum for the ClickHouse analytics engine.
Choose PostHog when: You want product analytics (funnels, cohorts, session replay) not just metrics charts. Faster to set up than building custom analytics infrastructure.
Database Patterns for Fast Analytics
Pattern 1: PostgreSQL with Materialized Views
For most SaaS apps under 10M events, PostgreSQL with materialized views handles analytics well:
-- Raw events table
CREATE TABLE events (
id BIGSERIAL PRIMARY KEY,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
user_id UUID REFERENCES users(id),
event_type TEXT NOT NULL,
properties JSONB
);
CREATE INDEX idx_events_created_at ON events (created_at DESC);
CREATE INDEX idx_events_user_id ON events (user_id);
CREATE INDEX idx_events_type_time ON events (event_type, created_at DESC);
-- Materialized view: daily signups (refresh nightly or on-demand)
CREATE MATERIALIZED VIEW daily_signups AS
SELECT
DATE_TRUNC('day', created_at) AS day,
COUNT(*) AS signups
FROM users
GROUP BY 1
ORDER BY 1 DESC;
CREATE UNIQUE INDEX ON daily_signups (day);
-- Refresh: run this nightly via cron
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_signups;
-- Query is instant (reading pre-aggregated table)
SELECT * FROM daily_signups WHERE day > NOW() - INTERVAL '90 days';
Pattern 2: TimescaleDB for Time-Series Data
TimescaleDB is a PostgreSQL extension that partitions time-series data automatically. Neon and Supabase support it.
-- Enable TimescaleDB
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- Create hypertable (automatically partitioned by time)
CREATE TABLE page_views (
time TIMESTAMPTZ NOT NULL,
user_id UUID,
path TEXT,
referrer TEXT,
duration_ms INTEGER
);
SELECT create_hypertable('page_views', 'time', chunk_time_interval => INTERVAL '1 day');
-- Time-bucketed aggregation query (fast even on 100M rows)
SELECT
time_bucket('1 hour', time) AS hour,
path,
COUNT(*) AS views,
COUNT(DISTINCT user_id) AS unique_visitors,
AVG(duration_ms) AS avg_duration_ms
FROM page_views
WHERE time > NOW() - INTERVAL '7 days'
AND path LIKE '/blog/%'
GROUP BY 1, 2
ORDER BY 1;
TimescaleDB's time_bucket queries are 10-100x faster than an equivalent PostgreSQL GROUP BY on large datasets because data is physically co-located by time partition.
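Conceptually, time_bucket is just floor-division on timestamps. This plain TypeScript equivalent (an illustrative sketch, not TimescaleDB's implementation) makes the grouping semantics obvious:

```typescript
// Floor a timestamp to the start of its fixed-width bucket: the same
// grouping that time_bucket('1 hour', time) performs inside the database.
export function timeBucket(bucketMs: number, date: Date): Date {
  return new Date(Math.floor(date.getTime() / bucketMs) * bucketMs);
}

const HOUR_MS = 60 * 60 * 1000;
// timeBucket(HOUR_MS, new Date('2026-03-01T10:47:13Z'))
//   → 2026-03-01T10:00:00.000Z
```

The speed difference in TimescaleDB comes not from the arithmetic but from the storage layout: rows in the same bucket live in the same chunk, so a seven-day query touches seven partitions instead of scanning the whole table.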
Pattern 3: ClickHouse for Scale
For 100M+ events per day, ClickHouse is the right tool:
-- ClickHouse: columnar storage, optimized for aggregations
CREATE TABLE events (
timestamp DateTime,
user_id UInt64,
event_type LowCardinality(String),
properties String -- JSON
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(timestamp)
ORDER BY (event_type, timestamp)
SETTINGS index_granularity = 8192;
-- Billion-row query in <1 second
SELECT
toDate(timestamp) AS date,
event_type,
uniq(user_id) AS unique_users,
count() AS total_events
FROM events
WHERE timestamp BETWEEN '2026-01-01' AND '2026-03-31'
GROUP BY 1, 2
ORDER BY 1;
ClickHouse is used by PostHog, Plausible, and most high-volume analytics platforms. For a SaaS under 10M daily events, it's overkill — PostgreSQL + TimescaleDB is sufficient.
Building Core SaaS Metrics
Every SaaS dashboard needs these metrics. Here's how to compute them efficiently:
MRR and Churn
// tRPC router: SaaS metrics
export const metricsRouter = router({
mrrSummary: adminProcedure.query(async () => {
// Current MRR: sum of active subscription prices
const activeSubscriptions = await db.subscription.findMany({
where: { status: 'active' },
include: { plan: { select: { monthlyPrice: true } } },
});
const currentMRR = activeSubscriptions.reduce(
(sum, sub) => sum + sub.plan.monthlyPrice,
0
);
// Previous MRR: 30 days ago snapshot (from ledger/snapshot table)
const previousMRR = await db.mrrSnapshot
.findFirst({
where: { date: { lte: subDays(new Date(), 30) } },
orderBy: { date: 'desc' },
})
.then((r) => r?.mrr ?? 0);
return {
current: currentMRR,
previous: previousMRR,
change: currentMRR - previousMRR,
changePercent: previousMRR > 0 ? ((currentMRR - previousMRR) / previousMRR) * 100 : 0,
};
}),
churnRate: adminProcedure.query(async () => {
const thirtyDaysAgo = subDays(new Date(), 30);
const churned = await db.subscription.count({
where: {
status: 'cancelled',
updatedAt: { gte: thirtyDaysAgo },
},
});
const activeAtStart = await db.subscription.count({
where: {
createdAt: { lt: thirtyDaysAgo },
OR: [
{ status: 'active' },
{ status: 'cancelled', updatedAt: { gte: thirtyDaysAgo } },
],
},
});
return {
churned,
activeAtStart,
churnRate: activeAtStart > 0 ? (churned / activeAtStart) * 100 : 0,
};
}),
signupTimeSeries: adminProcedure
.input(z.object({ days: z.number().default(90) }))
.query(async ({ input }) => {
const result = await db.$queryRaw<{ date: Date; count: bigint }[]>`
SELECT
DATE_TRUNC('day', created_at)::date AS date,
COUNT(*)::bigint AS count
FROM users
-- Note: interpolating inside a quoted string (INTERVAL '${"$"}{...} days') breaks
-- Prisma's parameter binding; multiply the bound number by a fixed interval instead.
WHERE created_at >= NOW() - ${input.days} * INTERVAL '1 day'
GROUP BY 1
ORDER BY 1
`;
return result.map((r) => ({
date: r.date.toISOString().split('T')[0],
signups: Number(r.count),
}));
}),
});
Caching Dashboard Queries
Analytics queries are expensive. Cache aggressively:
import { unstable_cache } from 'next/cache';
// Cache the MRR summary for 5 minutes
const getCachedMRR = unstable_cache(
async () => computeMRR(),
['mrr-summary'],
{ revalidate: 300 } // 5 minutes
);
// For long-running aggregations, use a background job + Redis cache
// Inngest: recompute MRR nightly and cache result
export const recomputeMRR = inngest.createFunction(
{ id: 'recompute-mrr', name: 'Nightly MRR Recomputation' },
{ cron: '0 2 * * *' }, // 2am daily
async () => {
const mrr = await computeMRR();
await redis.set('mrr:latest', JSON.stringify(mrr), 'EX', 86400);
await db.mrrSnapshot.create({ data: { mrr, date: new Date() } });
}
);
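The cache-then-recompute pattern above boils down to a read-through cache: serve the cached value while it's fresh, recompute only once the TTL lapses. This framework-free sketch (names are illustrative, not Next.js's actual implementation) shows the per-key behavior:

```typescript
// Minimal read-through cache with a TTL. Illustrative sketch of the
// revalidate behavior; unstable_cache adds cross-request persistence.
type Entry<T> = { value: T; expiresAt: number };

export function createTtlCache<T>(
  compute: () => Promise<T>,
  ttlMs: number,
  now: () => number = Date.now // injectable clock, handy for testing
) {
  let entry: Entry<T> | null = null;
  return async function get(): Promise<T> {
    if (entry && entry.expiresAt > now()) return entry.value; // fresh hit
    const value = await compute(); // miss or stale: recompute once
    entry = { value, expiresAt: now() + ttlMs };
    return value;
  };
}
```

Wrapping computeMRR with createTtlCache(computeMRR, 300_000) gives the same five-minute freshness window as the unstable_cache call, which is usually enough to turn an expensive aggregation into a cheap dashboard read.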
Complete Dashboard Stack Recommendation
| Component | Recommended Tool | Alternative |
|---|---|---|
| Chart library | Tremor | Recharts (more flexible) |
| Time-series DB | PostgreSQL + TimescaleDB | ClickHouse (>10M events/day) |
| Product analytics | PostHog | Plausible (simpler), Mixpanel |
| Real-time updates | Supabase Realtime / Server-Sent Events | Pusher |
| Caching | Next.js unstable_cache + Redis (Upstash) | React Query |
| Background jobs | Inngest | Trigger.dev |
For more on building the SaaS metrics layer, see our guide on analytics in SaaS boilerplates and the best SaaS boilerplates with analytics built in.
Choosing Between Building and Buying Analytics
The analytics dashboard question is really a build-vs-buy question with an important nuance: the answer depends on what you're measuring and who the audience is.
Internal metrics dashboards (MRR, churn, signups, support tickets) are safe to build with Tremor + PostgreSQL. These are low-volume queries on your own data, the user base is small (just your team), and the requirements don't change often. A one-day investment builds something that serves years of internal reporting needs.
Product analytics (user funnels, feature adoption, cohort retention) are better bought via PostHog, Mixpanel, or Amplitude. The engineering cost to build even a basic funnel analysis from scratch is weeks, not days. These tools have invested years into query optimization, flexible grouping, and exploratory UI. Unless analytics is your product, you'll always be behind. PostHog's free tier (1M events/month) covers most early-stage SaaS products for $0.
Customer-facing analytics (showing users their own data and usage metrics in your product) is almost always custom built. No third-party analytics tool gives you the right abstraction for showing users aggregated metrics about their own activity within your product. This is where Tremor + your database is the correct answer. The challenge here is data modeling — ensuring queries remain fast as individual users accumulate months of data.
Performance Benchmarks at Different Scales
Understanding when each database approach starts to fail is critical for planning. These are approximate timings for a single-node PostgreSQL instance (similar to what Neon, Supabase, or Railway provide):
Standard PostgreSQL handles up to approximately 100 million events with appropriate indexing before query times become noticeable for dashboard use. At that point, unindexed GROUP BY queries on large event tables start taking seconds rather than milliseconds. Materialized views are the first mitigation.
PostgreSQL with materialized views can extend usable range to 1-3 billion events, depending on refresh frequency and the complexity of aggregations. The tradeoff is data freshness — materialized views are refreshed on a schedule, so dashboards show data as of the last refresh (hourly, daily, or on-demand), not real-time.
TimescaleDB handles 10+ billion time-series data points on standard hardware through automatic partitioning. Its continuous aggregates (automatically refreshing materialized views with incremental computation) enable near-real-time dashboards at much larger scales than plain PostgreSQL can support. This is the right choice for analytics-heavy SaaS products with millions of users.
ClickHouse enters the picture at 100M+ events per day for most teams. It's operationally more complex to run (separate from your main PostgreSQL database, requires more infrastructure knowledge), but the query performance at this scale is genuinely remarkable — aggregating billions of rows in milliseconds.
Dashboard Architecture Patterns
The most common analytics dashboard architectures in SaaS boilerplates follow one of three patterns:
Pattern 1: Direct database queries — API routes query the application database on each page load. Simple to implement, works at small scale, and gives real-time data. The risk is query cost: a naive MRR calculation that joins subscriptions and plans against the full users table gets expensive as user count grows. This pattern is fine until you have more than ~10,000 users.
Pattern 2: Cached aggregations — Background jobs compute expensive metrics nightly (or on a schedule) and cache results in Redis or a separate reporting table. Dashboard routes read from the cache, not the application database. This pattern separates operational and analytical workloads, prevents dashboard queries from impacting application performance, and handles growth gracefully. The tradeoff is staleness — metrics are as fresh as the last background job run.
Pattern 3: Event sourcing + read models — Every significant action (user signup, plan upgrade, cancellation, feature use) emits an event to an append-only event store. Read models for specific dashboards are materialized from events. This is the most flexible architecture for evolving analytics requirements but has the highest upfront complexity. For most SaaS products, this is premature optimization until you're past $500K ARR.
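To make Pattern 3 concrete, here is a toy fold of subscription events into an MRR read model. The event shapes are hypothetical; a real event store would be an append-only table, and the read model would be persisted rather than recomputed per request.

```typescript
// Hypothetical subscription lifecycle events in an append-only log.
type SubscriptionEvent =
  | { type: 'subscribed'; subscriptionId: string; monthlyPrice: number }
  | { type: 'cancelled'; subscriptionId: string };

// Read model: current MRR, materialized by replaying the event log.
export function projectMRR(events: SubscriptionEvent[]): number {
  const active = new Map<string, number>(); // subscriptionId -> price
  for (const e of events) {
    if (e.type === 'subscribed') active.set(e.subscriptionId, e.monthlyPrice);
    else active.delete(e.subscriptionId); // cancellation removes the subscription
  }
  let mrr = 0;
  for (const price of active.values()) mrr += price;
  return mrr;
}
```

The flexibility comes from the fact that new read models (churn by cohort, expansion revenue) can be materialized from the same log later, without having anticipated them when the events were recorded.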
What SaaS Boilerplates Include by Default
Most SaaS boilerplates include either no analytics infrastructure or a basic admin panel with hardcoded metric queries. Here's what the major options provide:
Boilerplates that include a meaningful analytics foundation: Open SaaS by Wasp ships with PostHog integration configured out of the box — event tracking, feature flags, and the PostHog admin dashboard are pre-wired. Makerkit includes a basic metrics dashboard in the admin panel using Chart.js, covering signups and subscriptions, but not custom event tracking. SaaSrock includes the most comprehensive analytics of any boilerplate — a full analytics module with custom reports, funnel tracking, and a dashboard builder, though it adds significant complexity.
Boilerplates that leave analytics to you: ShipFast, T3 Stack, Supastarter, and Epic Stack provide hooks for adding analytics (Plausible or PostHog are commonly recommended in their communities) but don't ship any analytics dashboard code. You configure the provider and add events manually.
For most SaaS builders, the pattern is: add PostHog or Plausible for product analytics (2-hour setup), build a custom internal metrics dashboard with Tremor for operational KPIs (1-2 day investment), and skip building customer-facing analytics until users specifically request it.
Common Gotchas When Building Analytics Dashboards
Timezone handling is the most common source of bugs in analytics dashboards. GROUP BY DATE_TRUNC('day', created_at) groups by UTC day, which may not match your users' local time. For a B2B product with US customers, this means metrics can show yesterday's signups split across two UTC days. Store events with timestamps, convert to user's timezone at query time for display.
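One way to bucket by a user's local day instead of the UTC day is the built-in Intl API (a sketch; production code often reaches for a library like date-fns-tz, but no dependency is needed for this case):

```typescript
// Convert a UTC timestamp to the calendar day it falls on in a given
// IANA timezone. The 'en-CA' locale formats dates as YYYY-MM-DD.
export function localDay(utc: Date, timeZone: string): string {
  return new Intl.DateTimeFormat('en-CA', {
    timeZone,
    year: 'numeric',
    month: '2-digit',
    day: '2-digit',
  }).format(utc);
}

// 03:00 UTC on Jan 1 is still the previous evening in New York, so it
// belongs to the Dec 31 bucket from that user's point of view.
```

Grouping on localDay(event.createdAt, user.timeZone) keeps "yesterday's signups" aligned with what the user actually experienced, at the cost of one formatting call per row, so do it on already-aggregated or per-user data, not raw event scans.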
Double-counting happens when the same event appears multiple times due to retry logic or optimistic UI updates. For metrics that matter (MRR, churn), use idempotency keys and upsert patterns rather than insert.
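A minimal in-memory sketch of idempotent event recording: each event carries a client-generated idempotency key, and redelivery of the same key is dropped. In production the equivalent is an upsert against a unique constraint, e.g. INSERT ... ON CONFLICT (idempotency_key) DO NOTHING in PostgreSQL.

```typescript
// Illustrative idempotent event sink; names are hypothetical.
export function createEventSink() {
  const seen = new Set<string>();
  const events: { key: string; type: string }[] = [];
  return {
    // Returns true if the event was recorded, false if it was a duplicate.
    record(key: string, type: string): boolean {
      if (seen.has(key)) return false; // retry or double-fire: ignore
      seen.add(key);
      events.push({ key, type });
      return true;
    },
    count: () => events.length,
  };
}
```

The key design choice is that the caller generates the key (one per logical action, reused across retries), so network retries and optimistic UI double-fires collapse into a single recorded event.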
Sampling bias in date ranges affects cohort analysis. In a monthly retention chart, the most recent cohorts aren't comparable to older ones: their later retention periods simply haven't happened yet, so those cells are either empty or reflect an incomplete observation window. Compare cohorts only at equal ages, and mask periods a cohort hasn't yet reached.
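A small helper that masks immature retention cells, reporting a cohort's month-N retention only once the cohort is at least N months old (an illustrative sketch with hypothetical inputs):

```typescript
// retained[i] = users from the cohort still active in month i
// (retained[0] is the cohort size). cohortMonth and currentMonth are
// month indices in the same counting scheme (e.g. months since launch).
export function retentionRow(
  cohortMonth: number,
  currentMonth: number,
  retained: number[]
): (number | null)[] {
  const size = retained[0];
  return retained.map((count, i) =>
    // Only months the cohort has actually lived through are comparable;
    // later cells are masked instead of shown as misleadingly incomplete.
    cohortMonth + i <= currentMonth && size > 0 ? count / size : null
  );
}
```

Rendering null cells as blank (rather than zero) in the retention grid is what keeps the newest rows from dragging the chart down and triggering false churn alarms.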
Cached metric staleness needs to be communicated to users. If your MRR metric refreshes nightly, show "as of [timestamp]" next to the metric. Stale metrics that don't communicate their staleness erode trust in your dashboard.
When to Reach for Tinybird
For SaaS products where real-time analytics is a core product feature — not just an internal admin concern, but something you show to your users directly — Tinybird provides a ClickHouse-backed API layer with a developer-friendly SQL interface designed for exactly this use case. You push events to Tinybird via their ingest API, write SQL pipes in their web dashboard to define aggregations, and consume the pre-computed results via generated API endpoints in your Next.js application. No ClickHouse infrastructure to manage, no cluster to operate. The pricing is usage-based and starts free for development and low-volume production.
Tinybird is used in production by teams like Vercel (for project analytics), Clerk (for request monitoring), and Midday (the open-source financial SaaS). It's the right tool when you specifically need real-time aggregations over large volumes of events without building ClickHouse infrastructure yourself.
The analytics infrastructure decision should be proportional to your current scale and observed performance constraints. At 0-10K users: use PostHog's free tier for product analytics and Tremor plus direct database queries for operational metrics. At 10K-100K users: add materialized views and a Redis caching layer, and consider migrating to TimescaleDB if usage tracking is accumulating significant event volume. At 100K+ users: evaluate whether Tinybird or a dedicated analytics database is needed for specific high-volume query paths, while keeping standard application metrics on PostgreSQL. Introduce infrastructure complexity in response to concrete, observed performance problems, not anticipated scale that may never arrive.
PostHog's feature set expanded considerably through 2025: surveys, web analytics, A/B testing, and session replay are now included on the free tier, making it a genuinely compelling all-in-one analytics foundation for early-stage SaaS products that would previously have required multiple vendors.
The boilerplate and tool choices covered here represent the most actively maintained options in their category as of 2026. Evaluate each against your specific requirements: team expertise, deployment infrastructure, budget, and the features your product requires on day one versus those you can add incrementally. The best starting point is the one that lets your team ship the first version of your product fastest, with the least architectural debt.
See our PostHog vs Plausible vs Mixpanel comparison for the product analytics tool decision.
Browse SaaS boilerplates with dashboards pre-built to find starters that include admin analytics.