Every production Next.js app eventually needs to run work outside the normal request-response cycle. Sending digest emails, syncing data from a third-party API, cleaning up expired sessions, processing uploaded files — the list goes on. You need a reliable way to schedule and execute this stuff in the background.
Here's the thing, though: Next.js runs on serverless infrastructure by default, and there's no long-lived process to host traditional cron jobs. That changes everything about how you approach scheduling.
This guide covers every practical approach to scheduling and running background work in the App Router — from the built-in after() API to Vercel Cron Jobs, Upstash QStash, Trigger.dev, and self-hosted solutions with node-cron.
Understanding the Serverless Constraint
Before diving into solutions, it's worth understanding why background tasks are fundamentally different in Next.js compared to a traditional Express or Rails server.
In a serverful environment, your Node.js process runs 24/7. You can attach a scheduler like node-cron to the event loop, and it'll fire callbacks at the specified times because the process never stops. Memory is preserved, timers keep ticking, and background tasks run reliably.
Serverless is a completely different story.
In a serverless environment (Vercel, Netlify, AWS Lambda), the process starts when a request arrives, handles it, and then gets destroyed. If you register a cron job with node-cron inside a serverless function, it'll execute once during that request and then vanish — no error, no warning, just silence. Honestly, this is the single most common mistake I see developers make when trying to add scheduled tasks to a Next.js application.
The core principle is simple: in serverless, cron jobs must be triggered from outside your app. In serverful, they can run inside your Node.js process.
The after() API: Built-In Background Work
Starting with Next.js 15.1 (stabilized from the earlier unstable_after experimental API), the framework provides a native way to run code after the response has been sent to the client. This isn't a replacement for scheduled cron jobs, but it's the perfect tool for fire-and-forget side effects like logging, analytics, cache warming, and lightweight post-response processing.
How after() Works
Import after from next/server and call it with a callback. The callback executes after the response finishes streaming, without blocking the user-facing output:
```typescript
// app/api/orders/route.ts
import { after } from 'next/server';
import { logOrderEvent } from '@/lib/analytics';
import { sendOrderConfirmation } from '@/lib/email';
import { processOrder } from '@/lib/orders'; // your order-processing helper

export async function POST(request: Request) {
  const order = await request.json();

  // Process the order synchronously
  const result = await processOrder(order);

  // Schedule background work after the response is sent
  after(async () => {
    await logOrderEvent(order.id, 'created');
    await sendOrderConfirmation(order.email, order.id);
  });

  return Response.json({ orderId: result.id, status: 'confirmed' });
}
```
Where You Can Use after()
The after() API works in Server Components, Server Actions, Route Handlers, Middleware, and the generateMetadata function. There's an important caveat though: in Server Components, you can't call cookies() or headers() inside the after callback. Read those values before calling after and pass them in via closure:
```typescript
// app/dashboard/page.tsx
import { after } from 'next/server';
import { cookies } from 'next/headers';
import { trackPageView } from '@/lib/analytics';

export default async function DashboardPage() {
  const sessionId = (await cookies()).get('session-id')?.value || 'anonymous';

  after(() => {
    // sessionId was read above, outside the callback
    trackPageView('/dashboard', sessionId);
  });

  return <h1>Dashboard</h1>;
}
```
Platform Support for after()
The `after()` callback runs within your route's default or configured maximum duration. On Vercel, it uses the `waitUntil` primitive to extend the serverless function's lifetime beyond the response. It works on Node.js servers, Docker containers, and Vercel; static exports don't support it. When self-hosting, you'll need to provide your own `waitUntil` implementation.
When to Use after() vs. a Cron Job
Use after() when the work is triggered by a user request and needs to happen immediately (but shouldn't block the response). Use a cron job when the work needs to happen on a schedule, independently of any user request.
For example, sending an order confirmation email? Perfect after() use case. Sending a weekly digest email to all users? That's a cron job.
Vercel Cron Jobs: The Serverless Standard
If your Next.js app is deployed on Vercel, built-in cron jobs are the simplest way to run scheduled tasks. Vercel triggers your API route on a schedule using standard cron syntax, and the route handler executes as a normal serverless function. No extra services, no additional billing — just configure and go.
Step 1: Create the Route Handler
Create an API route that performs your scheduled work:
```typescript
// app/api/cron/cleanup-sessions/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { lt } from 'drizzle-orm';
import { db } from '@/lib/db';
import { sessions } from '@/lib/db/schema'; // your sessions table definition

export async function GET(request: NextRequest) {
  // Verify the request is from Vercel Cron
  const authHeader = request.headers.get('authorization');
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Perform the scheduled work
  const result = await db
    .delete(sessions)
    .where(lt(sessions.expiresAt, new Date()));

  return NextResponse.json({
    success: true,
    deletedSessions: result.rowsAffected,
    timestamp: new Date().toISOString(),
  });
}
```
Step 2: Configure the Schedule
Add a vercel.json file to your project root with the cron configuration:
```json
{
  "crons": [
    {
      "path": "/api/cron/cleanup-sessions",
      "schedule": "0 3 * * *"
    },
    {
      "path": "/api/cron/send-digest",
      "schedule": "0 9 * * 1"
    }
  ]
}
```
The first job runs daily at 3:00 AM UTC to clean up expired sessions. The second runs every Monday at 9:00 AM UTC to send a weekly digest.
Cron Expression Quick Reference
```text
# ┌───────────── minute (0-59)
# │ ┌─────────── hour (0-23)
# │ │ ┌───────── day of month (1-31)
# │ │ │ ┌─────── month (1-12)
# │ │ │ │ ┌───── day of week (0-6, Sunday=0)
# * * * * *

# Examples:
# */15 * * * *    Every 15 minutes
# 0 * * * *       Every hour
# 0 0 * * *       Daily at midnight
# 0 9 * * 1       Every Monday at 9 AM
# 0 0 1 * *       First day of every month
```
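A malformed expression usually fails silently: the job simply never fires. It can be worth sanity-checking expressions before they ship. Below is a minimal sketch of a five-field validator; the helper name, and its simplified handling of steps, lists, and ranges, are my own assumptions rather than a full cron parser:

```typescript
// Simplified 5-field cron validator (a sketch, not a complete parser):
// checks the field count and that plain numbers fall in each field's range.
const FIELD_RANGES: Array<[number, number]> = [
  [0, 59], // minute
  [0, 23], // hour
  [1, 31], // day of month
  [1, 12], // month
  [0, 6],  // day of week
];

export function isPlausibleCron(expr: string): boolean {
  const fields = expr.trim().split(/\s+/);
  if (fields.length !== 5) return false;

  return fields.every((field, i) => {
    const [min, max] = FIELD_RANGES[i];
    // Accept each comma-separated part: "*", "*/n", "a", or "a-b"
    return field.split(',').every((part) => {
      if (part === '*') return true;
      const step = part.match(/^\*\/(\d+)$/);
      if (step) return Number(step[1]) >= 1;
      const range = part.match(/^(\d+)(?:-(\d+))?$/);
      if (!range) return false;
      const lo = Number(range[1]);
      const hi = range[2] !== undefined ? Number(range[2]) : lo;
      return lo >= min && hi <= max && lo <= hi;
    });
  });
}
```

Run it over your `vercel.json` entries in a unit test and a typo like `0 25 * * *` gets caught before it silently never fires in production.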
Step 3: Secure the Endpoint
Always protect cron routes from unauthorized access. Generate a secure token and add it as an environment variable named CRON_SECRET in your Vercel project settings:
```bash
# Generate a secure token
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```
Vercel automatically includes CRON_SECRET as a Bearer token in the Authorization header when triggering cron jobs. The route handler shown above already validates this. Without this check, anyone who discovers the URL can trigger your cron job — and trust me, bots will find it eventually.
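The strict-equality check shown earlier is fine for most apps, but plain string comparison can in principle leak timing information about the secret. If you want a constant-time comparison, Node's `crypto.timingSafeEqual` works well. Here's a small helper; the function name and how you wire it into your route are my own convention, not part of Vercel's setup:

```typescript
import { timingSafeEqual } from 'crypto';

// Hypothetical helper: compare the Authorization header against the expected
// value in constant time, avoiding a timing side channel on the secret.
export function isAuthorizedCron(
  authHeader: string | null,
  secret: string
): boolean {
  if (!authHeader) return false;
  const expected = Buffer.from(`Bearer ${secret}`);
  const received = Buffer.from(authHeader);
  // timingSafeEqual throws when lengths differ, so check length first
  if (expected.length !== received.length) return false;
  return timingSafeEqual(expected, received);
}
```

In the route handler, you'd swap the `authHeader !== ...` comparison for `!isAuthorizedCron(authHeader, process.env.CRON_SECRET!)`.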
Vercel Cron Limitations
Be aware of these constraints when using Vercel Cron:
- Hobby plan: Maximum 2 cron jobs, once-daily minimum frequency, and timing is only guaranteed at the hour level (a job scheduled for 15:00 may run anytime between 15:00 and 15:59)
- Pro plan: Up to 40 cron jobs with minute-level scheduling precision
- Function duration: Default timeout is 10 seconds (Hobby) or 15 seconds (Pro). You can increase this to 60 seconds (Hobby) or 300 seconds (Pro) using the `maxDuration` route config. With Fluid Compute enabled, paid plans can go up to 14 minutes
- Production only: Cron jobs only execute on production deployments, not preview or development
- No redirects: Cron jobs don't follow HTTP 3xx redirects
To increase the timeout for a cron job route handler:
```typescript
// app/api/cron/heavy-task/route.ts
import { NextRequest } from 'next/server';

export const maxDuration = 300; // 5 minutes (Pro plan)

export async function GET(request: NextRequest) {
  // Long-running task...
}
```
Upstash QStash: Serverless Message Queue
When you need more than what Vercel Cron provides — delayed jobs, retries, concurrency control, or the ability to trigger background work from within your application code — Upstash QStash is worth a serious look. It's an HTTP-based messaging and scheduling solution designed specifically for serverless environments.
How QStash Works
The concept is pretty straightforward: you send QStash an HTTP request describing what to do (the target URL, payload, schedule, and retry policy), and QStash sends an HTTP request to your endpoint at the right time. Your Next.js route handler receives the request, does the work, and returns a response. QStash handles delivery guarantees, retries, and scheduling on its end.
Setting Up QStash
Install the QStash SDK and configure your environment variables:
```bash
npm install @upstash/qstash
```
Add these variables to your .env.local (grab the values from the Upstash dashboard):
```bash
QSTASH_TOKEN=your_qstash_token
QSTASH_CURRENT_SIGNING_KEY=your_current_signing_key
QSTASH_NEXT_SIGNING_KEY=your_next_signing_key
```
Publishing a Scheduled Message
Create a route handler that publishes a message to QStash:
```typescript
// app/api/schedule-report/route.ts
import { Client } from '@upstash/qstash';
import { NextResponse } from 'next/server';

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

export async function POST(request: Request) {
  const { reportType, userId } = await request.json();

  // Schedule a one-time job with a 5-minute delay
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/workers/generate-report`,
    body: { reportType, userId },
    delay: 300, // 5 minutes in seconds
    retries: 3,
  });

  return NextResponse.json({ status: 'scheduled' });
}
```
Creating the Worker Endpoint
The worker endpoint receives and processes the message. Use the QStash Receiver to verify the request signature (at the time of writing, the SDK also ships a verifySignatureAppRouter wrapper in @upstash/qstash/nextjs that handles this for you; the manual version below makes the mechanics visible):
```typescript
// app/api/workers/generate-report/route.ts
import { Receiver } from '@upstash/qstash';
import { NextRequest, NextResponse } from 'next/server';
import { generateAndEmailReport } from '@/lib/reports'; // your report helper

const receiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY!,
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY!,
});

export async function POST(request: NextRequest) {
  const body = await request.text();
  const signature = request.headers.get('upstash-signature') || '';

  // Verify the request is from QStash
  const isValid = await receiver.verify({
    body,
    signature,
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/workers/generate-report`,
  });

  if (!isValid) {
    return NextResponse.json({ error: 'Invalid signature' }, { status: 401 });
  }

  const { reportType, userId } = JSON.parse(body);

  // Do the actual work
  await generateAndEmailReport(reportType, userId);

  return NextResponse.json({ success: true });
}
```
Recurring Schedules with QStash
QStash also supports cron-style recurring schedules, which gives you an alternative to Vercel Cron with more features:
```typescript
// Schedule a recurring job
await qstash.schedules.create({
  destination: `${process.env.NEXT_PUBLIC_APP_URL}/api/workers/daily-sync`,
  cron: '0 6 * * *', // Daily at 6 AM
  retries: 3,
  body: JSON.stringify({ source: 'external-api' }),
  headers: { 'Content-Type': 'application/json' },
});
```
Trigger.dev: Full-Featured Background Job Platform
For applications that need durable execution, complex workflows, automatic retries with backoff, and real-time job status tracking, Trigger.dev is in a league of its own. It provides managed infrastructure purpose-built for background tasks in TypeScript projects.
Initial Setup
The quickest way to get started is via the CLI:
```bash
npx trigger.dev@latest init
```
This creates a /trigger directory with an example task, installs the SDK, and generates a trigger.config.ts configuration file. Add your secret key to .env.local:
```bash
TRIGGER_SECRET_KEY=your_trigger_secret_key
```
Defining a Background Task
Tasks are defined in the /trigger directory and automatically synced with the Trigger.dev platform:
```typescript
// trigger/send-welcome-email.ts
import { task } from '@trigger.dev/sdk/v3';
import { sendEmail } from '@/lib/email';

export const sendWelcomeEmail = task({
  id: 'send-welcome-email',
  retry: {
    maxAttempts: 3,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 10000,
    factor: 2, // Exponential backoff
  },
  run: async (payload: { userId: string; email: string; name: string }) => {
    await sendEmail({
      to: payload.email,
      subject: `Welcome, ${payload.name}!`,
      template: 'welcome',
      data: { name: payload.name },
    });

    return { sent: true, email: payload.email };
  },
});
```
Triggering from a Server Action
You can trigger background tasks from Server Actions, Route Handlers, or anywhere on the server side:
```typescript
// app/actions/register.ts
'use server';

import { tasks } from '@trigger.dev/sdk/v3';
import type { sendWelcomeEmail } from '@/trigger/send-welcome-email';
import { createUser } from '@/lib/users'; // your user-creation helper

export async function registerUser(formData: FormData) {
  const user = await createUser(formData);

  // Trigger background task — returns immediately
  await tasks.trigger<typeof sendWelcomeEmail>('send-welcome-email', {
    userId: user.id,
    email: user.email,
    name: user.name,
  });

  return { success: true, userId: user.id };
}
```
Scheduled Tasks with Trigger.dev
You can define cron-scheduled tasks directly in your task definitions:
```typescript
// trigger/daily-metrics.ts
import { schedules } from '@trigger.dev/sdk/v3';
import { db } from '@/lib/db';
import { metrics } from '@/lib/db/schema'; // your metrics table definition

export const dailyMetrics = schedules.task({
  id: 'daily-metrics-collection',
  cron: '0 0 * * *', // Midnight UTC
  run: async () => {
    const activeUsers = await db.query.users.findMany({
      where: (users, { gte }) =>
        gte(users.lastActiveAt, new Date(Date.now() - 86400000)), // last 24h
    });

    await db.insert(metrics).values({
      date: new Date(),
      activeUserCount: activeUsers.length,
      type: 'daily',
    });

    return { collected: true, activeUsers: activeUsers.length };
  },
});
```
Self-Hosted: node-cron for Serverful Deployments
If you deploy your Next.js application to a traditional server — a VPS, a Docker container, or a Kubernetes pod — you've got a long-running Node.js process that supports in-process cron scheduling. This is honestly the simplest and cheapest approach when your infrastructure supports it.
Setting Up node-cron
```bash
npm install node-cron
```
Create a cron initialization module:
```typescript
// lib/cron.ts
import cron from 'node-cron';
import { cleanupExpiredSessions } from '@/lib/tasks/cleanup';
import { syncExternalData } from '@/lib/tasks/sync';
import { sendWeeklyDigest } from '@/lib/tasks/email';

export function initCronJobs() {
  // Clean up expired sessions every hour
  cron.schedule('0 * * * *', async () => {
    console.log('[Cron] Cleaning up expired sessions');
    try {
      const result = await cleanupExpiredSessions();
      console.log(`[Cron] Removed ${result.count} expired sessions`);
    } catch (error) {
      console.error('[Cron] Session cleanup failed:', error);
    }
  });

  // Sync external data every 6 hours
  cron.schedule('0 */6 * * *', async () => {
    console.log('[Cron] Syncing external data');
    await syncExternalData();
  });

  // Send weekly digest every Monday at 9 AM
  cron.schedule(
    '0 9 * * 1',
    async () => {
      console.log('[Cron] Sending weekly digest');
      await sendWeeklyDigest();
    },
    { timezone: 'America/New_York' }
  );

  console.log('[Cron] All jobs registered');
}
```
Integrating with a Custom Server
For a custom server setup using output: 'standalone', initialize cron jobs when the server starts:
```typescript
// server.ts
import { createServer } from 'http';
import { parse } from 'url';
import next from 'next';
import { initCronJobs } from './lib/cron';

const dev = process.env.NODE_ENV !== 'production';
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url!, true);
    handle(req, res, parsedUrl);
  }).listen(3000, () => {
    console.log('Server running on port 3000');
    initCronJobs(); // Start cron jobs after server is ready
  });
});
```
Important: This approach only works in environments where the Node.js process runs continuously. It will silently fail on Vercel, Netlify, or any serverless platform.
GitHub Actions: Free External Scheduler
For simple scheduled tasks that don't require real-time responsiveness, GitHub Actions is a surprisingly good option. It's free (for public repos and generous on private ones) and works with any deployment platform because it just calls your endpoint over HTTP.
```yaml
# .github/workflows/daily-sync.yml
name: Daily Data Sync

on:
  schedule:
    - cron: '0 6 * * *' # Daily at 6 AM UTC
  workflow_dispatch: # Allow manual trigger

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger sync endpoint
        # --fail makes the step (and the workflow) fail on HTTP error responses
        run: |
          curl --fail -X POST \
            -H "Authorization: Bearer ${{ secrets.CRON_SECRET }}" \
            -H "Content-Type: application/json" \
            "${{ secrets.APP_URL }}/api/cron/sync-data"
```
One thing to keep in mind: GitHub Actions scheduled workflows can be delayed by up to 15 minutes during periods of high demand on GitHub's infrastructure. For most background tasks like data syncing or cleanup, this is perfectly fine.
Choosing the Right Approach
With so many options available, here's a quick decision framework based on your deployment model and what you actually need:
- Post-response side effects (logging, analytics, email confirmations): Use the `after()` API. It's built-in, requires no external services, and doesn't block the user response.
- Simple scheduled tasks on Vercel (daily cleanup, weekly emails): Use Vercel Cron Jobs. Zero setup cost, native integration, and sufficient for most applications.
- Deferred or delayed jobs from application code (process file after upload, send email in 5 minutes): Use Upstash QStash. It provides retry logic, delivery guarantees, and works on any serverless platform.
- Complex workflows with observability (multi-step pipelines, long-running tasks, real-time status): Use Trigger.dev. It offers the richest feature set for background job orchestration.
- Self-hosted deployments (Docker, VPS, Kubernetes): Use `node-cron` or `node-schedule` directly. No external dependencies, no cost, full control.
- Budget-conscious teams needing external triggers: Use GitHub Actions. Free for public repos and generous free tier for private repos.
In my experience, most teams start with Vercel Cron for the basics and then add QStash or Trigger.dev as their needs grow. You don't have to pick just one — these tools complement each other well.
Real-World Patterns and Best Practices
Idempotent Job Design
Cron jobs and background tasks can (and will) be triggered more than once due to retries, infrastructure issues, or overlapping executions. Design every job to be idempotent — running the same job twice with the same input should produce the same result:
```typescript
// app/api/cron/send-digest/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { eq } from 'drizzle-orm';
import { db } from '@/lib/db';
import { digestLogs } from '@/lib/db/schema'; // your digest-log table
import { sendDigestToAllUsers } from '@/lib/tasks/email'; // your digest helper

export async function GET(request: NextRequest) {
  // Check if digest was already sent today
  const today = new Date().toISOString().split('T')[0];
  const existing = await db.query.digestLogs.findFirst({
    where: eq(digestLogs.date, today),
  });

  if (existing) {
    return NextResponse.json({
      skipped: true,
      reason: 'Digest already sent today',
    });
  }

  // Send the digest and log it (a unique constraint on `date` guards
  // against the rare race where two invocations pass the check together)
  await sendDigestToAllUsers();
  await db.insert(digestLogs).values({ date: today, sentAt: new Date() });

  return NextResponse.json({ success: true });
}
```
Breaking Up Long-Running Tasks
On serverless platforms with strict timeouts, you'll want to split large jobs into smaller chunks. Process a batch per invocation and track progress:
```typescript
// app/api/cron/process-queue/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { asc, eq } from 'drizzle-orm';
import { db } from '@/lib/db';
import { queue } from '@/lib/db/schema'; // your queue table
import { processItem } from '@/lib/tasks/process'; // your per-item worker

export const maxDuration = 60;

export async function GET(request: NextRequest) {
  const BATCH_SIZE = 50;

  const pendingItems = await db.query.queue.findMany({
    where: eq(queue.status, 'pending'),
    limit: BATCH_SIZE,
    orderBy: asc(queue.createdAt),
  });

  let processed = 0;
  for (const item of pendingItems) {
    await processItem(item);
    await db.update(queue)
      .set({ status: 'completed' })
      .where(eq(queue.id, item.id));
    processed++;
  }

  return NextResponse.json({
    processed,
    // A full batch suggests more items are still pending
    hasMore: pendingItems.length === BATCH_SIZE,
  });
}
```
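The loop above processes items one at a time, which is the safest default. When items are independent and you want more throughput within the timeout, a small concurrency-limited runner helps. This is a generic sketch of my own, not a Next.js API:

```typescript
// Run an async worker over items with at most `limit` in flight at once.
// Results come back in input order; a rejected worker rejects the whole run.
export async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  async function lane(): Promise<void> {
    while (next < items.length) {
      const index = next++; // claimed synchronously, so lanes never collide
      results[index] = await worker(items[index]);
    }
  }

  // Start up to `limit` lanes that pull items until the list is exhausted
  const lanes = Array.from({ length: Math.min(limit, items.length) }, lane);
  await Promise.all(lanes);
  return results;
}
```

In the route above that would look like `await mapWithConcurrency(pendingItems, 5, processItem)`; keep the limit modest so you don't exhaust database connections.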
Error Handling and Alerting
Silent cron job failures are dangerous. I've seen teams go weeks without realizing a critical background job stopped running. Always add error handling and integrate with a monitoring service:
```typescript
// lib/cron-wrapper.ts
export async function withCronMonitoring(
  jobName: string,
  handler: () => Promise<unknown>
) {
  const startTime = Date.now();

  try {
    const result = await handler();
    const duration = Date.now() - startTime;
    console.log(`[Cron] ${jobName} completed in ${duration}ms`);
    return { success: true, duration, result };
  } catch (error) {
    const duration = Date.now() - startTime;
    console.error(`[Cron] ${jobName} failed after ${duration}ms:`, error);

    // Send alert to your monitoring service; swallow alerting failures
    // so they never mask the original job error
    if (process.env.ALERT_WEBHOOK_URL) {
      await fetch(process.env.ALERT_WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          job: jobName,
          error: error instanceof Error ? error.message : 'Unknown error',
          duration,
          timestamp: new Date().toISOString(),
        }),
      }).catch((alertError) => {
        console.error('[Cron] Failed to send alert:', alertError);
      });
    }

    throw error;
  }
}
```
Monitoring and Debugging Cron Jobs
Vercel Dashboard
On Vercel, navigate to your project's Settings > Crons to view all registered cron jobs, their schedules, and execution history. The Usage tab shows invocation counts and duration metrics. For detailed logs, check the Functions tab where each cron invocation shows up as a function execution.
Structured Logging
Add structured metadata to your cron job logs to make filtering and debugging easier (you'll thank yourself later):
```typescript
function cronLog(jobName: string, data: Record<string, unknown>) {
  console.log(JSON.stringify({
    type: 'cron',
    job: jobName,
    timestamp: new Date().toISOString(),
    ...data,
  }));
}
```
Health Check Endpoints
Create a dedicated health check route that reports the status of your scheduled jobs:
```typescript
// app/api/health/cron/route.ts
import { NextResponse } from 'next/server';
import { desc } from 'drizzle-orm';
import { db } from '@/lib/db';
import { cronLogs } from '@/lib/db/schema'; // your cron-log table

export async function GET() {
  const jobs = await db.query.cronLogs.findMany({
    orderBy: desc(cronLogs.executedAt),
    limit: 10,
  });

  const lastRun = jobs[0];
  const isHealthy = lastRun &&
    // Healthy if something ran within the last 24 hours
    Date.now() - new Date(lastRun.executedAt).getTime() < 86_400_000;

  return NextResponse.json({
    healthy: isHealthy,
    lastExecution: lastRun?.executedAt,
    recentJobs: jobs,
  }, { status: isHealthy ? 200 : 503 });
}
```
Frequently Asked Questions
Can I use node-cron with Vercel or serverless platforms?
No. Libraries like node-cron and node-schedule require a continuously running Node.js process. In serverless environments, the process starts per request and is destroyed afterward. A cron job registered with node-cron will execute once during that request and then disappear — no error, no warning, nothing. Use Vercel Cron Jobs, QStash, or an external trigger like GitHub Actions instead.
How do I test Vercel Cron Jobs locally during development?
Vercel cron jobs only run on production deployments. During local development, test your cron route handlers by making direct HTTP requests. Use curl or a tool like Thunder Client to call the endpoint with the appropriate authorization header: `curl -H "Authorization: Bearer your-secret" http://localhost:3000/api/cron/your-job`. For QStash, you'll need a public URL, so use a tunneling tool like ngrok.
What's the difference between after() and a cron job?
The after() API runs code immediately after a specific user request finishes, making it ideal for post-response side effects like logging or sending transactional emails. Cron jobs run on a fixed schedule regardless of user activity and are suited for periodic maintenance tasks. They serve different purposes: after() is request-triggered, cron jobs are time-triggered.
How many cron jobs can I run on Vercel for free?
The Vercel Hobby (free) plan allows up to 2 cron jobs with a minimum frequency of once per day. Timing is guaranteed at the hour level, not the minute level. The Pro plan supports up to 40 cron jobs with minute-level precision. If you need more scheduled jobs on the free plan, consider using GitHub Actions or Upstash QStash as alternatives.
What happens if my cron job takes longer than the function timeout?
The function gets terminated and returns a timeout error. On the Hobby plan, the default timeout is 10 seconds (configurable up to 60 seconds). On the Pro plan, it defaults to 15 seconds and can be increased to 300 seconds, or up to 14 minutes with Fluid Compute enabled. For tasks that exceed these limits, break the work into smaller batches, use a queue-based approach with QStash, or move to Trigger.dev which supports long-running tasks natively.