Real-time features aren't optional anymore. Users expect instant notifications, live progress updates, and dashboards that refresh without a page reload. While WebSockets tend to get all the attention, Server-Sent Events (SSE) offer a dramatically simpler alternative for the most common real-time patterns — and honestly, they work beautifully with the Next.js App Router.
In this guide, you'll build three production-ready real-time features using SSE in Next.js: a notification system, a long-running task progress tracker, and a live data dashboard. Along the way, you'll learn the correct route handler patterns, client-side consumption with EventSource, production hardening with heartbeats and reconnection logic, and how to deal with Vercel's deployment limitations.
## What Are Server-Sent Events and When Should You Use Them?
Server-Sent Events is a web standard (part of the HTML spec) that lets servers push data to browsers over a single, long-lived HTTP connection. Unlike WebSockets, which give you full-duplex (bidirectional) communication, SSE is a one-way channel — the server sends, the client listens.
This simplicity is actually a strength.
SSE uses regular HTTP, which means it works through firewalls, proxies, and load balancers without any special configuration. The browser's built-in EventSource API handles connection management, automatic reconnection, and event parsing for you. No extra libraries needed.
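That built-in handling is possible because the wire format is just newline-delimited text. Here's a rough sketch of what the browser does with each chunk; `parseSSE` is an illustrative name, not a real API, and real parsing also handles `id:`, `retry:`, comments, and events split across chunk boundaries:

```typescript
// Simplified sketch of the parsing EventSource does internally:
// split the stream on blank lines, then read the field prefixes.
function parseSSE(chunk: string): Array<{ event: string; data: string }> {
  return chunk
    .split("\n\n")
    .filter((block) => block.trim() !== "")
    .map((block) => {
      let event = "message"; // the default event type
      const data: string[] = [];
      for (const line of block.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      }
      return { event, data: data.join("\n") };
    });
}
```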
### SSE vs WebSockets: A Practical Comparison
| Feature | Server-Sent Events | WebSockets |
|---|---|---|
| Direction | Server → Client only | Full duplex (both ways) |
| Protocol | Standard HTTP | Custom WS/WSS protocol |
| Automatic reconnect | Built-in | Must implement manually |
| Serverless compatible | Yes (with caveats) | No — requires persistent server |
| Complexity | Low | Higher |
| Binary data | Text only (JSON strings) | Text and binary |
| Best for | Notifications, feeds, progress, dashboards | Chat, games, collaborative editing |
Choose SSE when your app only needs server-to-client updates — notifications, activity feeds, progress bars, live dashboards, stock tickers, or AI/LLM response streaming. Choose WebSockets when you need bidirectional communication — multiplayer games, collaborative document editing, or real-time chat where both parties send messages frequently.
## Setting Up SSE in the Next.js App Router
The Next.js App Router uses Web Standard Request and Response APIs in route handlers. So your SSE implementation relies on the ReadableStream or TransformStream APIs rather than the classic Node.js res.write() pattern. It's a bit different if you're coming from Express, but the concept is the same.
### The Basic SSE Route Handler
Create a new route handler at app/api/sse/route.ts. The critical pattern here is: construct a ReadableStream, set the correct SSE headers, and return the response immediately so Next.js doesn't buffer the output.
```typescript
// app/api/sse/route.ts
export const dynamic = "force-dynamic";

export async function GET(request: Request) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      // Send an initial connection event
      controller.enqueue(
        encoder.encode("event: connected\ndata: {}\n\n")
      );

      // Send updates every 3 seconds
      const interval = setInterval(() => {
        const data = JSON.stringify({
          timestamp: new Date().toISOString(),
          message: "Hello from the server",
        });
        controller.enqueue(encoder.encode(`data: ${data}\n\n`));
      }, 3000);

      // Clean up when the client disconnects
      request.signal.addEventListener("abort", () => {
        clearInterval(interval);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
      "X-Accel-Buffering": "no",
    },
  });
}
```
A few key details worth calling out:

- `export const dynamic = "force-dynamic"` prevents Next.js and Vercel from caching the route response. Skip this and you'll wonder why nothing streams.
- The `X-Accel-Buffering: no` header disables NGINX proxy buffering — essential when you're deploying behind a reverse proxy.
- The `request.signal` abort listener cleans up resources when the client disconnects. Without it, you'll leak memory over time.
- Each SSE message follows the format `data: <payload>\n\n` — that double newline is required to delimit events.
### The SSE Message Format

SSE messages support several optional fields beyond `data`. (The lines starting with a colon below are SSE comments, which is how you annotate — or heartbeat — a real stream.)

```text
: Named event with ID (enables resume on reconnect)
id: 42
event: notification
data: {"title":"New comment","body":"Someone replied to your post"}

: Default message event (no event field)
data: {"timestamp":"2026-03-01T12:00:00Z"}

: Multi-line data
data: line one
data: line two

: Comment line (ignored by EventSource, useful for heartbeats)
: this is a heartbeat comment

: Retry interval (tells the client to reconnect after N ms)
retry: 5000
```
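Hand-writing these frames is easy to get wrong (a forgotten blank line silently merges two events), so it can help to wrap the format in a tiny helper; `sseFrame` is my own name, not a library function:

```typescript
// Build one SSE frame from optional id/event fields plus a data payload.
// Multi-line payloads become one `data:` line per line, per the spec,
// and the trailing blank line terminates the event.
export function sseFrame(opts: {
  id?: string | number;
  event?: string;
  data: string;
}): string {
  const lines: string[] = [];
  if (opts.id !== undefined) lines.push(`id: ${opts.id}`);
  if (opts.event) lines.push(`event: ${opts.event}`);
  for (const line of opts.data.split("\n")) {
    lines.push(`data: ${line}`);
  }
  return lines.join("\n") + "\n\n";
}
```

For example, `sseFrame({ event: "notification", data: "{}" })` produces `event: notification\ndata: {}\n\n`.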
## Building a Real-Time Notification System
Let's build something practical — a notification system where the server pushes notifications as they happen, and the client renders them with a notification badge.
### Step 1: The Notification SSE Endpoint
In a production app, notifications would come from a database or message queue like Redis Pub/Sub. For this example, we'll simulate notifications with a polling pattern against a data store.
```typescript
// app/api/notifications/stream/route.ts
import { getUnreadNotifications } from "@/lib/notifications";

export const dynamic = "force-dynamic";

export async function GET(request: Request) {
  const encoder = new TextEncoder();

  // EventSource can't send custom headers, so identify the user from
  // the query string here (in production, prefer a session cookie).
  const userId = new URL(request.url).searchParams.get("userId");
  if (!userId) {
    return new Response("Unauthorized", { status: 401 });
  }

  const stream = new ReadableStream({
    start(controller) {
      let lastCheckedId = "0";

      const sendEvent = (event: string, data: unknown) => {
        controller.enqueue(
          encoder.encode(
            `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`
          )
        );
      };

      // Poll for new notifications every 2 seconds
      const interval = setInterval(async () => {
        try {
          const notifications = await getUnreadNotifications(
            userId,
            lastCheckedId
          );
          for (const notification of notifications) {
            sendEvent("notification", {
              id: notification.id,
              title: notification.title,
              body: notification.body,
              createdAt: notification.createdAt,
            });
            lastCheckedId = notification.id;
          }
        } catch (error) {
          console.error("Notification polling error:", error);
        }
      }, 2000);

      // Send heartbeat every 15 seconds to keep connection alive
      const heartbeat = setInterval(() => {
        controller.enqueue(encoder.encode(": heartbeat\n\n"));
      }, 15000);

      // Clean up on client disconnect
      request.signal.addEventListener("abort", () => {
        clearInterval(interval);
        clearInterval(heartbeat);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
      "X-Accel-Buffering": "no",
    },
  });
}
```
### Step 2: The Client-Side Notification Hook
Now for the fun part. Let's create a reusable React hook that manages the EventSource connection with proper cleanup and reconnection logic.
```typescript
// hooks/use-notifications.ts
"use client";

import { useEffect, useRef, useState, useCallback } from "react";

interface Notification {
  id: string;
  title: string;
  body: string;
  createdAt: string;
}

export function useNotifications(userId: string) {
  const [notifications, setNotifications] = useState<Notification[]>([]);
  const [isConnected, setIsConnected] = useState(false);
  const eventSourceRef = useRef<EventSource | null>(null);
  const retryCountRef = useRef(0);
  const maxRetries = 5;

  const connect = useCallback(() => {
    // Close existing connection
    eventSourceRef.current?.close();

    // EventSource can't send custom headers, so pass the user ID in
    // the URL (or rely on a session cookie in production).
    const url = `/api/notifications/stream?userId=${encodeURIComponent(userId)}`;
    const es = new EventSource(url);
    eventSourceRef.current = es;

    es.addEventListener("notification", (event) => {
      const notification: Notification = JSON.parse(event.data);
      setNotifications((prev) => [notification, ...prev]);
    });

    es.addEventListener("open", () => {
      setIsConnected(true);
      retryCountRef.current = 0; // Reset retry count on success
    });

    es.addEventListener("error", () => {
      setIsConnected(false);
      es.close();

      // Exponential backoff: 1s, 2s, 4s, 8s, 16s
      if (retryCountRef.current < maxRetries) {
        const delay = Math.pow(2, retryCountRef.current) * 1000;
        retryCountRef.current += 1;
        setTimeout(connect, delay);
      }
    });
  }, [userId]);

  useEffect(() => {
    connect();
    return () => {
      eventSourceRef.current?.close();
    };
  }, [connect]);

  const clearNotification = useCallback((id: string) => {
    setNotifications((prev) => prev.filter((n) => n.id !== id));
  }, []);

  return { notifications, isConnected, clearNotification };
}
```
### Step 3: The Notification Bell Component

```tsx
// components/notification-bell.tsx
"use client";

// BellIcon from Heroicons here; swap in any icon library you prefer
import { BellIcon } from "@heroicons/react/24/outline";
import { useNotifications } from "@/hooks/use-notifications";

export function NotificationBell({ userId }: { userId: string }) {
  const { notifications, isConnected, clearNotification } =
    useNotifications(userId);

  return (
    <div className="relative">
      <button className="relative p-2">
        <BellIcon className="h-6 w-6" />
        {notifications.length > 0 && (
          <span className="absolute -top-1 -right-1 bg-red-500 text-white text-xs rounded-full h-5 w-5 flex items-center justify-center">
            {notifications.length}
          </span>
        )}
      </button>
      {!isConnected && (
        <span className="text-xs text-yellow-600">Reconnecting...</span>
      )}
      <div className="absolute right-0 mt-2 w-80 bg-white shadow-lg rounded-lg">
        {notifications.map((n) => (
          <div key={n.id} className="p-3 border-b">
            <p className="font-medium">{n.title}</p>
            <p className="text-sm text-gray-600">{n.body}</p>
            <button onClick={() => clearNotification(n.id)}>
              Dismiss
            </button>
          </div>
        ))}
      </div>
    </div>
  );
}
```
## Progress Tracking for Long-Running Tasks
SSE is perfect for showing progress on tasks that take seconds or minutes to complete — file uploads, data imports, report generation, or deployment pipelines. Instead of polling an endpoint over and over, the server just streams progress updates as they happen. Way cleaner.
### The Progress Stream Endpoint

```typescript
// app/api/tasks/[taskId]/progress/route.ts
import { getTaskStatus } from "@/lib/tasks";

export const dynamic = "force-dynamic";

export async function GET(
  request: Request,
  { params }: { params: Promise<{ taskId: string }> }
) {
  const { taskId } = await params;
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      const sendProgress = (data: unknown) => {
        controller.enqueue(
          encoder.encode(
            `event: progress\ndata: ${JSON.stringify(data)}\n\n`
          )
        );
      };

      const checkProgress = async () => {
        try {
          const status = await getTaskStatus(taskId);
          sendProgress({
            taskId,
            progress: status.progress,
            stage: status.stage,
            message: status.message,
          });

          if (status.progress >= 100 || status.stage === "failed") {
            // Send completion event and close the stream
            controller.enqueue(
              encoder.encode(
                `event: complete\ndata: ${JSON.stringify({
                  taskId,
                  result: status.result,
                  stage: status.stage,
                })}\n\n`
              )
            );
            clearInterval(interval);
            clearInterval(heartbeat);
            controller.close();
            return;
          }
        } catch (error) {
          console.error("Progress check failed:", error);
        }
      };

      // Heartbeats keep the connection alive. Declared before the first
      // checkProgress call so the completion branch can clear it too.
      const heartbeat = setInterval(() => {
        controller.enqueue(encoder.encode(": heartbeat\n\n"));
      }, 15000);

      // Check progress every second
      const interval = setInterval(checkProgress, 1000);
      checkProgress(); // Send initial status immediately

      request.signal.addEventListener("abort", () => {
        clearInterval(interval);
        clearInterval(heartbeat);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
      "X-Accel-Buffering": "no",
    },
  });
}
```
### The Progress Bar Component

```tsx
// components/task-progress.tsx
"use client";

import { useEffect, useState } from "react";

interface ProgressState {
  progress: number;
  stage: string;
  message: string;
  result?: unknown;
}

export function TaskProgress({ taskId }: { taskId: string }) {
  const [state, setState] = useState<ProgressState>({
    progress: 0,
    stage: "starting",
    message: "Initializing...",
  });
  const [isDone, setIsDone] = useState(false);

  useEffect(() => {
    const es = new EventSource(`/api/tasks/${taskId}/progress`);

    es.addEventListener("progress", (event) => {
      setState(JSON.parse(event.data));
    });

    es.addEventListener("complete", (event) => {
      const data = JSON.parse(event.data);
      setState((prev) => ({ ...prev, ...data, progress: 100 }));
      setIsDone(true);
      es.close();
    });

    es.addEventListener("error", () => {
      es.close();
    });

    return () => es.close();
  }, [taskId]);

  return (
    <div className="space-y-2">
      <div className="flex justify-between text-sm">
        <span>{state.message}</span>
        <span>{Math.round(state.progress)}%</span>
      </div>
      <div className="w-full bg-gray-200 rounded-full h-3">
        <div
          className="bg-blue-600 h-3 rounded-full transition-all duration-300"
          style={{ width: `${state.progress}%` }}
        />
      </div>
      {isDone && (
        <p className="text-green-600 font-medium">
          Task completed successfully
        </p>
      )}
    </div>
  );
}
```
## Building a Live Data Dashboard
For dashboards that display metrics, analytics, or monitoring data, SSE lets you push updates the moment they happen. No polling overhead, no stale data — just a truly real-time experience for your users.
### The Dashboard Stream Endpoint

```typescript
// app/api/dashboard/stream/route.ts
import { getDashboardMetrics } from "@/lib/metrics";

export const dynamic = "force-dynamic";

export async function GET(request: Request) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      let eventId = 0;

      const sendMetrics = async () => {
        try {
          const metrics = await getDashboardMetrics();
          eventId += 1;
          controller.enqueue(
            encoder.encode(
              `id: ${eventId}\nevent: metrics\ndata: ${JSON.stringify(
                metrics
              )}\n\n`
            )
          );
        } catch (error) {
          console.error("Metrics fetch failed:", error);
        }
      };

      // Push fresh metrics every 5 seconds
      const interval = setInterval(sendMetrics, 5000);
      sendMetrics(); // Send initial data immediately

      const heartbeat = setInterval(() => {
        controller.enqueue(encoder.encode(": heartbeat\n\n"));
      }, 15000);

      request.signal.addEventListener("abort", () => {
        clearInterval(interval);
        clearInterval(heartbeat);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
      "X-Accel-Buffering": "no",
    },
  });
}
```
### A Reusable `useSSE` Hook
You've probably noticed by now that the SSE consumption pattern is basically the same across all three features. So let's extract a generic hook that you can reuse everywhere.
```typescript
// hooks/use-sse.ts
"use client";

import { useEffect, useRef, useState, useCallback } from "react";

interface UseSSEOptions {
  url: string;
  events: string[];
  maxRetries?: number;
}

export function useSSE<T>({ url, events, maxRetries = 5 }: UseSSEOptions) {
  const [data, setData] = useState<T | null>(null);
  const [isConnected, setIsConnected] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const esRef = useRef<EventSource | null>(null);
  const retryRef = useRef(0);

  // Keep the event list in a ref so callers can pass an inline array
  // literal without triggering a reconnect on every render.
  const eventsRef = useRef(events);
  eventsRef.current = events;

  const connect = useCallback(() => {
    esRef.current?.close();

    const es = new EventSource(url);
    esRef.current = es;

    eventsRef.current.forEach((eventName) => {
      es.addEventListener(eventName, (event) => {
        setData(JSON.parse(event.data));
        setError(null);
      });
    });

    es.addEventListener("open", () => {
      setIsConnected(true);
      retryRef.current = 0;
    });

    es.addEventListener("error", () => {
      setIsConnected(false);
      es.close();

      if (retryRef.current < maxRetries) {
        const delay = Math.pow(2, retryRef.current) * 1000;
        retryRef.current += 1;
        setError(`Connection lost. Retrying in ${delay / 1000}s...`);
        setTimeout(connect, delay);
      } else {
        setError("Connection failed after maximum retries.");
      }
    });
  }, [url, maxRetries]);

  useEffect(() => {
    connect();
    return () => esRef.current?.close();
  }, [connect]);

  return { data, isConnected, error };
}
```
Now consuming SSE anywhere in your app becomes a one-liner:
```typescript
// In any client component
const { data: metrics, isConnected } = useSSE<DashboardMetrics>({
  url: "/api/dashboard/stream",
  events: ["metrics"],
});
```
## Production Hardening: Heartbeats, Reconnection, and Cleanup
Getting SSE working in development is the easy part. Making it reliable in production? That's where things get interesting. There are several edge cases you'll want to handle.
### Server-Side Heartbeats

Proxies, load balancers, and cloud platforms often kill idle connections after 30 to 60 seconds. Heartbeats prevent this by sending periodic keep-alive messages that the EventSource API silently ignores.

```typescript
// Send a comment-style heartbeat every 15 seconds
const heartbeat = setInterval(() => {
  controller.enqueue(encoder.encode(": heartbeat\n\n"));
}, 15000);
```
The colon prefix (:) marks this as a comment in the SSE spec. The EventSource client ignores it completely, but it keeps the HTTP connection alive through any intermediate proxies. Simple and effective.
### Event IDs and Resumable Streams

By including an `id` field with each event, you enable the EventSource client to automatically send a `Last-Event-ID` header when it reconnects. Your server can then use this to resume the stream right where the client left off — no missed events.

```typescript
// Server: include an event ID
controller.enqueue(
  encoder.encode(`id: ${eventId}\ndata: ${payload}\n\n`)
);

// Server: read Last-Event-ID on reconnect
export async function GET(request: Request) {
  const lastEventId = request.headers.get("Last-Event-ID");
  // If lastEventId exists, fetch missed events from the database
  // and replay them before starting the live stream
}
```
### Graceful Cleanup on Client Disconnect

Always listen for the request abort signal to clean up timers, database connections, and other resources when a client navigates away or closes their browser. This one's non-negotiable.

```typescript
request.signal.addEventListener("abort", () => {
  clearInterval(pollingInterval);
  clearInterval(heartbeatInterval);
  controller.close();
});
```
## Deploying SSE on Vercel: Limitations and Workarounds
Vercel supports SSE streaming in both Serverless Functions and Edge Functions, but there are some important constraints you need to know about before deploying.
### Function Timeouts
Vercel enforces execution time limits: 10 seconds on the Hobby plan and 60 seconds on the Pro plan. For SSE streams that need to run longer than these limits, you've got two main options:
- Client-side auto-reconnect: Let the stream time out and rely on `EventSource`'s automatic reconnection. Combine this with heartbeats every 15 seconds and event IDs for resumability.
- Edge Runtime: Use `export const runtime = "edge"` for longer-lived connections, though keep in mind the Edge Runtime has its own constraints around supported APIs.
### No Persistent State Between Requests
Vercel's serverless architecture means each request gets its own isolated function instance. You can't store a map of connected clients in a global variable and expect it to persist across requests. For multi-client broadcasting, you'll need an external pub/sub service:
- Upstash Redis with pub/sub channels
- Pusher or Ably for managed real-time infrastructure
- Supabase Realtime for PostgreSQL-backed real-time
### The `force-dynamic` Directive
Always add `export const dynamic = "force-dynamic"` to your SSE route handlers. Without it, Vercel may cache or statically optimize the route, which completely breaks streaming. I've seen this trip up quite a few developers.
### Self-Hosting Advantages
If your application needs long-lived SSE connections with many concurrent clients, self-hosting on a VPS with a Node.js server removes all these limitations. You get persistent process memory for client connection maps, no function timeouts, and full control over your infrastructure. Docker with `output: "standalone"` makes deployment pretty straightforward.
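For reference, the standalone output mode is a one-line addition to your Next.js config (shown here as a `next.config.ts` sketch):

```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Emit a self-contained server bundle in .next/standalone,
  // which is what a minimal Docker image copies in
  output: "standalone",
};

export default nextConfig;
```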
## Scaling SSE with Redis Pub/Sub
For apps running on multiple server instances (or serverless functions), you need a shared message bus to broadcast events to all connected clients. Redis Pub/Sub is by far the most popular choice for this.
```typescript
// lib/redis-sse.ts
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

export async function publishEvent(channel: string, data: unknown) {
  await redis.publish(channel, JSON.stringify(data));
}
```
```typescript
// app/api/events/stream/route.ts
// Note: a REST-based client like @upstash/redis can publish, but it
// can't hold a long-lived subscription. Use a TCP client such as
// ioredis (on a dedicated connection) for the subscriber side.
import Redis from "ioredis";

export const dynamic = "force-dynamic";

export async function GET(request: Request) {
  const encoder = new TextEncoder();
  const subscriber = new Redis(process.env.REDIS_URL!);

  const stream = new ReadableStream({
    async start(controller) {
      await subscriber.subscribe("events");

      // Forward every Redis message to the SSE stream
      subscriber.on("message", (_channel, message) => {
        controller.enqueue(encoder.encode(`data: ${message}\n\n`));
      });

      const heartbeat = setInterval(() => {
        controller.enqueue(encoder.encode(": heartbeat\n\n"));
      }, 15000);

      request.signal.addEventListener("abort", () => {
        clearInterval(heartbeat);
        subscriber.quit();
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}
```
## Streaming AI and LLM Responses with SSE
One of the fastest-growing use cases for SSE in Next.js right now is streaming AI model responses. Rather than making your users stare at a loading spinner while the entire LLM response generates, you can stream tokens to the UI as they come in — creating that familiar ChatGPT-like experience.
### Using the Vercel AI SDK
The Vercel AI SDK gives you the simplest path to AI streaming in Next.js. It handles all the SSE complexity behind a clean API, and honestly, for most projects it's the right call.
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
```

```tsx
// components/chat.tsx
"use client";

import { useChat } from "@ai-sdk/react";

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}:</strong> {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
        />
      </form>
    </div>
  );
}
```
### Manual SSE Streaming Without the AI SDK
If you want full control or you're working with a provider that the AI SDK doesn't support, you can implement SSE streaming manually. It's more code, but sometimes that's what you need.
```typescript
// app/api/ai/stream/route.ts
export const dynamic = "force-dynamic";

export async function POST(request: Request) {
  const { prompt } = await request.json();
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      try {
        const response = await fetch(
          "https://api.openai.com/v1/chat/completions",
          {
            method: "POST",
            headers: {
              Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
              "Content-Type": "application/json",
            },
            body: JSON.stringify({
              model: "gpt-4o",
              messages: [{ role: "user", content: prompt }],
              stream: true,
            }),
          }
        );

        const reader = response.body!.getReader();

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          // OpenAI already emits SSE-formatted bytes (`data: {...}\n\n`),
          // so forward each chunk as-is rather than wrapping it in
          // another `data:` envelope.
          controller.enqueue(value);
        }

        controller.enqueue(
          encoder.encode("event: done\ndata: {}\n\n")
        );
        controller.close();
      } catch (error) {
        controller.error(error);
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache, no-transform",
      Connection: "keep-alive",
    },
  });
}
```
## Common Pitfalls and How to Avoid Them
### Buffered Responses in Production
This is the number one SSE issue developers run into: all events arrive at once instead of streaming incrementally. It happens when Next.js buffers the response until the handler function completes. The fix is to make sure your async work runs inside the `ReadableStream`'s `start()` callback and that the `Response` is returned immediately.
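To make the contrast concrete, here's a minimal sketch of the two shapes; `collectAllEvents` is a hypothetical stand-in for your async work, not a real API:

```typescript
// ✗ Buffered: all the work completes before the Response exists, so the
// client receives every event in one burst.
async function bufferedGET(collectAllEvents: () => Promise<string[]>) {
  const events = await collectAllEvents();
  return new Response(events.map((e) => `data: ${e}\n\n`).join(""));
}

// ✓ Streaming: the Response is created immediately; the async work runs
// inside start() and each event is enqueued as soon as it's ready.
function streamingGET(collectAllEvents: () => Promise<string[]>) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for (const e of await collectAllEvents()) {
        controller.enqueue(encoder.encode(`data: ${e}\n\n`));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```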
### Memory Leaks from Uncleared Intervals
Every `setInterval` in your SSE handler must be cleared when the client disconnects. Without proper cleanup, you'll accumulate orphaned timers that eat up memory and CPU. Always use `request.signal.addEventListener("abort", ...)` to tear down resources. Trust me on this one — it's bitten me more than once in production.
### EventSource Only Supports GET
The browser `EventSource` API only issues GET requests and can't attach custom headers (cookies are still sent automatically). If you need to send authentication tokens or POST data, use a two-step pattern: POST to create a session or token, then open an EventSource with the session ID as a query parameter.
```typescript
// Step 1: Create a session via POST
const res = await fetch("/api/sse/session", {
  method: "POST",
  body: JSON.stringify({ filters: userFilters }),
});
const { sessionId } = await res.json();

// Step 2: Open EventSource with session ID
const es = new EventSource(`/api/sse/stream?session=${sessionId}`);
```
### CORS and Cross-Origin SSE
If your SSE endpoint lives on a different origin than your frontend, you'll need to set the `withCredentials` option on `EventSource` and configure proper CORS headers on the server. Fortunately, this is rarely an issue with Next.js since route handlers share the same origin as your frontend.
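In case you do hit it, a sketch of both halves (the origin values are placeholders; `withCredentials` is a standard `EventSource` constructor option):

```typescript
// Client: opt in to sending cookies with the cross-origin request.
function openCrossOriginStream(url: string) {
  return new EventSource(url, { withCredentials: true });
}

// Server: the SSE route must return CORS headers that allow the
// calling origin and credentials (placeholder origin shown).
export const corsHeaders: Record<string, string> = {
  "Access-Control-Allow-Origin": "https://app.example.com",
  "Access-Control-Allow-Credentials": "true",
};
```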
## Frequently Asked Questions
### Can I use Server-Sent Events with Next.js middleware?
No. Next.js middleware runs on the Edge Runtime and is designed for short-lived request/response cycles like redirects, rewrites, and header modifications. SSE requires a long-lived connection that middleware can't maintain. Use route handlers (`app/api/.../route.ts`) for SSE endpoints instead.
### Do Server-Sent Events work on Vercel's free Hobby plan?
Yes, but with a 10-second function timeout. Your SSE stream will get cut off after 10 seconds. The practical workaround is to lean on EventSource's built-in automatic reconnection. Send event IDs with each message so clients can pick up where they left off after reconnecting. For longer-lived streams, upgrade to Vercel Pro (60-second timeout) or self-host.
### How many concurrent SSE connections can a Next.js server handle?
On a self-hosted Node.js server, you can typically handle thousands of concurrent SSE connections since each one is just an open HTTP request consuming minimal resources. On Vercel, each SSE connection uses a serverless function instance, so you're limited by your function concurrency quota. For high-concurrency scenarios, consider a dedicated real-time service like Pusher, Ably, or Upstash.
### What's the difference between SSE and React Suspense streaming?
React Suspense streaming (used with loading.tsx and <Suspense> boundaries) is a one-time progressive render of the initial page load. SSE is a persistent connection for ongoing real-time updates after the page has loaded. They solve completely different problems — and you can actually use them together. Suspense for the initial render, SSE for live updates afterward.
### Should I use SSE or the Vercel AI SDK for streaming LLM responses?
Use the Vercel AI SDK if your project uses a supported provider (OpenAI, Anthropic, Google, etc.) — it handles SSE, parsing, UI state, and error recovery for you. Go with raw SSE only when you need full control over the streaming protocol, are working with an unsupported provider, or want to avoid the extra dependency.