Self-Hosting Next.js with Docker: Standalone Output, Multi-Stage Builds, and Production Deployment

Learn how to self-host Next.js with Docker using standalone output and multi-stage builds. Covers health checks, ISR caching with Redis, Nginx reverse proxy, CI/CD with GitHub Actions, and environment variable management.

Vercel makes deploying Next.js ridiculously easy, but what happens when you need full control over your infrastructure? Maybe you're trying to cut hosting costs, or your company has strict compliance requirements that won't budge. Whatever the reason, self-hosting Next.js with Docker gives you a portable, reproducible production environment that runs anywhere — from a $5/month VPS to a full-blown Kubernetes cluster.

I've been running self-hosted Next.js apps in Docker for a while now, and honestly, it's more straightforward than most people think. This guide walks you through every step: configuring standalone output, writing a production-grade multi-stage Dockerfile, setting up health checks, handling ISR caching across multiple containers, managing environment variables, and wiring up a CI/CD pipeline.

So, let's dive in.

Why Self-Host Next.js Instead of Using Vercel?

Vercel is purpose-built for Next.js and the zero-config deploys are genuinely great. But self-hosting makes sense in quite a few scenarios:

  • Cost control — At scale, Vercel bills can climb fast. A dedicated VPS from providers like Hetzner or DigitalOcean starts at $4–6/month and can handle a surprising amount of traffic.
  • Data sovereignty — Regulatory or compliance requirements may mandate that your app runs in specific regions or on infrastructure you own.
  • Vendor independence — Self-hosting eliminates lock-in to any single platform. You can deploy to AWS, GCP, Azure, Railway, Fly.io, or bare metal.
  • Full-stack control — Run Next.js alongside databases, Redis, background workers, and other services in a single Docker Compose stack.
  • Custom networking — Integrate with private APIs, VPNs, or internal services that aren't reachable from Vercel's infrastructure.

Here's the thing that surprises a lot of people: a self-hosted Next.js app supports every feature that Vercel does — Server Components, Server Actions, middleware, ISR, image optimization, all of it. You're just managing the infrastructure yourself.

Configuring Standalone Output in Next.js

The first step is enabling standalone output mode. During next build, Next.js uses @vercel/nft to statically trace every import, require, and fs call to figure out exactly which files your application needs at runtime. The result is a self-contained .next/standalone directory with a minimal server.js entry point and only the node_modules your code actually uses.

// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
};

export default nextConfig;

That's it. This single setting typically shrinks your Docker image from over 1 GB to roughly 100–200 MB, because it eliminates thousands of unused dependency files.

What Standalone Output Includes (and What It Doesn't)

The standalone directory includes compiled application code, traced node_modules, and the auto-generated server.js. However, it does not include the public/ folder or .next/static/. You'll need to copy these manually in your Dockerfile (or serve them from a CDN).

This is intentional — Next.js assumes a CDN will handle static assets in production for optimal performance.
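You can sanity-check the standalone output locally before containerizing it: copy the two omitted directories into the standalone folder and run the generated server directly. The paths below assume the default build directory.

```shell
# After `next build` with output: 'standalone'
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static

# Run the self-contained server; no node_modules install needed
PORT=3000 node .next/standalone/server.js
```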

Writing a Production-Grade Multi-Stage Dockerfile

A multi-stage build separates dependency installation, the build process, and the runtime into isolated stages. Only the final stage makes it into your production image, keeping things lean and secure.

# Stage 1: Install all dependencies (dev deps are needed for the build)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci

# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build

# Stage 3: Production runtime
FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

# Create a non-root user for security
RUN addgroup --system --gid 1001 nodejs \
    && adduser --system --uid 1001 nextjs

# Copy the standalone output
COPY --from=builder /app/.next/standalone ./
# Copy static assets (not included in standalone by default)
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public

# Set correct ownership
RUN chown -R nextjs:nodejs /app

USER nextjs

EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

Key Decisions in This Dockerfile

  • Alpine base image — node:20-alpine is roughly 50 MB versus 350 MB for Debian-based images. If you hit native module compatibility issues (pretty rare with Next.js 15+), switch to node:20-slim.
  • Non-root user — Running as nextjs instead of root limits the blast radius if the container gets compromised. Always do this.
  • Telemetry disabled — NEXT_TELEMETRY_DISABLED=1 prevents anonymous usage data from being sent during the build and at runtime.
  • Explicit HOSTNAME — Setting HOSTNAME="0.0.0.0" ensures the server listens on all interfaces inside the container, not just localhost. Miss this and you'll spend an hour wondering why nothing connects.
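With the Dockerfile in place, a local build-and-run looks like this (the image tag is just an example):

```shell
# Build the image and confirm standalone output kept it small
docker build -t my-nextjs-app .
docker images my-nextjs-app

# Run it, injecting runtime variables rather than baking them in
docker run -p 3000:3000 -e DATABASE_URL=... my-nextjs-app
```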

Adding a .dockerignore File

Don't skip this step. Create a .dockerignore to exclude files that bloat the build context or could leak secrets:

node_modules
.next
.git
.env*
Dockerfile
docker-compose*.yml
README.md
.vscode
.idea

Image Optimization with Sharp

Next.js uses the sharp library for production image optimization via next/image. Good news if you're on Next.js 15 or later: sharp is automatically bundled when you use standalone output. No manual install needed.

If you're stuck on Next.js 14 or earlier, you'll need to add it yourself:

npm install sharp

Without sharp, Next.js 14 and earlier fall back to the squoosh engine (removed entirely in Next.js 15), which is significantly slower and eats more memory under load — not what you want in production.

Health Checks for Container Orchestration

Health checks tell your orchestrator — whether that's Docker, Kubernetes, or ECS — if a container is actually ready for traffic. Without them, requests can route to containers that are still starting up, and your users get errors during deployments. Not great.

Step 1: Create a Health Check API Route

// app/api/health/route.ts
import { NextResponse } from 'next/server';

export async function GET() {
  return NextResponse.json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
}

export const dynamic = 'force-dynamic';

Step 2: Wire It into the Dockerfile

Add a HEALTHCHECK instruction before the CMD line in your runner stage:

HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD wget -q --spider http://localhost:3000/api/health || exit 1

The --start-period gives Next.js time to compile its initial pages before the orchestrator starts checking. Use wget here (it's available in Alpine by default) instead of curl to avoid installing extra packages.
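If you deploy to Kubernetes instead of plain Docker, the same endpoint backs readiness and liveness probes. A minimal sketch — the thresholds here are illustrative starting points, not prescriptions:

```yaml
# Excerpt from a Deployment's container spec
readinessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 30
```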

Managing Environment Variables in Docker

Honestly, environment variable handling is one of the trickiest parts of self-hosting Next.js. Variables fall into two categories, and mixing them up will cause headaches:

  • Build-time variables — Anything prefixed with NEXT_PUBLIC_ gets inlined into the JavaScript bundle during next build. These are baked in. They cannot change at runtime without a full rebuild.
  • Runtime variables — Server-only variables (without the NEXT_PUBLIC_ prefix) are read from process.env at request time and can differ per environment.
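One consequence worth encoding: validate required runtime variables once at startup, so a misconfigured container fails immediately instead of crashing mid-request. A minimal sketch — the variable names and helper are illustrative, not part of Next.js:

```javascript
// env.js - fail fast if a required runtime variable is missing
function getServerConfig(env = process.env) {
  const required = ['DATABASE_URL', 'AUTH_SECRET'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length) {
    // Throwing here surfaces misconfiguration at container start, not per-request
    throw new Error(`Missing runtime env vars: ${missing.join(', ')}`);
  }
  return {
    databaseUrl: env.DATABASE_URL,
    authSecret: env.AUTH_SECRET,
  };
}
```

Call this once from instrumentation.ts or at the top of your server entry so the container exits (and the orchestrator restarts it) when configuration is broken.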

The Single-Image, Multi-Environment Pattern

The goal is to build one Docker image and promote it through staging, production, and wherever else by injecting different runtime variables. This is the pattern you want:

# docker-compose.yml
services:
  web:
    image: my-nextjs-app:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - AUTH_SECRET=your-auth-secret
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:

One important rule: never bake secrets into the Docker image. Always inject them at runtime via environment variables, Docker secrets, or a secrets manager like AWS Secrets Manager or HashiCorp Vault.

ISR and Caching in Multi-Container Deployments

Incremental Static Regeneration works perfectly out of the box when you're running a single container. The problem shows up when you scale to multiple replicas behind a load balancer — each container maintains its own filesystem cache. That means stale data on some instances and cache misses that trigger redundant rebuilds.

There are two main approaches to fix this.

Option 1: Shared Volume (Simple)

Mount a shared network volume (NFS, EFS, or a Kubernetes PersistentVolumeClaim) to the .next/cache directory so all containers read and write to the same location:

# docker-compose.yml (add to web service)
volumes:
  - nextcache:/app/.next/cache

volumes:
  nextcache:

This works fine for small deployments but doesn't scale well. Filesystem contention and latency become real issues as traffic grows.

Option 2: Redis-Backed Cache Handler (Production-Grade)

For production at scale, you'll want a custom cache handler that stores ISR pages and fetch responses in Redis. Next.js provides a cacheHandler config option for exactly this:

// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
  cacheHandler:
    process.env.NODE_ENV === 'production'
      ? require.resolve('./cache-handler.js')
      : undefined,
  cacheMaxMemorySize: 0, // Disable in-memory cache; use Redis instead
};

export default nextConfig;

Here's a minimal Redis cache handler implementation to get you started:

// cache-handler.js
const { createClient } = require('redis');

const client = createClient({ url: process.env.REDIS_URL });
const clientPromise = client.connect();

module.exports = class CacheHandler {
  constructor(options) {
    this.options = options;
  }

  async get(key) {
    await clientPromise;
    const data = await client.get(key);
    if (!data) return null;
    const parsed = JSON.parse(data);
    return {
      value: parsed.value,
      lastModified: parsed.lastModified,
    };
  }

  async set(key, data, ctx) {
    await clientPromise;
    const entry = {
      value: data,
      lastModified: Date.now(),
      tags: ctx.tags || [],
    };
    const ttl = ctx.revalidate
      ? ctx.revalidate
      : 60 * 60 * 24 * 365; // 1 year default
    await client.set(key, JSON.stringify(entry), { EX: ttl });
  }

  async revalidateTag(tags) {
    // For production, consider using Redis sets to track tag-to-key mappings
    // This simplified version scans keys (not recommended at large scale)
    const tagList = [tags].flat();
    await clientPromise;
    const keys = [];
    for await (const key of client.scanIterator()) {
      const data = await client.get(key);
      if (data) {
        const parsed = JSON.parse(data);
        if (parsed.tags?.some((t) => tagList.includes(t))) {
          keys.push(key);
        }
      }
    }
    if (keys.length) await client.del(keys);
  }

  resetRequestCache() {}
};
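The key scan in revalidateTag above gets expensive as the keyspace grows. A common refinement, sketched here against the node-redis v4 API, maintains a Redis set per tag so invalidation becomes a direct lookup. The function names are illustrative:

```javascript
// Maintain a tag -> keys index alongside each cache entry
async function setWithTags(client, key, entry, tags, ttlSeconds) {
  await client.set(key, JSON.stringify(entry), { EX: ttlSeconds });
  for (const tag of tags) {
    // Record the key under the tag's set for O(1) invalidation later
    await client.sAdd(`tag:${tag}`, key);
  }
}

async function invalidateTag(client, tag) {
  const keys = await client.sMembers(`tag:${tag}`);
  if (keys.length) await client.del(keys);
  await client.del([`tag:${tag}`]);
}
```

The same idea slots into the set and revalidateTag methods of the handler above, trading a little extra write work for invalidation that no longer touches the whole keyspace.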

For a battle-tested solution, check out @fortedigital/nextjs-cache-handler. It's fully compatible with Next.js 15 and handles TTL management and tag-based invalidation with Redis properly out of the box.

Reverse Proxy with Nginx

In production, you'll almost always want to run Next.js behind a reverse proxy like Nginx or Traefik. The proxy handles SSL termination, static asset caching, gzip compression, and load balancing across your Next.js containers.

# nginx.conf
upstream nextjs {
    server web:3000;
}

server {
    listen 80;
    server_name yourdomain.com;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Static assets - long cache
    # (proxy_cache_valid only takes effect once a proxy_cache zone is defined)
    location /_next/static/ {
        proxy_pass http://nextjs;
        proxy_cache_valid 200 365d;
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Files in public/ are served from the site root, not under /public/
    location ~* \.(ico|png|jpg|jpeg|svg|webp|woff2?)$ {
        proxy_pass http://nextjs;
        add_header Cache-Control "public, max-age=2592000";
    }

    # Application routes
    location / {
        proxy_pass http://nextjs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

CI/CD Pipeline with GitHub Actions

Let's automate the whole build-and-deploy workflow. Every push to main should trigger a new Docker build and deployment — no manual steps:

# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - uses: actions/checkout@v4

      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production server
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            docker compose pull web
            docker compose up -d --no-deps web

This assumes the docker-compose.yml on the server points its web service at the registry image (ghcr.io/...) rather than a local tag, so pulling through compose updates the same image that compose runs.

Monitoring and Observability

Next.js 15 stabilized the instrumentation.ts file, which makes integrating observability tools much more straightforward. The register() function runs once when the server starts, and the onRequestError hook captures server-side errors with full context.

// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    // Initialize your APM/tracing library here
    // e.g., OpenTelemetry, Datadog, Sentry
    console.log('[instrumentation] Server starting...');
  }
}

export function onRequestError(
  error: { digest: string; message: string },
  request: { method: string; url: string; headers: Record<string, string> },
  context: { routerKind: string; routePath: string; routeType: string }
) {
  // Report to your error tracking service
  console.error('[error]', {
    message: error.message,
    method: request.method,
    path: context.routePath,
    type: context.routeType,
  });
}

Production Deployment Checklist

Before going live, run through this list. I keep a version of it pinned in every project's README and it's saved me more than once:

  1. Standalone output enabled — output: 'standalone' in next.config.ts
  2. Multi-stage Dockerfile — Separate install, build, and runtime stages
  3. Non-root container user — Never run production containers as root
  4. Health check endpoint — /api/health wired to Docker or Kubernetes probes
  5. Environment variables injected at runtime — Not baked into the image
  6. Sharp available — Automatic in Next.js 15+; install manually on older versions
  7. Shared cache for multi-replica — Redis cache handler or shared volume for ISR
  8. Reverse proxy configured — SSL termination, security headers, static asset caching
  9. CI/CD pipeline — Automated build, push, and deploy on every merge
  10. Monitoring and error tracking — Use instrumentation.ts with your APM tool
  11. .dockerignore in place — Exclude node_modules, .git, .env files
  12. At least 2 replicas — Multiple containers behind a load balancer for high availability

Frequently Asked Questions

Can I self-host Next.js with all features like Server Components and ISR?

Yes, absolutely. Self-hosting with next start or a Docker container supports every Next.js feature — React Server Components, Server Actions, ISR, middleware, image optimization. The only difference from Vercel is that you're managing the server infrastructure yourself.

How much does it cost to self-host Next.js compared to Vercel?

A VPS from providers like Hetzner, DigitalOcean, or Contabo runs $4–20/month depending on the resources you need. That can handle a lot of traffic. Vercel's free tier is generous for small projects, but costs scale with bandwidth, function invocations, and build minutes. For high-traffic sites, self-hosting is typically way more cost-effective.

Do I need to install sharp manually for image optimization?

If you're on Next.js 15 or later, no — sharp is automatically included in the standalone output. On Next.js 14 or earlier, you'll need to run npm install sharp yourself.

How do I handle ISR caching when running multiple Docker containers?

The default filesystem cache isn't shared between containers, so you have two options. Either mount a shared network volume for .next/cache, or (the better approach for production) implement a Redis-backed custom cache handler using the cacheHandler option in next.config.ts. Libraries like @fortedigital/nextjs-cache-handler provide a production-ready Redis integration for Next.js 15.

What is the best tool for zero-downtime deployments of Dockerized Next.js?

Kamal 2.0 (by Basecamp) is a solid choice — it automates zero-downtime deploys across servers, handles automatic SSL, and manages auxiliary services. For Kubernetes users, standard rolling deployment strategies with proper readiness probes do the same thing. Coolify is another option worth looking at if you want Vercel-like features (git push deploy, preview URLs) on your own infrastructure.

About the Author

Editorial Team — our team of expert writers and editors.
Our team of expert writers and editors.