Using multi-stage builds to optimize production Docker images for faster deployment.
At work, I noticed our deployment pipeline was significantly slower than I expected. The simplified pipeline consists of the following jobs:
After some investigation, we identified the performance bottleneck: the job that consistently took the longest to complete was pushing production images to the container registry. This was because we were uploading large, un-optimized Docker images.
Image size isn't an urgent concern like security or throughput in general. Cloud storage quotas and costs are usually generous. However, it becomes problematic when it slows down the pipeline: it increases the time from commit to deployment and affects our ability to ship quickly.
We'll use a DALL-E demo I built as an example. The demo looks like this:
The source code is available in the repo. Feel free to take a look and try it out ✨
We'll create two Dockerfiles:
- `Dockerfile.local`: a straightforward, single-stage build
- `Dockerfile`: an optimized multi-stage build
Let's build an image from each Dockerfile. The results are:
It's a 92% reduction in size. We can roughly interpret that as a 92% reduction in upload time, because file transfer over HTTPS scales linearly with payload size. Now let's dive into how you can achieve the same result.
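To make the linearity claim concrete, here's the arithmetic as a quick script. The absolute sizes and bandwidth below are hypothetical placeholders chosen to match the 92% figure; only the ratio matters:

```javascript
// Hypothetical image sizes in MB (placeholders that yield the article's 92%).
const originalMB = 1100;
const optimizedMB = 88;

// Relative size reduction.
const reduction = ((originalMB - optimizedMB) / originalMB) * 100;
console.log(`${reduction.toFixed(0)}% smaller`); // prints "92% smaller"

// Upload time over HTTPS scales roughly linearly with payload size,
// so the push step shrinks by about the same ratio.
const uploadMbps = 100; // hypothetical effective bandwidth (Mbit/s)
const uploadSeconds = (mb) => (mb * 8) / uploadMbps;
console.log(`push time: ~${uploadSeconds(originalMB).toFixed(0)}s -> ~${uploadSeconds(optimizedMB).toFixed(0)}s`);
```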
Let's go.
We can start off by creating a straightforward Dockerfile like this:
# Dockerfile.local
FROM node:18-alpine
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY . ./
# Build the app
RUN yarn --frozen-lockfile
RUN yarn build
# Serve the app
CMD [ "yarn", "start" ]
In this image, we use "node:18-alpine" as the base image because of its much smaller size compared to other base images. We also follow the recommendation and add "libc6-compat" to support the use of "process.dlopen". The rest is just like how we build and serve the project locally.
Let's build the image using this Dockerfile. This is generally what you see in the command line:
The build was completed in 52.9s.
The optimization is based on two features from Docker and Next.js:
- Docker's multi-stage builds
- Next.js' standalone output mode (powered by output file tracing)
The idea is to use a multi-stage build so that only the files needed at runtime end up in the final image. Let's take a look at the Dockerfile.
# Dockerfile
ARG NODE=node:18-alpine
# Stage 1: Install dependencies
FROM ${NODE} AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock* ./
RUN yarn --frozen-lockfile
# Stage 2: Build the app
FROM ${NODE} AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build
# Stage 3: Run the production server
FROM ${NODE} AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# copy assets and the generated standalone server
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
# Serve the app
CMD ["node", "server.js"]
We use the same "node:18-alpine" as the base image.
Instead of copying everything into the image, we copy only "package.json" and "yarn.lock" for the installation. As long as those two files stay unchanged, Docker can reuse the cached layer for this stage in subsequent builds.
To build the project, we need the installed dependencies, the source code, and all the project configurations in the project root. So we copy the dependencies from the previous stage and everything from the project root.
Output file tracing is a Next.js feature designed to help us reduce deployment size by tracing, at build time, all the files that are needed for production.
Once we enable the "standalone" output, Next.js will build and output a standalone Node server in the ".next/standalone" directory.
// next.config.js
module.exports = {
  output: 'standalone',
}
The build result looks like this:
![standalone folder](//cdn.gzht888.com/images/XyqHIwK0xDMOGdIf5iTNW4CrLjb2-m8e35c6.png)
In this stage, all we do is copy the standalone server, the assets in the "./public" folder, and the JavaScript and CSS chunks from the ".next/static" folder to the working directory, then start the server on port 3000.
The magic behind the output file tracing is "@vercel/nft" (Node File Trace). It statically analyzes the dependency graph and outputs the list of modules in the graph. To illustrate, let's log the dependencies for our page, API route, and the Node server:
import { nodeFileTrace } from "@vercel/nft";
const files = [
"./.next/server/app/sc/page.js",
"./.next/server/pages/api/images.js",
"node_modules/next/dist/server/next-server.js",
];
const { fileList } = await nodeFileTrace(files);
console.log(fileList);
The output looks like this:
Now that we explored the stages and the standalone server, let's build the image and observe the result:
The build was completed in 41.3s. Compared to the un-optimized build, we didn't compromise on build time at all. That's a big win considering we reduced the image size by 92%.
This article was originally posted on .