In the last article of this series, we finished adding unit tests to our project to reach 100% code coverage. With tests in place, the next piece is getting our project ready for deployment.
The last thing we need in place to get our application ready for production deployment is a Dockerfile. The Dockerfile is also a great place to run our unit tests, which is why I’ve decided to write the tests first. We have a few goals with our build, which we’ll work through below.
Docker is essentially an isolated environment for your code to run in. Just like you would provision a server, you provision a Docker container. As discussed in A Better Way to Develop Node.js with Docker, most popular frameworks/languages have builds available on Docker Hub. Seeing as we are using Node, we need an environment that runs node. We’ll start our Dockerfile with that.
But before we do, let’s talk about what happens when you run the command docker build. The first thing that happens is Docker determines the “context” in which the build is running. It sucks in everything from your current directory as the context, except the files and folders listed in the .dockerignore file.
We only want the bare minimum required for the build process, so let’s start by creating a .dockerignore file and ignoring everything else.
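Here’s a minimal sketch of such a .dockerignore; the whitelist below is an assumption based on this project’s layout, so adjust it to your own folders:
# Ignore everything by default...
*
# ...then re-include only what the build and tests need
# (these entries are assumptions; adjust to your project)
!package.json
!package-lock.json
!.babelrc
!app
!server
!nginx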
Here’s the difference in running docker build . -t ssr with and without the .dockerignore file:
Now let’s create the Dockerfile line by line:
First, as I mentioned, it’s a Node app, so it makes sense to start with the official Node image. It’s production, and in production we want immutable, repeatable builds, and for that reason I pinned the specific Node version 11.10.0. Depending on your requirements you may want to choose the latest LTS version of Node 10 instead; I just picked the newest available. You can find a list of the latest tags on Docker Hub.
Next, note the AS directive. This signals that this is not the final stage of the Dockerfile. Later on we can COPY artifacts out of this stage into our final container. The reason for this is to produce an image with the minimum number of artifacts. We can run more expensive commands in the first stage, and the bloat of their results will be stripped out in the next layer, leaving us with only the essentials to run the app.
I’ve also decided to use an alpine version of Node. This means the base OS is Alpine Linux, a ~5MB minimal Linux distribution made for containerization.
Next, because we are using alpine and it does not come with many build tools, we should install the tooling that node-gyp needs.
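On Alpine that usually means pulling in Python, make, and a C++ compiler; the exact package list here is an assumption:
# node-gyp compiles native addons, so it needs a basic build toolchain
RUN apk add --update --no-cache python make g++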
With that we have all of the tooling in place for our builds and tests to run. You could potentially shave 10 seconds or so off the build time by skipping the previous step, if none of the packages you rely on need to compile their dependencies using gyp. It will be stripped out of the final layer anyway, so it’s not a huge savings, and many Node dependencies do require it.
Our code is not yet inside of the container, and having it there would be pretty helpful in order to run it! Let’s copy it into a simply named src directory and set that directory as our working directory. All future commands in this stage will be run in the specified working directory.
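Judging by the /src paths in the COPY --from=build commands later on, this step looks like:
# Copy the build context in and run everything from /src
COPY . /src
WORKDIR /src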
npm ci works similarly to npm i, but skips the expensive dependency resolution step and instead installs the exact dependencies specified in your package-lock.json file. It’s basically a faster npm i for use in CI environments.
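So the next line in the build stage is simply:
RUN npm ci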
I usually aim to fail as quickly as possible, and generally put test before build, but our server-side tests rely on having a built application in order to serve it, so in this case I’ve flipped them.
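Assuming the script names match earlier parts of this series, the flipped order looks something like:
RUN npm run build
RUN npm run test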
We built our application using a Node server to do streaming server-side rendering. At this point in our Dockerfile we have a built client-side application. We don’t necessarily need to use the server as well; we might decide we just want a statically served, client-side-only application instead. In the next part of the article I want to show you how you can go from here to either build a final layer using the original Node SSR server, or alternatively package the application into an Nginx deployment.
Directly under the first stage, we now want to add a second FROM statement. This time, we will not use AS because it is the final layer. We’ll also want to go ahead and expose the port that the application runs on, as well as set a working directory as we did before.
FROM node:11.10.0-alpine
ENV PORT=1234
EXPOSE $PORT
WORKDIR /usr/src/service
Again, notice that we are starting with the same alpine Node image at a specific version. When we create a new layer, nothing is copied over from the previous layer automatically. It’s a fresh slate. We need to copy the artifacts (in the case of our Node application, a couple of files and folders) into our final layer. Let’s do that next:
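The first COPY below matches the one we’ll see again in the nginx variant; re-including node_modules and package.json is my assumption about what the Node server needs at runtime:
COPY --from=build /src/dist ./dist
# Assumed: the runtime dependencies and manifest come along too
COPY --from=build /src/node_modules ./node_modules
COPY --from=build /src/package.json ./package.json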
Finally, we can run our app using node, but we want to switch away from the root user before doing so. The official Node image creates a user named node for this purpose.
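That gives us the final two lines of the stage:
USER node
CMD ["node", "./dist/server/index.js"]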
When deploying, we should be relying on an orchestrator such as Kubernetes or Docker Swarm to manage restarting and scaling the application for us, so there’s no need to use tools like pm2 or forever.
If you’re paying attention, or have read other Docker articles before, you may notice that I haven’t defined a HEALTHCHECK. HEALTHCHECK is a command that is called when running in certain orchestrators, such as Docker Swarm. When running in Kubernetes, we instead rely on Kubernetes’ liveness and readiness probes.
Here’s a modified version of the final stage with a HEALTHCHECK using curl defined:
RUN apk add --update --no-cache curl
EXPOSE 1234
WORKDIR /usr/src/service
# The probe URL was lost in formatting; localhost on the exposed port is an assumed stand-in
HEALTHCHECK --interval=5s \
  --timeout=5s \
  --retries=6 \
  CMD curl -fs http://localhost:1234/ || exit 1
USER node
CMD ["node", "./dist/server/index.js"]
We are gonna start with the same build layer, but this time our final stage will use nginx to serve the application statically, rather than rendering it on the server side with Node.
Before we do, we will need to create a new entry in our package.json’s scripts section. Add the following script:
"create-bundle:nginx": "cross-env BABEL_ENV=client parcel build app/index.html -d dist/client --public-url .",
The difference from the SSR build is the public URL: we set it to . when we run the build, because in this case we want asset paths to be relative to the index.html file.
Now, create ./nginx/Dockerfile:
RUN npm run format
RUN npm run build:nginx
RUN npm run test
RUN npm prune --production

FROM nginx:1.15.8-alpine
RUN apk add --update --no-cache curl
WORKDIR /usr/src/service
COPY --from=build /src/dist ./dist
COPY --from=build /src/nginx ./nginx
# The probe URL was lost in formatting; localhost on nginx's default port is an assumed stand-in
HEALTHCHECK --interval=5s \
  --timeout=5s \
  --retries=6 \
  CMD curl -fs http://localhost/ || exit 1
RUN ["chmod", "+x", "./nginx/entrypoint.sh"]
ENTRYPOINT [ "ash", "./nginx/entrypoint.sh" ]
There isn’t much new here, except instead of using a command, we are using an ENTRYPOINT. This allows you to run a script instead of a command. We also want to make sure to call it with ash, the Alpine Linux version of sh. The RUN line just above simply changes the Linux permissions to make the file executable.
The script that we will make in a moment will start nginx using a config file that we also need to create and store in an nginx folder.
Let’s start with the entrypoint.sh script. I’m gonna include two useful snippets, commented out, that help with using environment variables. We don’t need them for this project, but it’s a common requirement, such as when you want to use nginx as a proxy to a backend, or perhaps include an analytics token or key in the JS bundle.
Basically, all we do is copy over our nginx config to the /etc/nginx folder, and then start it.
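Here’s a sketch of what ./nginx/entrypoint.sh might look like; the default.conf destination and the API_URL variable in the commented snippets are assumptions for illustration:
#!/bin/sh
# Snippet 1 (commented out): give an environment variable a default value
# export API_URL=${API_URL:-http://localhost:3000}
# Snippet 2 (commented out): substitute environment variables into the config template
# envsubst '$API_URL' < ./nginx/nginx.config.template > /etc/nginx/conf.d/default.conf

# Copy the nginx config into place, then start nginx in the foreground
cp ./nginx/nginx.config.template /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'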
Here’s the nginx config; save it as ./nginx/nginx.config.template. You can use environment variables in it if you uncomment the envsubst line above.
# Assumed: the enclosing server block (with a default listen port) appears to have been lost in formatting
server {
  listen 80;
  root /usr/src/service/dist/client;
  index index.html;
  gzip on;
  gzip_min_length 1000;
  gzip_buffers 4 32k;
  gzip_proxied any;
  gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
  gzip_vary on;
  location ~* \.(?:css|js|eot|woff|woff2|ttf|svg|otf) {
    # Enable GZip for static files
    gzip_static on;
    # Indefinite caching for static files
    expires max;
    add_header Cache-Control "public";
  }
}
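To try the static variant out, something along these lines should work (the image tag and host port are arbitrary, and nginx is assumed to listen on its default port 80):
docker build . -f ./nginx/Dockerfile -t ssr-nginx
docker run --rm -p 8080:80 ssr-nginx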
Part 1: Move over Next.js and Webpack 🤯 (Simple Streaming Server Side Rendered (SSR) React + styled-components with Parcel)
Part 2: A Better Way to Develop Node.js with Docker (And Keep Your Hot Code Reloading)
Part 3: Enforcing Code Quality for Node.js (Using Linting, Formatting, and Unit Testing with Code Coverage to Enforce Quality Standards)
Part 4: The 100% Code Coverage Myth (There’s a lot of advice around the internet right now saying that 100% coverage is not a worthwhile goal. Is it?)
All of these articles are working on building out this boilerplate: