> At the foundation of any Dockerized application, you will find a [Dockerfile](//docs.docker.com/engine/reference/builder/).

It, apparently, is the foundation. The first several results on Google all suggest that the first thing you need is a Dockerfile. After all, how can you have a Docker environment without creating one?

I'm here to tell you that while this is true for production, it's the wrong approach for development. You do not need to create your own.

A Dockerfile is a way to package your application. You don't need to package your application for development, and honestly, you really shouldn't. Development and production are not the same environment. When you develop on your MacBook, you install different tools than the ones you run in production. Just because "it runs the same way everywhere" doesn't mean it should. Your app runs differently in development. Packaging it at a time when it is meant to be flexible and malleable is why many engineers have concluded that Docker isn't for development: you lose the flexibility of development by needing to build a new container every time your dependencies change, for example. Sure, you could exec into the container and install some libs by hand, but is it really less effort at that point?

Some of the articles out there get this more right than others, but if you're using a Dockerfile for development, you've probably already gone too far. There are situations where you will want one, just probably not in the manner you think.
Hint: If your Dockerfile contains an `npm install`, you've gone too far.
Let’s talk about what Docker is for a moment.
Docker is a way to package your code. This is the typical context for using Docker.
Docker is also a way to create an isolated environment that is capable of executing certain types of applications. Docker allows you to package environments that are capable of running your code.

When you use Docker for production, you are using the most specialized Docker containers you can make. They are custom built for your application, packaged exactly the way you built it. For that purpose, creating a Dockerfile makes sense.
When you set up your computer for development, that's not what you do. You instead install the tools you need for development; you just need to create an environment your code can run in. This means you can use a much more generalized Dockerfile, and usually, the generalized Dockerfiles you need for development already exist.

For example, when developing a Node.js application, you need `node` installed on your machine. That's it. You don't need Alpine Linux. You don't need to package your `node_modules` into an immutable build. You don't need little containers to exec into to make significant changes. You just need to be able to execute `node` and `npm`. Therefore, in a container, that's all you need as well, meaning the official `node` image on Docker Hub will do just fine.
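To illustrate (a throwaway example, not part of the setup), the official image already gives you a working `node` with no Dockerfile involved:

```sh
# Run a one-off container from the official image and print its Node version
docker run --rm node:11 node -e "console.log(process.version)"
# => v11.15.0 (or whichever node:11 patch release you pull)
```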
In my last article I showed how to use Parcel for development and production. Let’s keep that rolling, and build on top of that.
I think it's a good example because Hot Module Reloading is essential for developing React apps efficiently.

First, we need a `docker-compose` file describing our development environment. Seeing as we are making a Node app, the official `node` image is probably a safe bet.

Let's add a file named `docker-compose.yml`:
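```yaml
version: '3'
services:
  dev:
    image: node:11
```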
Next, we need to get our code into the container. To accomplish this, we can use a volume: we will mount our current directory, `.`, to `/usr/src/service` in the container. We also need to tell Docker where our "working directory" is, meaning: which directory did we put the code in?
```yaml
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
```
Now, every time we make a change on our local machine, the same change will be reflected in `/usr/src/service` inside the container.
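If you want to see the mount in action, you can list the directory from a one-off container (a quick sanity check, nothing more):

```sh
# Spin up a throwaway dev container and list the mounted source directory
docker-compose run --rm dev ls /usr/src/service
```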
Next, we need to execute the command `npm run dev`. This is easily accomplished with a `command` entry. We also want to access the app locally on port `1234`.
Modify the `dev` script in `package.json` to include the option `--hmr-port=1235`, so the Hot Module Reloading websocket listens on a fixed port we can publish:
"dev": "npm run generate-imported-components && parcel app/index.html --hmr-port 1235",
And with that in place, let's update the compose file to map the ports on our local machine to the same ports in our container:

```yaml
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235
```
If you've done enough Node development, you'll notice we have a problem. You can't just run a Node app without installing its dependencies first. Also, you can't just install your node modules locally on Mac or Windows and expect them to work in the Linux container: in some cases libraries compile natively during the install, and the resulting artifacts only work on the operating system they were built on!

As a first attempt, you may be tempted to chain `npm install` and `npm run dev` in a single command. Sure enough, that would work, but it's not quite what we want. It would run a full install every time we started development mode with the container.
For educational purposes: the way to chain commands is to use `bash` (or `ash` on Alpine images) to execute them. If you try

```yaml
command: npm install && npm run dev
```

you will learn that it doesn't work. Instead, you could use:

```yaml
command: bash -c "npm install && npm run dev"
```

This would in fact work, but it's not the optimal solution we are looking for.
Which brings us to Step Two.
Let's create another docker-compose file, this time named `docker-compose.builder.yml`.

We will need to use `version: '2'` this time, to make use of a docker-compose feature, `extends`, that isn't available in the version 3 specification.

The first thing we want to define in `docker-compose.builder.yml` is a `base` service.
This should look pretty familiar. It's the same base we use in our `docker-compose.yml` file:
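```yaml
version: '2'
services:
  base:
    image: node:11
    volumes:
      - .:/usr/src/service
    working_dir: /usr/src/service
```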
```yaml
  install:
    extends:
      service: base
    command: npm i

  build:
    extends:
      service: base
    command: npm run build

  create-bundles:
    extends:
      service: base
    command: npm run create-bundles
```
Now, to install dependencies using a `node:11` image that matches our development service in `docker-compose.yml`, we can run:
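```sh
docker-compose -f docker-compose.builder.yml run --rm install
```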
Pro Tip: Admittedly, `docker-compose -f docker-compose.builder.yml run --rm install` doesn't really "roll off the tongue", does it? I usually put this in a Makefile so I can just run `make install`, etc.
After running the install, `docker-compose up` will bring up our development environment, which works exactly the same as it would on your local machine.
```
➜ docker-compose up
Creating stream-all-the-things_dev_1 ... done
Attaching to stream-all-the-things_dev_1
dev_1 |
dev_1 | > [email protected] dev /usr/src/service
dev_1 | > npm run generate-imported-components && parcel app/index.html
dev_1 |
dev_1 |
dev_1 | > [email protected] generate-imported-components /usr/src/service
dev_1 | > imported-components app app/imported.js
dev_1 |
dev_1 | scanning app for imports...
dev_1 | 1 imports found, saving to app/imported.js
dev_1 | Server running at
```
And when we make a change, hot code reloading works as expected. All with no Dockerfile!
I just wanted to quickly add an example Makefile that makes the commands easier to remember and use. Create a file called `Makefile`:
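Something like this works (a minimal sketch; the `install` and `build` targets simply wrap the builder services defined above):

```makefile
.PHONY: install build dev

install:
	docker-compose -f docker-compose.builder.yml run --rm install

build:
	docker-compose -f docker-compose.builder.yml run --rm build

dev:
	docker-compose up
```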
Makefiles use tabs, but Medium's editor won't allow me to type tabs or even paste them in. Makefiles will not work with spaces. 😢 👋 😬
Now you can run `make install` and `make dev`.
One trick you will commonly see suggested is masking `node_modules` inside the container with an anonymous volume:

```yaml
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - .:/usr/src/service
      - /usr/src/service/node_modules
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235
```
This allows `node_modules` within the container to live on its own, isolated completely from your local machine. While sound in theory, it would break the process we've just defined for sharing `node_modules` between the builder and the running container. Not doing it, on the other hand, causes problems if you move between local and Docker development, as `node_modules` would need to be deleted on each switch.
A happy medium is to use an "external volume" instead of the local volume. First, let's update our `Makefile` to take care of that as well, with a `setup` target that simply calls the `docker volume create` command.
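Assuming we call the volume `nodemodules` (the same name the compose files below expect):

```makefile
setup:
	docker volume create nodemodules
```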
`docker-compose.yml`
```yaml
version: '3'
services:
  dev:
    image: node:11
    volumes:
      - nodemodules:/usr/src/service/node_modules
      - .:/usr/src/service
    environment:
      - NODE_ENV=development
    working_dir: /usr/src/service
    command: npm run dev
    ports:
      - 1234:1234
      - 1235:1235

volumes:
  nodemodules:
    external: true
```
`docker-compose.builder.yml`
```yaml
version: '2'
services:
  base:
    image: node:11
    volumes:
      - nodemodules:/usr/src/service/node_modules
      - .:/usr/src/service
    working_dir: /usr/src/service

  # ... install, build, and create-bundles as before

volumes:
  nodemodules:
    external: true
```
This changes our startup process slightly as well: on the first run, we need to make sure the volume exists with `make setup`.
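So, on a fresh checkout, the startup sequence looks like this (using the Makefile targets sketched above):

```sh
make setup     # create the external nodemodules volume (first run only)
make install   # install dependencies into that volume
make dev       # bring up the development environment
```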
To learn how to create a multi-stage build for production, use it in CI pipelines, or use docker-compose to run staging tests, check out my article: _I have a confession to make… I commit to master._
In the next article, I'll show you how to enforce code quality using linting, formatting, and unit testing with code coverage: a critical step before we finish up with a production-ready multi-stage Dockerfile to package our code.
- Part 1: Move over Next.js and Webpack 🤯 _Simple Streaming Server Side Rendered (SSR) React + styled-components with Parcel_ (gzht888.com)
- Part 3: Enforcing Code Quality for Node.js _Using Linting, Formatting, and Unit Testing with Code Coverage to Enforce Quality Standards_ (gzht888.com)
- Part 4: The 100% Code Coverage Myth _There's a lot of advice around the internet right now saying that 100% coverage is not a worthwhile goal. Is it?_ (gzht888.com)
- Part 5: A Tale of Two (Docker Multi-Stage Build) Layers _Production Ready Dockerfiles for Node.js_ (gzht888.com)