Once you’ve got an understanding of the Dockerfile and how it’s used to create images, it’s time to look at another main piece of the Docker ecosystem: Docker Compose.
In short, Compose is a tool for defining and running multi-container Docker applications.
As you can see from the official page, there is some basic help for getting started with various projects.
Let’s get started:
First we need a sample NodeJS application. For this we can use an example repository I have on my GitHub. This application is nothing fancy, but it will demonstrate how we can use Docker Compose to set up an environment that contains multiple moving parts.
First, let’s take a look at the Dockerfile. I’ll be using Atom as my editor throughout this post.
Let’s quickly break down what we have here:
- FROM node
- We’re going to be basing our new image on the image ‘node’. This is the official NodeJS Docker image.
- LABEL is optional but helps identify the Docker image and its version.
- ADD in this case copies all items in the current directory (indicated by the dot) into ‘/src’ in the image.
- WORKDIR sets the default location in the image’s filesystem that will be used as our working directory.
- RUN executes a command for us at build time. This simply bakes our NodeJS modules into the image so that we don’t have to run ‘npm install’ every time we run our application.
- EXPOSE documents that the application inside the container listens on port 5000. (Note that EXPOSE alone doesn’t publish the port to the host; that’s handled by the port mapping in our compose file.)
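Putting the breakdown above together, a minimal Dockerfile for this kind of app might look something like this (the LABEL values are illustrative, not taken from the repository):

```dockerfile
# Base our new image on the official NodeJS image
FROM node

# Optional metadata identifying the image and its version
LABEL maintainer="you@example.com" version="1.0"

# Copy everything in the current directory into /src in the image
ADD . /src

# Use /src as the working directory for the commands that follow
WORKDIR /src

# Bake the Node modules into the image so we don't have to
# run 'npm install' every time we run the application
RUN npm install

# Document that the application listens on port 5000
EXPOSE 5000
```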
OK, so pretty straightforward here.
Let’s now take a look at our docker-compose.yml:
- There have been two versions of the Compose file format over the years. Here we are using version “2”.
- What services do we want in our composition? In this case we want a DB running MongoDB and our web server running NodeJS.
- Here we’ve specified the image we want to use for our database.
- build: Here we’re saying ‘build’ (i.e. docker build) against the current working directory. Since this directory contains our Dockerfile, that’s the image that will be built.
- command: Here we’re specifying the command we want to run when our web server container is created. In this specific case we’re running the start script specified in our package.json, via npm start.
- ports: Here we’re simply mapping port 5000 inside the container to port 5000 on the host.
- links: Here we’re specifying which containers should be linked to this one. By linking containers, you provide a channel via which Docker containers can communicate with each other.
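Putting that together, a docker-compose.yml along these lines would match the breakdown above (the service names and the mongo image are assumptions, not copied verbatim from the repository):

```yaml
version: "2"

services:
  db:
    # The official MongoDB image for our database service
    image: mongo

  web:
    # Build the image from the Dockerfile in the current directory
    build: .
    # Run the start script defined in package.json
    command: npm start
    # Map port 5000 in the container to port 5000 on the host
    ports:
      - "5000:5000"
    # Link the web container to the db container
    links:
      - db
```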
Pretty self explanatory at this stage.
In order to bring our environment up, we simply need to run ‘docker-compose up’ (you can add -d to send this to the background).
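For reference, the commands look like this (the logs and down subcommands aren’t used in this walkthrough, but are handy alongside -d):

```
# Bring the whole environment up in the foreground
docker-compose up

# Or run it detached, in the background
docker-compose up -d

# Tail the logs of a detached composition
docker-compose logs -f

# Tear everything down again
docker-compose down
```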
Here you can see we’ve run our docker-compose up:
First, Docker checks whether the images specified in the docker-compose.yml already exist locally; if they don’t, it’s going to reach out and pull them.
After the images have been downloaded you will see some verbose output from Compose on stdout.
After that you should see something like:
You can clearly see each service’s output as the compose file unwinds. Here we can see MongoDB run its [initandlisten].
At the bottom we can see that ‘npm run insertdata’ has completed. We see the connection open and the result (JSON response) of MongoDB’s insertMany method (this is in /public/js/books.js).
After that we’re ready for our npm install step.
Looks like everything went well.
We can open up our browser and navigate to localhost:5000.
Once you navigate to ‘Books’ you will see a database connection open (query) in your console:
Woohoo! We have our NodeJS web application pulling data from MongoDB, all set up with docker-compose.
We can now simply develop our application and bring up the whole environment at once whenever we want to do some testing or see our work.