With Docker 1.12 in full swing now I thought it may be helpful to go through setting up Docker Swarm in a 1 manager and 1 worker environment. You can read more about the features included in 1.12 over here.
So what is Docker Swarm and why should you care? Well, Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. This allows us to make Docker in production just that bit more real.
So let's get started with Docker Swarm to get our application highly available and scaling automatically. For this process I'm going to use two CentOS 7 (minimal) virtual machines. Each VM will act as a container host, and the two will be clustered together using Swarm.
First thing as always on a new system we’re going to do:
yum update -y
Now we’re up to date you’ll want to go and grab the latest version of Docker. You can find the full guide to installing Docker here.
You should now be able to run docker -v and see at least 1.12.0.
Normally when clustering anything we'd want a few more nodes than just one worker and one manager. But remember, a manager node can also perform work just like a worker node in the cluster. Some of the more common scenarios would be 3 or 5 manager nodes coupled with 3, 5 or more worker nodes.
So let's SSH into the manager node and initialize the swarm. We do that by running:
docker swarm init --listen-addr 192.168.0.18:2377 --advertise-addr 192.168.0.18
Of course the above private IP address is the local address of my manager node. A few things about this line:
- --listen-addr can be set to 0.0.0.0 (the default) or a network interface name. The node listens for inbound Swarm manager traffic on this address.
- Port 2377 is not mandatory to specify; it is the default Swarm port.
- --advertise-addr is the address that will be advertised to other members of the swarm for API access and overlay networking.
Once you’ve run this you should have a response that looks like the following:
If you run docker info you should be able to see some swarm information about the node.
So now we've got two clear paths on how to add managers and workers to this swarm. Let's go and join our worker in.
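If you ever lose the join command printed at init time, you can ask the manager to regenerate it for either role:

```shell
# On the manager: print the full join command (with token) for a worker
docker swarm join-token worker

# Or the equivalent for adding another manager
docker swarm join-token manager
```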
SSH over to the worker node and paste in the line that our manager gave us when we initialized the swarm.
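The join command looks roughly like the following. The token shown by your manager is the real one (it starts with SWMTKN-1-); the placeholder below is illustrative only:

```shell
# On the worker node: join the swarm using the token the manager printed
docker swarm join --token <worker-token> 192.168.0.18:2377
```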
Now we have our swarm!
We can see this a bit clearer when we use the new docker node command:
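For example, from the manager:

```shell
# List every node in the swarm, its status, and which one is the leader
docker node ls

# Human-readable detail about the node you're on
docker node inspect self --pretty
```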
Let's run a Service in our swarm so we can truly see this thing in action.
For this example I'll just use a sample repository over on my GitHub page. This has a Dockerfile ready to roll and will give us a page we can hit to see our application in action.
To create a service we simply say docker service create. We give it a friendly name, do some port mapping and specify the number of replicas we want in our swarm. Along with that we need to give the name of the image we're going to be running in our containers. In this case I've cloned the repo (above) and, from its directory, run docker build -t myapp . to build the image we need in order to get our swarm going. This image is required locally on each node in the swarm.
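Putting that together, the build and service creation look like this. The published port mapping (8080 on the host to 80 in the container) is my assumption here; adjust it to whatever port the sample app's Dockerfile actually exposes:

```shell
# Build the image locally — repeat on each node, since the image
# must exist everywhere a task might be scheduled
docker build -t myapp .

# Create the service: friendly name, port mapping, two replicas
docker service create \
  --name myapp \
  --publish 8080:80 \
  --replicas 2 \
  myapp
```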
Once we have our service started we can check the status of our swarm by using docker service ls.
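Two useful views here: the service-level summary and the per-task breakdown showing which node each replica landed on:

```shell
# Summary of all services and their replica counts (e.g. 2/2)
docker service ls

# Per-task view: one line per replica, including its node and state
docker service ps myapp
```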
The docker service command exposes a ton more useful information through its inspect subcommand, which you can run against your service to dig through the finer details.
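For example:

```shell
# Full JSON detail of the service definition and current spec
docker service inspect myapp

# Or a condensed, human-readable summary
docker service inspect --pretty myapp
```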
We now have a functioning swarm! The 2/2 replicas indicate that both of our desired replicas are running, spread across the two nodes. Let's take a look at those in the browser.
Here we're hitting one of the nodes in the swarm and getting our page displayed. The other node produces the same content. Let's delete one of our containers and watch the magic of Swarm's desired-state reconciliation kick in.
Here I've removed the running container from the mgr1 node and confirmed its removal by listing the service in the swarm. At that point the service shows 1/2 replicas available. No more than 5 seconds later, a new container is created in the swarm to take the place of the container (25) that was killed.
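The same desired-state mechanism is what you lean on to scale by hand. Swarm simply reconciles the running task count with whatever number you declare:

```shell
# Raise the desired replica count; Swarm schedules the extra tasks
docker service scale myapp=4

# Scale back down; surplus tasks are stopped and removed
docker service scale myapp=2
```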
This is just scratching the surface of what Swarm has to offer.
More to come on auto-scaling and CI/CD. Stay tuned.