
How does Docker work?

Sumeet · Jun 18, 2020 · 10 mins read

As a continuation of the previous post where we talked about containers, here we take a look at how Docker works internally: its architecture, Docker objects, and Dockerizing an existing application with an example. My intention is to familiarise you with the platform and give you a better grip on the architectural aspects of Docker, in simple language, if you are considering using it.

Docker Engine

Docker Architecture (diagram)

The Docker system is easy to understand. The main player here is the Docker daemon (dockerd), a.k.a. the Docker server, which actually performs the containerisation tasks. This is a continuously running process (you can check that it is up, as sketched after this list) which is responsible for:

  1. Building images
  2. Fetching images from remote repositories
  3. Maintaining volumes/storage
  4. Exposing network interfaces
  5. Spinning up containers - of course!
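If you want to see the daemon and the client side by side, the quick check below helps. This is a minimal sketch; it assumes a Linux host where Docker runs as a systemd service, which may differ on your system.

    # On a systemd-based Linux host, the daemon typically runs as a service.
    systemctl status docker

    # docker version reports both the client and the server (daemon) versions.
    docker version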

The Docker server resides on the host machine where the containers are to be spun up. It exposes an API for clients (a.k.a. the Docker CLI), using which clients can execute various docker commands on the host. A common misunderstanding is that the server is somehow "great" and the client "puny"; there is no such hierarchy. The Docker server needs to be installed on every host where the application/container is to run, but the client need not be on a different system: it can reside on the same host. In fact, a Docker installation doesn't differentiate between a "client" and a "server" installation; when you install Docker, you install both. If you want the client to communicate with a remote Docker server, it is just a matter of a configuration change on the client machine to point to the remote server. By default, the client communicates with the local Docker daemon.
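As an example, here is a minimal sketch of pointing the client at a remote daemon over SSH; user and remote-host are placeholders for your own setup, and the remote machine needs Docker installed.

    # By default, the client talks to the local daemon.
    docker ps

    # Point the client at a remote daemon over SSH (remote-host is a placeholder).
    export DOCKER_HOST=ssh://user@remote-host
    docker ps    # now lists containers running on remote-host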

Referring to the image, all the blue boxes represent the Docker system. They are drawn separately to represent the fact that the client can communicate with a remote Docker server as well. In both cases, however, the Docker CLI communicates with the Docker daemon via the exposed REST API. The Docker documentation also mentions the use of UNIX sockets for communication, but I have never used them in this case.
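You can see that REST API for yourself. The sketch below queries the daemon's version endpoint over the default UNIX socket on Linux; the socket path is the standard one, but your installation may differ.

    # Ask the local daemon for its version over the UNIX socket it listens on.
    curl --unix-socket /var/run/docker.sock http://localhost/version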

Docker Objects

Let us talk about the green boxes in the diagram, which are known as Docker objects. Each of them has a unique purpose, as listed below.

  1. Images: Think of an image as a template from which containers are created. Images provide the environment setup needed to deploy an application, irrespective of which platform it runs on. For example, if you developed a Node application on Windows but now have to deploy it on a Linux server, this gives rise to some potential worries. There have been so many cases of "But it works on my system!" that we need not go into them in this post. Docker containers take this worry away: they provide a consistent environment for your application to run in. Images are nothing but that "frozen", consistent environment, able to run anywhere. I hope you got my cross-reference.

  2. Volumes: Have you ever wondered what happens to the data produced by an application running inside a container when the container is stopped? Well, the data doesn't persist. To overcome this, Docker provides a way to store files on the host machine using volumes, which are completely managed by Docker. Docker offers other methods, like bind mounts and tmpfs mounts, but they are older or temporary ways to persist data and may not be available on all operating systems. Having said that, they have their own use cases where they are best suited. (See the sketch after this list.)

  3. Containers: As mentioned earlier, Docker manages the container life cycle. It creates container instances and destroys them as per the instructions received from the client.

  4. Network: Docker containers are secure, and one of the reasons is that Docker provides the runtime environment in complete isolation. This includes the network as well. Every container instance is as good as an operating system in itself, with its own set of network ports distinct from the host system's. Docker manages the network interfaces for the containers as well. (The sketch after this list shows this too.)
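As promised above, here is a minimal sketch of the volume and network objects in action. The names app-data, app-net, db, and web are illustrative; the mysql and nginx images are public ones from Docker Hub.

    # Create a named volume managed by Docker and mount it into a container.
    docker volume create app-data
    docker run --detach --name db --volume app-data:/var/lib/mysql --env MYSQL_ROOT_PASSWORD=secret mysql:8

    # Create a user-defined network and attach a container to it.
    docker network create app-net
    docker run --detach --name web --network app-net nginx

    # Inspect what Docker is managing for you.
    docker volume ls
    docker network ls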

What is Registry?

Think of it as a library of images. A registry is a centralised location where users maintain their container image versions; it is the GitHub of Docker images. Docker Hub is the default registry for all Docker installations. Often, every Dockerfile starts with a reference to a base image, on which the application-specific customisations are layered. There are tons of public images which users can use as a base image in their Dockerfile. Such a registry can also be hosted locally; to do so, Docker suggests using its own registry image hosted on Docker Hub.
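For example, a local registry can be spun up with that registry image itself. This is a minimal sketch; my-app:1.0 is a hypothetical image name standing in for one of your own.

    # Run Docker's own registry image locally on port 5000.
    docker run --detach --publish 5000:5000 --name registry registry:2

    # Re-tag a local image for the local registry and push it there.
    docker tag my-app:1.0 localhost:5000/my-app:1.0
    docker push localhost:5000/my-app:1.0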

Dockerization

All said and done, I have my application which I have been building for a couple of years now. Is it possible for me to use Docker containers? Well, of course you can! It all starts with a Dockerfile, where you specify the base image at the very beginning. This is followed by specifying your working directory (the path where your application needs to be installed) within the container. Remember, a container is as good as a standalone system with its own OS, so you need to give it the details that drive the installation process. Next, you may ask it to copy your package or manifest file and run the other installation steps, step by step. If it is a web application that needs to communicate over the network, you may specify the port to expose, and then copy the source code. Once your Dockerfile is ready, it's time to test if it builds the image; you can try it locally as well. For every step mentioned in the Dockerfile, Docker creates something called a layer. Depending on the number of steps you have in your Dockerfile, that many layers will be created. The advantage of this is that if you happen to make a change in one of the steps, the layers before that step are not rebuilt; they are just reused. This saves a lot of time. Let us take a look at this with an example.
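If you are curious about those layers, docker history lists them for any image, one row per Dockerfile step. Here, examplenodeapp:1.0 refers to the image we build in the example below.

    # List the layers of an image, newest first, one per Dockerfile step.
    docker history examplenodeapp:1.0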

Example

We have an example application in NodeJS. Find the link to the GitHub repo here. This is a bare-minimum web application which runs on port 3000. Let us try to Dockerize it.

  1. Clone the application repository on your localhost. Open it in an editor, and feel free to run and test the application by visiting localhost on port 3000.
  2. To Dockerize it, we begin by creating a Dockerfile in the application root directory. The name of the file should be Dockerfile.
  3. Now we need to write the "steps" into this Dockerfile. The section below shows the contents of the Dockerfile for this application. In this example, every step is commented with its purpose. Before pasting the code into the file, do go through the comments; they will give you an idea of how Docker is supposed to create a container for this application. Save the file.
# Use the official image as a parent image.
FROM node:lts-stretch-slim

# Set the working directory.
WORKDIR /usr/src/app

# Copy the package manifest from your host to your current location in the image filesystem.
COPY package.json .

# Run the command inside your image filesystem to install dependencies.
RUN npm install

# Copy the rest of your app's source code from your host to your image filesystem.
COPY . .

# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 3000

# Run the specified command within the container.
CMD [ "npm", "start" ]

  4. Open the terminal and navigate to the application's root directory (where the Dockerfile is present as well). Now is the time we create an image. The command below tells the Docker daemon to build a local image, name it "examplenodeapp", and tag it with version "1.0". You can use any tag; it need not be a version.
    docker build --tag examplenodeapp:1.0 .
    
  5. Once successful, the terminal should flash the message: Successfully tagged examplenodeapp:1.0. To check if the image exists locally, run docker images. It should show a list of all the images available on your system, including the newly created examplenodeapp with tag 1.0.
  6. Thus, you have successfully containerised the pre-existing application. To run this container and see if it actually works, run the command below. The command makes use of some flags. --publish 8000:3000 lets the daemon know that the application runs on port 3000 within the container and needs to be exposed on host system port 8000, from where we will access the application. --detach makes the container run in the background. --name ena just specifies the name for the container to be spun up.
    docker run --publish 8000:3000 --detach --name ena examplenodeapp:1.0
    
  7. Test your application by visiting http://localhost:8000 in your browser. Let me know what you see.
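If you prefer the terminal over the browser, the same test and the eventual cleanup look like this; ena is the container name we chose above.

    # Hit the published port from the host.
    curl http://localhost:8000

    # When you are done, stop and remove the container.
    docker stop ena
    docker rm ena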

We have only scratched the surface of Docker; it has more interesting features if you decide to go deeper. Many platforms today already support Docker containers, and Docker has some of the best documentation available, so do hop on to their site for more. I love writing articles on architecture, where I get to explore technology. In the coming weeks, I plan to explore Docker further, and I shall write about all the interesting finds. If you like what you read and want to be updated, do subscribe!
