Docker? It’s Easy If You Do It Smart

Contributor - 07 December 2016 - 12min
Most of us have heard of Docker, but why and where exactly we should use it is still confusing. A related question also arises: why adopt this new technology when virtual machines already do the job?
Well, first of all we need to understand that Docker containers are not a replacement for virtual machines. Each has its own use cases. In reality, the two are complementary technologies: hardware virtualization and containerization have distinct qualities and can be used in tandem for combined benefits.
This blog explains the core components of Docker and shows how to write a Dockerfile to dockerize an app.
Containerization is an operating-system-level (OS-level) virtualization method for deploying and running distributed applications without launching an entire virtual machine (VM) for each application. Adopting containerization brings efficiency gains in memory, CPU and storage. Because application containers run on a single control host and share a single kernel, they avoid the overhead that VMs require, and many containers can be supported on the same infrastructure. Also, because all of an application's requirements are bundled into one package, the application is portable: as long as server settings are identical across systems, a container can run on any system and in any cloud without requiring code changes. We don't have to manage guest OS environment variables or library dependencies.
Although the containerization concept emerged several years back, it gained popularity with the open source Docker project. Docker is an open platform for developers and sysadmins to build, ship and run distributed applications.
Docker containers are often described as lightweight VMs, but more precisely they allow us to package an application with all of its dependencies into a standardized unit, similar to an .exe file or other executable. A container holds the components such as files, environment variables and libraries necessary to run the desired software, and ships it all out as one package. It is a shipping-container system for code.
Docker uses client-server architecture. These are the major components of docker:
Docker Engine: the core component of Docker, which runs our containers. Docker Engine runs on Linux to create the operating environment for your distributed applications. Its in-host daemon communicates with the Docker client to execute commands to build, ship and run containers.
Docker Hub: a SaaS platform for sharing and managing Docker images. It provides both public and private storage for images.
Docker Containers: similar to a directory, a container holds everything that is needed for an application to run. Each container is an isolated and secure application platform.
Docker Images: Docker images are read-only templates from which Docker containers are launched. Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
Docker Daemon: sits on the host machine answering requests for services.
Docker Client: the user interface that allows communication between the user and the Docker daemon. The client and daemon can run on the same system, or we can connect a Docker client to a remote daemon. The client and daemon communicate via sockets or through a RESTful API.
Docker registries: hold images. These are public or private stores to which we upload (push) images and from which we download (pull) them.
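As a quick sketch of the client/registry workflow, the commands below pull an image from Docker Hub, retag it, and push it to a private registry. The registry address `registry.example.com` and the repository name are placeholders, not from this article.

```shell
# Pull the official CentOS 6 base image from Docker Hub
docker pull centos:centos6

# Retag the image for a private registry (registry.example.com is a placeholder)
docker tag centos:centos6 registry.example.com/myteam/centos:centos6

# Push the retagged image to that registry
docker push registry.example.com/myteam/centos:centos6
```

Pushing requires that you are logged in to the target registry (via `docker login`).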
A Dockerfile is a text-based script that contains the instructions and commands for building an image from a base image. When we request a build, Docker reads the Dockerfile, executes the instructions, and returns a final image. Instructions include actions like choosing a base image, installing packages, copying files and exposing ports.
Below is an example Dockerfile for dockerizing a Node.js application.
```dockerfile
FROM centos:centos6
RUN yum install -y epel-release
RUN yum install -y nodejs npm
COPY package.json /src/package.json
RUN cd /src; npm install --production
COPY . /src
EXPOSE 8080
CMD ["node", "/src/index.js"]
```
Here is the explanation for each instruction in the Dockerfile.
FROM centos:centos6
This instruction sets the base image for the application. A valid Dockerfile must have FROM as its first instruction. The base image is the starting point on which we add layers to create the final image containing our app. In this example we are building on a CentOS 6 base image.
RUN yum install -y epel-release
This instruction installs the EPEL (Extra Packages for Enterprise Linux) repository, which provides the Node.js packages for CentOS.
RUN yum install -y nodejs npm
This instruction installs Node.js and npm, which are required to run our application.
COPY package.json /src/package.json
RUN cd /src; npm install --production
We copy the package.json file, which describes our app and its dependencies, into the /src folder, and then run "npm install --production" to install those dependencies. Copying package.json before the rest of the source lets Docker cache the installed dependencies as a layer, so they are not reinstalled on every code change.
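For reference, a minimal package.json for such an app might look like the following; the name and dependency versions here are illustrative placeholders, not from this article.

```json
{
  "name": "docker-demo-app",
  "version": "1.0.0",
  "description": "Minimal Node.js app for the Dockerfile example",
  "main": "index.js",
  "dependencies": {
    "express": "^4.14.0"
  }
}
```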
COPY . /src
This instruction bundles the rest of our application's source code into the Docker image.
EXPOSE 8080
This instruction documents that the application listens on port 8080 inside the container. It does not publish the port by itself; the port is mapped to the host when the container is run (for example with the -p flag of "docker run").
CMD ["node", "/src/index.js"]
The file "index.js" is our application's entry point, describing what the application does. With CMD we define the default command that runs when a container is started from the image. Here we specify that the container should execute our application with Node.
Once the Dockerfile is written, we can build our image by executing the "docker build ." command in the directory containing the Dockerfile. This command executes all the instructions in the Dockerfile in order.
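Putting it all together, a session like the one below would build the image with a tag and run it with the container's port 8080 published on the host. The image name "myapp" is a placeholder for illustration.

```shell
# Build the image from the Dockerfile in the current directory,
# tagging it "myapp" so it is easy to reference later
docker build -t myapp .

# Run a container in the background, mapping host port 8080
# to the container's exposed port 8080
docker run -d -p 8080:8080 --name myapp-container myapp

# The application should now be reachable on the host
curl http://localhost:8080/
```

Tagging with -t is optional but makes the image much easier to run and push than referencing it by its generated ID.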