Getting Started With Containers

To understand the nitty-gritty of Kubernetes, what it can do and how it gets it done, it is important to first understand some basic terms associated with containers and Kubernetes. In this blog post, I will start with the very basics: the difference between containers and VMs, the main reasons why containerisation has become so popular, an introduction to Docker, and how to run your first container.

Why containers and not VMs?

The main reason is not far-fetched: containers eliminate compatibility issues between development, testing and production environments, and thereby cut the time, money and effort wasted when moving applications between platforms. Containers are also extremely lightweight, because a single host OS is used to spin up the entire application, as opposed to the hypervisor and multiple guest OSes required by VMs.

Though containers and virtual machines can both be used to get more output and productivity out of hardware and software, containers have some advantages over virtual machines that explain why they are currently preferred.

The advantages of containers over virtual machines are:

  • Memory – containers use less memory (megabytes rather than the gigabytes typical of VMs), which also improves speed
  • Portability – containers are easier and faster to move between environments
  • Starting time – because containers are so lightweight, they start in seconds rather than the minutes a VM needs
  • Operating system – containers share a single host OS, whereas each VM needs its own guest OS
  • Resource utilisation – VMs require a hypervisor on the hardware plus an individual guest OS, dependencies and application for each machine, which increases resource usage

Introduction to Containers

What is a container?

According to Docker, “a container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another”. Containers are also used to deploy and manage microservices applications in isolated environments, each with its own processes, network and services, on top of the host operating system.

Think of shipping items at a port as an analogy: items of various kinds are packed into containers of different types and sizes before being placed on a container ship, and the ship then takes those containers to their various destinations without any hassle, irrespective of what is inside them.

In this analogy, the container ship represents a container, while the containers on board the ship represent the web application, its source code and all of its dependencies. The OS used to develop a containerised application might not be the one used in testing and production, yet the application still gives the same expected result.

What makes containers the toast of the moment?

In the pre-containerisation era, when an application developed on Ubuntu had to be tested on another OS, there was a high possibility that some libraries and other services would not work the way they should because of the OS disparity. Sometimes a lot of modification was needed, and it could take days, weeks or months before the application ran successfully on a different OS or platform. Hence the need for containerisation, one of whose key features is that applications can be tested and run on any OS or platform, irrespective of the OS they were developed on.

Introduction to Docker and Docker images

What is Docker?

Docker is an open-source tool for conveniently packaging containerised applications and their dependencies and deploying them to different architectures or environments. Docker makes running containers efficiently much less cumbersome.

What is a Docker Image?

A Docker image is a lightweight, standalone, executable package of software that includes everything needed to run a containerised application: the source code, dependencies, runtime, system tools and libraries. A Docker image is a read-only file built from a Dockerfile; you can think of it as the compiled version of that Dockerfile. Docker images can be self-created, but the ready-made ones hosted in the Docker Hub repository can also be used. A Docker container needs a Docker image to run, not the other way around.
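
For illustration, here is a minimal sketch of a Dockerfile and the command that builds an image from it. The file name app.sh and the tag my-app:1.0 are hypothetical, chosen only to show the structure:

FROM ubuntu:18.04                 # base image pulled from Docker Hub
COPY app.sh /app.sh               # copy the (hypothetical) application script into the image
CMD ["/bin/bash", "/app.sh"]      # command executed when a container starts from this image

$ docker build -t my-app:1.0 .    # build the image from the Dockerfile in the current directory

The resulting image then appears in docker images and can be started with docker run my-app:1.0.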

Running a Docker Container

Running a Docker container requires Docker to be installed on the host machine first; the steps depend on your operating system. A simple guide (documentation) on how to install Docker on different operating systems can be found at https://docs.docker.com/install/. Once this is done, open your terminal and confirm that Docker is installed correctly by running:

$ docker version

If everything is done correctly, the output should look similar to the one below.

$ docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        2d0083d
 Built:             Wed Jul  3 13:38:22 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       2d0083d
  Built:            Mon Jul  1 19:31:53 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Docker detached mode

Docker containers can run either in foreground or in detached mode. Detached mode is selected with the --detach or -d flag and makes the container run in the background of the terminal, without interactive input; foreground mode is the default when -d is not specified.

Output of running a Docker container in foreground mode:

$ docker run ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
5bed26d33875: Pull complete
f11b29a9c730: Pull complete
930bda195c84: Pull complete
78bf9a5ad49e: Pull complete
Digest: sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2ba392b7546b43a051853a341d
Status: Downloaded newer image for ubuntu:latest

An Ubuntu Docker container can be run in detached mode with the command:

$ docker run -d ubuntu

The above command asks your system to go to the Docker Hub mentioned earlier, pull the latest version of the ubuntu image, which is freely hosted in the Docker Hub repository, and run it on your host. Once completed, the Ubuntu Docker container will exist on your host, which can be confirmed by running docker ps for running containers only, or docker ps -a for all containers.

Output of running a Docker container in detached mode:

$ docker run -d ubuntu
15f0d559f52bb6c25e733b528b4baaed4b4d2c6274eac8cbd125de3d670c144d
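
You can verify what happened by listing containers. Note that a plain ubuntu container has nothing to keep it alive, so it exits as soon as its default command finishes and will only show up under docker ps -a; giving it a long-running command is a simple way to keep it up. The commands below are a sketch of that workflow:

$ docker ps -a                           # lists all containers, including ones that have already exited
$ docker run -d ubuntu sleep infinity    # starts a container with a long-running command so it stays up
$ docker ps                              # the sleeping container now appears as a running container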

Meanwhile, if you do not want to run the latest version of the image, a specific version can be requested with:

$ docker run -d ubuntu:18.04
$ docker run ubuntu:18.04

Here, 18.04 is the tag that specifies the version of the ubuntu image the container is started from.
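
Once more than one tag has been pulled, docker images lists them separately. The output below is only indicative; the image IDs, dates and sizes are placeholders and will differ on your machine:

$ docker images
REPOSITORY    TAG       IMAGE ID      CREATED       SIZE
ubuntu        latest    <image ID>    <created>     <size>
ubuntu        18.04     <image ID>    <created>     <size>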

Basic Docker Commands

$ docker version                  To check the current Docker version

$ docker pull <image name>        To pull an image from Docker Hub without running it

$ docker run <image name>         To start a container from the latest version of an image

$ docker ps                       To list running containers

$ docker ps -a                    To list all containers

$ docker stop <container ID>      To stop a running container

$ docker kill <container ID>      To terminate the container's processes and stop it immediately

$ docker rm <container ID>        To remove a stopped container

$ docker images                   To list the available images on the host

$ docker rmi <image ID>           To delete an image from the host

$ docker container prune          To remove all stopped containers at once

More Docker commands and functionalities can be found by running the docker help command in the terminal.
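
As a quick illustration of how these commands fit together, the sequence below starts a container, stops it and then removes it. The container ID is a placeholder; use the ID printed by docker ps on your machine:

$ docker run -d ubuntu sleep 300    # start a container that stays alive for five minutes
$ docker ps                         # note the container ID from this output
$ docker stop <container ID>        # stop the running container
$ docker rm <container ID>          # remove the stopped container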

What is Docker Compose?

Docker Compose is an automation tool for building and running multi-container Docker applications on a single host. This is achieved by creating a docker-compose YAML file that describes the container images of the applications to be deployed; the whole set of services can then be started or stopped with a single command.
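
As a sketch, a docker-compose.yml for a simple two-container application could look like the example below. The service names, images, port numbers and password are assumptions made purely for illustration:

version: "3"
services:
  web:
    image: nginx:latest            # web server container
    ports:
      - "8080:80"                  # publish container port 80 on host port 8080
  db:
    image: postgres:11             # database container
    environment:
      POSTGRES_PASSWORD: example   # demo value only; never hard-code real credentials

Running docker-compose up in the directory containing this file starts both containers, and docker-compose down stops and removes them again, as listed in the commands below.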

Basic Docker-Compose Commands

$ docker-compose version                           To check the docker-compose version

$ docker-compose config                            To verify the yaml file

$ docker-compose up                                To deploy the applications

$ docker-compose down                              To stop the applications

$ docker-compose logs                              To view the logs and status

Exposing Web Applications

Exposing web applications is essential because it allows two different containers on the same Docker network to communicate with each other through the exposed port. This is done with the EXPOSE instruction in a Dockerfile or the --expose flag on the command line.
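
Both variants are sketched below. EXPOSE in a Dockerfile only documents the port to other containers on the same Docker network, while the -p flag additionally publishes it on the host; the nginx image and the port numbers are examples only:

EXPOSE 80                            # Dockerfile instruction: declare the port the application listens on

$ docker run -d --expose 80 nginx    # expose port 80 at run time
$ docker run -d -p 8080:80 nginx     # additionally publish it: host port 8080 -> container port 80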

Container Orchestration

What is container orchestration?

Container orchestration is the automated management and control of containers across different infrastructures.

Features of Container Orchestration

Container orchestration has the following features:

  • Scaling
  • Scheduling
  • Load balancing
  • Clustering
  • Deployment
  • Fault tolerance
  • Monitoring
  • Services Exposure
  • Networking
Seyi Ewegbemi

Student Worker