Javier Guillermo

How Containers Are Making Way for 5G and an Edge-centric World – Part 2: Docker Architecture

In Part I of this container blog series, we covered the definition and origins of this technology and how companies like AT&T and Dell are leveraging the technology for 5G rollouts. In Part II, we’ll dive deeper into architecture and look at the major platforms available today.

I’m sure all of you have heard about Kubernetes and Docker, but do you really understand how they relate to container technology?

In short, Docker is an open-source project based on Linux containers. A container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Docker uses Linux kernel features like namespaces and control groups (cgroups) to create containers on top of an operating system.
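To make "packages up code and all its dependencies" concrete, here is the smallest possible demonstration, assuming Docker is installed (the image is a public one from Docker Hub):

  # Pull and run an off-the-shelf image; everything the app needs
  # (runtime, libraries, settings) ships inside the image itself.
  docker run --rm hello-world

The very same image runs unmodified on a laptop, a server or any cloud, which is the portability promise described above.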


The Rise of Docker


We discussed in Part I how the idea of containers can be traced back to the 70s, and there are other container technologies available today, such as Mesos, LXC, BSD jails, containerd (yes, that’s the name), Solaris Zones and Windows Server containers. But none of these technologies achieved adoption rates close to those of the current king: Docker. The latest estimates, although they vary depending on the sample method and who is doing the actual research, put Docker’s market share at about 80%, with its closest contender, CoreOS rkt, at an estimated 12-14%. The simple math leaves all the other options I briefly mentioned with less than 8% of the market share[1]. And we are talking about a market poised to reach $5 billion USD by 2023.



That’s a very impressive feat for a small company with less than five years of history. Nevertheless, competition is growing: back in 2017, Docker’s market share was over 97%!


So, what’s the secret to Docker’s success?

Moby Dock!

Yes, the cute mascot: a gigantic but hard-working whale, loaded with containers, beloved by the community and responsible for most of the success. Just kidding!

In all seriousness, Docker had a few features the competitors didn’t, mainly:


  1. Open source, which allowed for rapid adoption. Docker came along in 2013 with most of its initial code written by Solomon Hykes, and since then the growth has been exponential. If Docker had been released as pay-per-use software, I bet the adoption rate would have been much slower. But without software licensing fees, and with the backing of a whole community of users and developers building on the initial code, adoption was quick.

  2. Ease of use. Solomon’s mantra was “build once, run anywhere.” Thanks to containers, developers, administrators, consultants, architects and many others can quickly build and test portable applications. Anyone can package an application on their own computer and run it, unmodified, on any cloud (public, private or hybrid) or even on bare metal. All major cloud providers (AWS, Google Cloud, Azure, etc.) have developed tools for containers.

  3. Speed. Docker containers are lightweight and crazy-fast: they are sandboxed environments running on the host kernel, and they consume fewer resources because they have far less to load before starting up. A Docker container can be created in less than 3 seconds, much quicker than a VM, which has to boot a full virtual operating system every time. It is no coincidence, then, that the leader in virtualization software (VMware) is spending billions bringing traditional VM management tooling together with containers and buying companies focused on this technology (Bitnami, CloudHealth, Wavefront, Heptio, and just recently the multi-billion-dollar acquisition of Pivotal).

  4. Docker Hub. Users also benefit from the increasingly rich ecosystem of Docker Hub, home to thousands of public images created by a community of tens of thousands of users. Think of Docker Hub as an “app store for containers” offering all kinds of images: storage, security, messaging services, DevOps, application services, analytics, etc.

  5. Portability, modularity and scalability. Container technology allows applications to run at much larger scale in virtualized environments, because virtualizing the operating system is efficient and makes it easy to break your application’s functionality into individual containers. For example, you might have your SQL database running in one container and your Apache web server in another, while your Node.js application sits in a third. Thanks to containers and Docker’s ease of use, you can link these containers together to form your application, which makes it much simpler and quicker to scale or update components independently later on (see the sketch after this list).

  6. Continuous deployment, integration and innovation. This falls under the DevOps umbrella, a trend that has kept growing along with the cloud and the plethora of Software as a Service (SaaS) options out there. Containerization is built to assist the agile software development process and to be coupled with microservices. Continuous deployment and integration are at the core of the agile philosophy, and containers were designed with this paradigm in mind from their inception.
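To make point 5 concrete, here is a minimal sketch of that modular pattern, assuming Docker is installed; the container and network names are illustrative, and the images are public ones pulled from Docker Hub:

  # Create a user-defined bridge network so containers can reach each other by name.
  docker network create app-net

  # Run each piece of the stack in its own container.
  docker run -d --name db --network app-net -e POSTGRES_PASSWORD=secret postgres:13
  docker run -d --name web --network app-net -p 8080:80 httpd:2.4
  docker run -d --name app --network app-net -p 3000:3000 node:14 \
    node -e "require('http').createServer((req,res)=>res.end('ok')).listen(3000)"

  # Scale or update one piece without touching the others.
  docker stop web && docker rm web
  docker run -d --name web --network app-net -p 8080:80 httpd:2.4

Each container can be stopped, upgraded or scaled independently, which is exactly the modularity described above.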

The Docker Architecture

The core of Docker consists of the Docker Engine, Docker Client, Docker Objects and Docker Registries.



The Docker Engine is the system core; think of an application built with the client-server architecture in mind. The engine is installed on the host machine and has three components:

  1. Docker Server is the daemon (a persistent background process) called dockerd, which manages Docker images, containers, networks and storage volumes. It constantly listens for Docker API requests and processes them.

  2. Docker Engine REST API is an application programming interface (API) that applications use to interact with the Docker daemon; it can be accessed by any HTTP client (see the example after this list). REST stands for representational state transfer, the most common architectural style for creating and using web services.

  3. Docker Command Line Interface (CLI) is a command-line client for interacting with the Docker daemon. Its commands are pretty straightforward, which makes it easy to use.
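As a quick illustration of that REST API: on a default Linux install the daemon listens on a local Unix socket, so any HTTP client can talk to it directly:

  # Ping the daemon; it replies "OK" if it is up.
  curl --unix-socket /var/run/docker.sock http://localhost/_ping

  # List running containers -- the same call the CLI makes for "docker ps".
  curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json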

Next comes the Docker Client. We couldn’t have a server without a client, could we? Users interact with Docker through the client: whenever a docker command is entered, the client sends it to the dockerd daemon, which carries it out. Under the hood, docker commands use the Docker API.
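You can see this client-server split for yourself on any machine with Docker installed:

  # Prints a "Client" section (the CLI) and a "Server" section
  # (the dockerd engine it is talking to).
  docker version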

Then there are the Docker Registries, the locations where Docker images are stored. If you come from the OpenStack world, think of a registry as Glance for containers. Registries can be public or private, and the main commands that interact with them are pull, run and push.
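Here is a minimal sketch of the registry workflow, using an illustrative repository name (pushing assumes you are logged in to a registry such as Docker Hub):

  # Pull an image down from the default public registry (Docker Hub).
  docker pull alpine:3.12

  # Re-tag it under your own repository and push it back up.
  docker tag alpine:3.12 myrepo/alpine-custom:1.0
  docker push myrepo/alpine-custom:1.0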



Finally, we have the Docker Objects, which are the images, volumes, networks and containers:

    • Images are read-only templates containing the instructions to create a Docker container. You can either create your own images using a Dockerfile or use ready-made images from Docker Hub (explained before); see the build-and-run sketch after this list.

    • When you run a Docker image, it creates a Docker container. The application and its environment run inside this container, and you can use the Docker API or CLI to start, stop or delete it.

    • Think of volumes as drives: they persist the data generated and used by Docker containers, and they are managed through the Docker CLI or Docker API. Volumes work on both Windows and Linux containers. Rather than persisting data in a container’s writable layer, it is good practice to use volumes, because a volume’s contents exist outside the lifecycle of the container and using one does not increase the container’s size.

    • Networks. Docker networking is the passage through which isolated containers communicate with one another. There are mainly five network drivers in Docker: host (removes the network isolation between the container and the Docker host), bridge (the default driver), overlay (used when containers run on different Docker hosts, as in a Swarm), macvlan (assigns a MAC address to a container so it looks like a physical device) and none (disables networking).
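Tying the objects together, here is a minimal, hypothetical build-and-run sketch (app.py and the image and volume names are illustrative):

  # A trivial application and a Dockerfile describing its image.
  echo "print('hello from a container')" > app.py
  {
    echo 'FROM python:3.9-slim'
    echo 'COPY app.py /app/app.py'
    echo 'CMD ["python", "/app/app.py"]'
  } > Dockerfile

  # Build a read-only image from the Dockerfile.
  docker build -t my-app:1.0 .

  # Create a named volume; its contents outlive any single container.
  docker volume create my-data

  # Run a container from the image with the volume mounted at /data;
  # --rm removes the container on exit, but the image and volume remain.
  docker run --rm -v my-data:/data my-app:1.0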


Summary

I hope you enjoyed this overview of Docker’s architecture and features. Please stay tuned: additional blogs in this series will discuss Docker alternatives, Kubernetes and, lastly, Airship!
