Introduction to Container Virtualization and Orchestration

with Docker and Kubernetes
Chris Bryant
August 9, 2022

What are containers?

Containers are virtual packages that bundle a software application with all the dependencies it needs at runtime, making the application easy to deploy and scale across many different types of computing infrastructure. A running container is a runtime instance of a container image.

Images make up the contents of containers. Images are built on other images called base images, which provide the most basic dependencies and libraries. Further configuration is added by writing new image layers on top of the base image. These stacked, read-only image layers hold the application code and its dependencies, making the application more hardware-agnostic and easier to deploy at scale.
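The layering described above can be sketched in a minimal Dockerfile. The image names and file paths here are hypothetical, chosen only to illustrate how each instruction adds a layer on top of the base image:

```dockerfile
# Base image layer: a small Linux distribution (hypothetical pinned version)
FROM alpine:3.16

# Each instruction below writes a new read-only layer on top of the base image
RUN apk add --no-cache python3          # dependency layer
COPY app.py /opt/app/app.py             # application-code layer
CMD ["python3", "/opt/app/app.py"]      # metadata layer recording the start command
```

Because each instruction produces its own layer, unchanged layers can be cached and shared between images, which is part of what makes container deployment fast.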

Container images can be modified through a read-write “container layer” added to the top of the stack. All code modifications and data generated during runtime are written to this layer. The container layer is best suited to ephemeral data, and it is good practice to keep it light: when a container is deleted, its read-write layer is deleted with it, while the underlying image layers remain. Those image layers can still be accessed and used by other containers, even simultaneously. Other storage options, such as volumes, can hold persistent data that remains available for future projects and for other containers.

This figure is a visual representation of a container, including read-only (R/O) layers and the read-write (R/W) layer atop a base image.

How Containers Run

There are different platforms for developing and running containers. Docker is one of the most popular such platforms, so we will reference it as the standard for the purposes of this article.

Docker provides standard base images, such as Ubuntu and Alpine, that can be searched for in a library of base images. A Dockerfile works like a recipe: it references a base image and lists the commands for copying files, installing dependencies, and configuring the application. The “build” command executes the Dockerfile to produce an image. When a container is created from an image, Docker adds a read-write layer where changes made to the running container are written; files are created, updated, and deleted at this layer. Containers use the host operating system’s (OS) kernel to communicate with system resources, negating the need to virtualize a kernel. This allows containers to start up almost immediately.
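The build-then-run flow described above can be sketched with two Docker CLI commands. The image name is hypothetical; the commands assume a Dockerfile sits in the current directory:

```
# Build an image from the Dockerfile in the current directory,
# tagging it "myapp" (hypothetical name)
docker build -t myapp .

# Create and start a container from that image; Docker adds the
# read-write container layer at this point
docker run --rm myapp
```

The `--rm` flag deletes the container (and its read-write layer) when it exits, which is why persistent data belongs in volumes rather than the container layer.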

This figure shows how the Docker engine connects applications to the host OS.

A component called a storage driver manages the contents of the image layers and the writable layer, allowing changes to be made. If a change needs to be made to a file in an underlying layer, a “copy-on-write” strategy copies that file up to the writable layer at the point of modification. This strategy lowers input/output (I/O) and keeps the overall size of the image layers down because a whole new layer does not need to be created. Because the writable layer is deleted along with its container, Docker volumes are used to store data for later use and for other containers within the host system to access. Containerization is a dynamic deployment method, but container security should not be forgotten.
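The volume workflow described above can be sketched with the Docker CLI. The volume and image names are hypothetical:

```
# Create a named volume that outlives any single container
docker volume create appdata

# Mount the volume into a container; data written under /data persists
# even after the container (and its writable layer) is deleted
docker run --rm -v appdata:/data myapp

# A second container can mount the same volume and read that data
docker run --rm -v appdata:/data alpine:3.16 ls /data
```

This is the usual way to share state between containers without relying on the ephemeral container layer.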

Container Hardening 

There are many strategies to add more security to containers and their virtual environments. Hardening containers generally involves performing security scans, implementing firewalls, and running them through a secure environment to test for runtime vulnerabilities. Potential attack surfaces include images, container registries (secure banks where images are saved), container runtimes, and host OSs. Across all of these attack surfaces, regular scanning, continuous monitoring, and implementing access controls are standard security practices.

Hardening containers provides security for application source code and other sensitive data.

Regularly updating images ensures that they carry the most recent security patches against the latest vulnerability exploits. Image signing acts like a virtual fingerprint, allowing administrators to verify who made which changes to an image. Image registries should be private and give the administrator full control over the number and types of images allowed; a registry’s host server is also a vulnerable point that needs reinforcement. The container runtime, the component of the container engine responsible for starting containers, is another surface: most container security tools monitor the environment outside the container, so security within it relies on the measures written into the application itself. The runtime host, network protocols, and network payloads should be continuously monitored to ensure application availability. These security practices will harden containers and secure supporting components, but there is another layer to containerization: orchestration.
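Some of these hardening practices can be applied directly in a Dockerfile. This is a minimal sketch with hypothetical names: the base image version is pinned so updates are deliberate and auditable, and the process is switched to an unprivileged user to shrink the attack surface if the application is compromised:

```dockerfile
# Pin the base image to a specific version rather than "latest"
FROM alpine:3.16

RUN apk add --no-cache python3 && \
    adduser -D appuser               # create an unprivileged user

COPY app.py /opt/app/app.py

# Drop root: the container process runs as the unprivileged user
USER appuser
CMD ["python3", "/opt/app/app.py"]
```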

Container Orchestration

Container orchestration platforms, such as Kubernetes, handle the operational tasks of creating, starting, organizing, destroying, and monitoring containers by automating them, allowing developers to quickly scale for deployment. Kubernetes is one of the most popular container orchestration platforms, so we will reference it as the standard for the purposes of this article. 

Kubernetes has a few key functions that help users realize the full scalability potential of containerization. Kubernetes ensures that containers run when and where they are needed by automating the scheduling and distribution of applications. It also manages containers’ access to the resources and tools they need using declarative “manifests”: files that describe the desired configuration of resources. Kubernetes even manages communication permissions between applications within a cluster. Clusters are a main feature of Kubernetes architecture.
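A manifest of the kind described above can be sketched as a small YAML file. The names, image tag, and resource limits here are hypothetical; the manifest declares the desired state (three replicas of one container) and Kubernetes schedules and maintains it:

```yaml
# deployment.yaml -- a hypothetical manifest describing desired state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0   # hypothetical image tag
          resources:
            limits:
              memory: "128Mi"   # cap the resources this container may use
              cpu: "250m"
```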

Some of the basic components of Kubernetes architecture include:

  • Cluster - Clusters are the main operating structures of Kubernetes’s architecture. A cluster is made of connected, highly available computers or virtual machines called control planes and nodes.
  • Node - Nodes are worker computers that run a component called a kubelet, which is needed to communicate with the control plane, as well as the tools needed to run container operations, such as Docker.
  • Control Plane - A control plane is a master server that coordinates nodes in a cluster. The control plane manages the distribution and activation of pods. (It is advantageous to keep multiple control planes in a cluster in the case one fails).
  • Pods - Pods are objects that enclose one or more containers. They are the smallest deployable units in Kubernetes.
  • Namespaces - Namespaces are Kubernetes objects that are used to divide system resources within a cluster.
  • Kubernetes API - Kubernetes’s API is the core component of the control plane and allows users, different parts of the cluster, and external components to communicate. The API is responsible for enabling the query and manipulation of API objects like pods. 
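The components above fit together in even a very small manifest. This sketch, with hypothetical names, defines a namespace and places a pod inside it; applying the file sends both objects to the Kubernetes API, and the control plane schedules the pod onto a node:

```yaml
# namespace.yaml -- a hypothetical namespace dividing cluster resources
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# A pod placed in that namespace
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: team-a
spec:
  containers:
    - name: myapp
      image: myapp:1.0   # hypothetical image tag
```

A file like this would typically be submitted with `kubectl apply -f namespace.yaml`, which talks to the Kubernetes API on the user’s behalf.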

This figure is a visual representation of a cluster, including nodes, a control plane, kubelets, and container operation (Docker).

From a security standpoint, there are practices, tools, and built-in security qualities for Kubernetes to mitigate vulnerabilities and attacks. Possible attack surfaces include nodes, the Kubernetes API, networks, pods, and data. Kubernetes automatically detects and replaces failing or compromised containers and pods, as opposed to updating or patching them. Kubernetes also has built-in access controls and security contexts for its attack surfaces. Attack surfaces can be minimized by cutting unnecessary user accounts and surplus applications, libraries, and other operating system components. To learn more about Kubernetes security, visit sysdig.com.
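The built-in security contexts mentioned above are set directly in a pod’s manifest. This is a minimal sketch with hypothetical names showing a few common restrictions:

```yaml
# A pod with a security context that refuses to run as root
# and blocks writes to the container filesystem
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  containers:
    - name: myapp
      image: myapp:1.0                  # hypothetical image tag
      securityContext:
        runAsNonRoot: true              # reject containers that run as root
        readOnlyRootFilesystem: true    # make the root filesystem read-only
        allowPrivilegeEscalation: false
```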

Containerization Summarized

Virtual containerization is a complex topic worth diving into for the deployability and scalability benefits it offers. With a containerization engine like Docker, applications can be conveniently packaged and ready to start, delivering consistent performance across many different computing infrastructures. Security can be realized through the attritable, disposable qualities and isolated environments of containerized applications. Container orchestration platforms like Kubernetes make deploying containers at speed and scale practical, and they add layers of security through continuous monitoring and rigid access controls. These aspects make containerization a useful option for those seeking discrete, secure, system-agnostic, and highly scalable deployment methods.

© 2022 Second Front Systems, Inc.