Ready…Set…Start Your Containers
This article was originally published on cio.com, authored by Siva Sreeraman, Vice President, CTO and Modernization Tribe Leader at Mphasis.
In decades past, developers faced many errors when porting applications built for a specific computing environment. Configuration differences in the new environment, such as different versions of compilers, loaders, runtime libraries, middleware, and the operating system, created incompatibility and unreliability, and led to undesired increases in project effort, cost, and timelines.
Containers provide an elegant solution to this problem. Each container leverages a shared operating system kernel and encapsulates everything needed to run an application (application code, dependencies, environment variables, application runtimes, libraries, system tools etc.) in an isolated and executable unit. Differences in operating system distributions and underlying infrastructure configurations are thus abstracted away, allowing application programs to run correctly and identically even when deployed to different environments.
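As a concrete illustration, everything a container encapsulates is typically declared in an image definition. The Dockerfile below is a minimal, hypothetical sketch for a small Python service; the file names and base image are illustrative assumptions, not taken from the article:

```dockerfile
# Pinned, minimal base image: provides the OS userland and the language runtime
FROM python:3.12-slim

WORKDIR /app

# Bake dependencies into the image so every environment runs identical versions
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code itself
COPY app.py .

# Environment variables travel with the image
ENV APP_ENV=production

# The same command runs identically on any host with a container runtime
CMD ["python", "app.py"]
```

Because the code, dependencies, runtime, and environment variables are all captured in the image, the resulting container behaves the same on a developer laptop, a test server, or a cloud node.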
How we got here
Containerization originated in 2001 as a project allowing several general-purpose Linux servers to run on a single box with autonomy and security. Subsequent projects at IBM, Red Hat, and Docker moved the technology forward over the years. In 2014, Google launched its container orchestration platform Kubernetes (K8s) and disclosed that it was starting over 2 billion containers every week. In 2020, the Cloud Native Computing Foundation released data indicating an overwhelming preference for Kubernetes among companies that run containers in production.
Many organizations today decouple their complex monolithic applications into modular, manageable microservices packaged in containers which can be linked together. Container orchestrators such as Kubernetes further automate installation, deployment, scaling, and management of containerized application workloads on clusters, perform logging, debugging, version updates, and more.
Containers are grouped into deployable computing units called pods, which share network and storage resources and carry specifications for how to run their containers. Pods run on nodes: physical or virtual machines that each contribute a set of CPU and RAM resources. Nodes are managed by the container orchestration layer and grouped into clusters, which distribute work among individual nodes as needed to execute programs. When nodes are added or removed, the cluster absorbs the change transparently to the running programs.
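A minimal Kubernetes manifest makes these relationships concrete. The sketch below (all names, images, and values are illustrative assumptions) defines a pod with two containers that share the pod's network and a volume, and declares the CPU and RAM the scheduler should reserve on a node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative name
spec:
  volumes:
    - name: shared-data    # storage shared by both containers in the pod
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.27
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      resources:
        requests:          # what the scheduler reserves on a node
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "256Mi"
    - name: content-refresher
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers share the pod's network namespace (they can reach each other over localhost) and the shared-data volume, and Kubernetes places the whole pod on a single node that can satisfy the declared resources.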
Containers appeal to the software development community because of the agility, uniformity, and portability they bring to creating and deploying applications: code executes consistently regardless of the runtime environment, a 'write once, run anywhere' approach that holds across different infrastructures, on premises or in the cloud. Container images can be quickly rolled back if any issues are observed. Containers can be rapidly spun up to add business functionality and scale on demand, and torn down to reduce resource usage and infrastructure costs.
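In Kubernetes, rollback and on-demand scaling are typically driven through a Deployment. The fragment below is a hedged sketch (the deployment name, image, and replica count are assumptions) showing how the orchestrator keeps prior revisions so a bad release can be undone:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy              # illustrative name
spec:
  replicas: 3                   # spin capacity up or down on demand
  revisionHistoryLimit: 5       # retain old ReplicaSets so rollback is possible
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate         # replace pods gradually, avoiding downtime
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27     # updating this tag triggers a new rollout
```

If a new image misbehaves, `kubectl rollout undo deployment/web-deploy` reverts to the previous revision, and `kubectl scale deployment/web-deploy --replicas=10` adds capacity on demand.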
Since containers do not need to run a full operating system and share the host machine’s operating system kernel among each other, they are lightweight, and do not have the same resource utilization needs as virtual machines do. Containers are faster to start up, drive higher server efficiencies, and reduce server and licensing costs.
Containers allow developers to focus on business functionality and not worry about the underlying configurations of applications. A consistent and short deployment process enables faster delivery of new applications. 75% of companies using containers achieved a moderate to significant increase in application delivery speed.
A great benefit of isolating applications into containers is the inherent security this provides. Because images are the building blocks of containers, maliciously introduced code and unnecessary components can be kept out of containers by using trusted image registries, enhanced access control methods, and strict policies applied to both accounts and operations. Whenever container configurations are changed or containers are started, those events should be audited.
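In Kubernetes, several of these policies can be expressed directly in the pod specification. The fragment below is an illustrative sketch of a hardened container; the registry hostname and names are hypothetical, and the image digest is deliberately left as a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                      # illustrative name
spec:
  containers:
    - name: app
      # Pull only from a trusted, private registry, pinned by digest
      image: registry.internal.example/app@sha256:...
      securityContext:
        runAsNonRoot: true                # refuse to start as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true      # image contents cannot change at runtime
        capabilities:
          drop: ["ALL"]                   # start from zero Linux capabilities
```

Admission controllers and API-server audit logging can then record every configuration change and container start, supporting the auditability requirement described above.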
Though containers solve many security problems compared to traditional virtualization methods, they also introduce new challenges. The attack surface of a Kubernetes cluster is large and growing rapidly: layers upon layers of images span thousands of machines and services, giving cybercriminals many opportunities to launch coordinated attacks on Kubernetes and gain access to company networks by exploiting misconfigurations.
Recent attacks have included cryptojacking, wherein an organization's vast cloud compute resources are covertly diverted toward mining cryptocurrency. Because Kubernetes manages other machines and networks, enterprises should continuously strengthen their security postures and take proactive measures to defend themselves.
Though container cluster managers such as Docker Swarm and Apache Mesos have enabled developers to build, ship, and schedule multi-container applications, and to access, share, and consume container pools through APIs, container scaling is still evolving. Container orchestration tools and container cluster managers are not yet fully integrated with one another, cluster managers do not yet provide enterprise-class security, and a common set of standards is lacking.
Containerization best practices
Current best practices for container operations include:
- Building from minimal, trusted base images and pinning image versions rather than relying on "latest" tags
- Scanning images for known vulnerabilities before and after deployment
- Running containers as non-root users with the least privilege required
- Keeping containers immutable and stateless wherever possible
- Declaring resource requests and limits so orchestrators can schedule and protect workloads
- Centralizing logging and monitoring across clusters
Despite challenges, containers present many benefits, and offer enterprises an attractive choice for software application development. 61% of container technology adopters expect more than 50% of their existing and new applications to be packaged on containers over the next two years. By 2026, Gartner estimates that 90% of global organizations will be running containerized applications in production.
The usage of managed public cloud Container-as-a-Service (CaaS) offerings such as Amazon Web Services (AWS) Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) is widespread among enterprises today. Container-based Platform-as-a-Service (PaaS) offerings such as Google Cloud Anthos, Red Hat OpenShift, VMware Tanzu Application Service, and SUSE Rancher are also prevalent. Lightweight Kubernetes distributions (with half the memory footprint of full K8s and smaller binaries), such as SUSE Rancher K3s and Mirantis k0s, can be seen in edge, Internet of Things, and Reduced Instruction Set Computing (RISC) applications.
While the introduction of containers may add some vulnerabilities, the speed, efficiency, and savings they provide in return are well worth the easily managed risk. Thanks to these considerable benefits, container technology will continue to be a foundational element of the enterprise software technology stack over the coming years. Companies should continue to invest in and utilize containerization in their digital transformation journeys.