As containers are increasingly used to deploy new microservices and to virtualize some existing workloads, much of the control over system operations will shift up the stack to container management platforms, which orchestrate the placement of containers across multiple container hosts based on business-driven policies. A container management platform automates the creation and placement of containers to support higher-level services built from containerized application code, and it provides distributed computing functions such as service discovery, load balancing, failure detection and recovery, multi-tenancy, and distributed security management.
Much of the industry momentum for container management has consolidated around Kubernetes, which has been declared a “graduated” project by the Cloud Native Computing Foundation (CNCF) and is now supported by leading software vendors such as Docker, Google, IBM, Mesosphere, Microsoft, Oracle, Pivotal, Red Hat, SAP, SUSE and VMware. There are a variety of ways to deploy Kubernetes, and many of them will introduce a steep learning curve for I&O administrators. Kubernetes is open-source software (OSS) that can be installed on bare-metal servers, on virtual infrastructure based on hypervisors such as VMware vSphere, Hyper-V or KVM, or on compute instances hosted by an IaaS public cloud platform. The easiest way to deploy Kubernetes is to use a managed container-as-a-service (CaaS) offering such as Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS) or Amazon Elastic Container Service for Kubernetes (EKS). Indeed, if you do not have the technical expertise for the significant engineering effort of installing and maintaining Kubernetes in production on existing infrastructure, you should choose a managed CaaS approach.
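To illustrate how little infrastructure work a managed CaaS offering demands, the sketch below provisions a GKE cluster from the command line. It assumes the Google Cloud SDK is installed and a project is already configured; the cluster name and zone are placeholder values, not recommendations.

```shell
# Provision a three-node managed Kubernetes cluster on GKE.
# "demo-cluster" and the zone are illustrative placeholders.
gcloud container clusters create demo-cluster \
    --num-nodes 3 \
    --zone us-central1-a

# Fetch credentials so that kubectl talks to the new cluster.
gcloud container clusters get-credentials demo-cluster \
    --zone us-central1-a

# Confirm the worker nodes are registered and ready.
kubectl get nodes
```

The cloud provider operates the control plane, so there is no control-plane machine to provision, patch or upgrade; the rough equivalents on the other managed offerings are `az aks create` and `eksctl create cluster`.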
However, some organizations may prefer to manage their own deployment of Kubernetes for technical, business or governance reasons. In these environments, I&O administrators will be responsible for provisioning the machines to host Kubernetes clusters, integrating the clusters with underlying storage and network infrastructure, and installing and upgrading the Kubernetes software running on the machines. This document (Gartner subscription required) describes how to integrate Kubernetes with existing infrastructure, focusing on the steps needed to prepare for its deployment, and reviewing its impact on infrastructure operations.
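For a sense of what "installing and upgrading the Kubernetes software" involves, one common self-managed path uses kubeadm. The sketch below assumes kubeadm, a container runtime and the kubelet are already installed on each machine; the pod CIDR and version number are illustrative, and the choice of network add-on is a separate integration step.

```shell
# On the first control-plane machine: bootstrap the cluster.
# The pod CIDR must match the network add-on installed afterward.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Print a join command (bootstrap token plus CA cert hash)
# to be run on each worker machine.
sudo kubeadm token create --print-join-command

# Later, during an upgrade: preview available versions,
# then apply the upgrade on the control plane.
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.2   # illustrative target version
```

Every step here — machine provisioning, network integration, version upgrades — is work that falls to I&O in a self-managed deployment and disappears under a managed CaaS offering.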
Most of the responsibility for deploying applications and services on Kubernetes will fall to developers, or more likely, to a DevOps team. After all, Kubernetes was designed by Google to allow developers to deploy and change an elastic fleet of applications and services with minimal involvement from I&O teams. However, as the use of self-managed Kubernetes starts to scale up, I&O personnel will have to be involved in some important aspects of its operation. It is therefore important for I&O administrators not just to become familiar with the architecture of Kubernetes as soon as possible, but also to start collaborating with developers to establish key Kubernetes operating processes for its “Day 2” operations.
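The developer self-service model described above can be seen in a few kubectl commands, none of which require I&O involvement once the cluster exists. This is a sketch; the deployment name and image tags are hypothetical.

```shell
# Deploy a containerized application as a Deployment object.
kubectl create deployment web --image=nginx:1.25

# Scale the fleet elastically.
kubectl scale deployment web --replicas=5

# Roll out a new image version; Kubernetes performs a
# rolling update with no infrastructure ticket.
kubectl set image deployment/web nginx=nginx:1.26

# Watch the rolling update complete.
kubectl rollout status deployment/web
```

The "Day 2" processes that I&O and developers must agree on sit around commands like these: who upgrades the cluster underneath them, who monitors capacity, and who responds when a node fails.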