
Embracing the Evolution to Cloud-Native Infrastructure

By Tony Iams | January 24, 2023


The IT industry regularly undergoes significant evolutions in infrastructure approaches: the shift from mainframes to minicomputers in the 1970s, the adoption of client/server architecture based on industry-standard hardware and software in the 1980s and 1990s, and the rise of virtual machines in the early 2000s. Now, the emergence of cloud-native infrastructure represents a change of similar scale.

Gartner broadly defines the term “cloud-native” as something created to enable or leverage cloud characteristics. Cloud-native infrastructure is used to deliver platforms with agility that mirrors the agile processes for delivering cloud-native applications. Cloud-native infrastructure therefore needs to be programmable, resilient, immutable, modular, elastic, and declarative (PRIMED). There are different ways to deploy cloud-native infrastructure, but in practice, large-scale cloud-native initiatives will most likely be based on containers and Kubernetes. As Kubernetes becomes the foundation for a growing number of applications, both internally developed and delivered by ISVs, it effectively becomes the “infrastructure” on which these applications are deployed.
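To make the declarative and elastic properties concrete, consider a minimal Kubernetes Deployment manifest (a hedged sketch; the name, image and replica count are illustrative, not drawn from the source). The manifest declares a desired state, and the Kubernetes control plane continuously reconciles the cluster toward it:

```yaml
# Hypothetical example: a declarative Deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3                  # elastic: scale by changing the declared count
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0  # immutable: replace the image, don't patch it
```

Nothing in the manifest says *how* to create or repair the pods; the operator only declares *what* should exist, which is the essence of the declarative property in PRIMED.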

Compared with machine-centric virtual infrastructure, cloud-native infrastructure is fundamentally application-centric. When based on Kubernetes, cloud-native infrastructure introduces some practical changes: pods effectively become the CPUs, persistent volume claims (PVCs) become the data storage devices, and service connectivity layers such as service meshes become the network. Cloud-native infrastructure will also exploit the evolution of compute, storage and networking technology at lower levels of the infrastructure, such as running containers on bare-metal servers, offloading tasks to specialized function accelerator cards (FACs), using processors based on architectures such as ARM, and running code with micro-VM approaches like WebAssembly (Wasm).
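The application-centric mapping above can be sketched with a PVC and a pod that consumes it (a hedged illustration; the names and sizes are hypothetical). The application requests storage abstractly, by claim name and capacity, rather than by any physical device:

```yaml
# Hypothetical example: a PVC as the "storage device" of an
# application-centric platform, consumed by a pod via claim name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # capacity requested, not a specific disk
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    volumeMounts:
    - name: data
      mountPath: /var/data     # where the claimed storage appears
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-data  # bind by claim, not by device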

More importantly, unlike previous waves of infrastructure evolution, the introduction of cloud-native infrastructure will require more than the adoption of new architecture principles and technology. New operational practices such as GitOps, which leverages the active control plane in Kubernetes and its declarative configuration, and consumption-based models for infrastructure sourcing are also fundamental to its implementation. To achieve the full potential of cloud-native infrastructure, all three of these aspects (architecture, operations and sourcing) must be addressed holistically.

This document (Gartner subscription required) offers guidance to I&O technical professionals for implementing infrastructure that is optimized for cloud-native architecture using Kubernetes. The goal for deploying cloud-native infrastructure is to support a self-service platform for developing and/or delivering applications based on cloud-native architecture. Kubernetes is typically the core of cloud-native infrastructure today, but it will become less visible over time as it is increasingly delivered with a serverless experience, even as it is pushed further to the edge. Ultimately, software product teams will expect to work in an environment where low-level compute, storage and networking resources are abstracted away.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
