
Accelerating platform evolution for an AI-powered future

Author:

Johanna Heinonen

Senior Lecturer
Haaga-Helia University of Applied Sciences

Published: 22.04.2026

For years, traditional infrastructure defined how we built and ran software. Applications lived on static, often manually provisioned servers (physical or virtual machines) running tightly coupled monolithic applications. When something needed to change, teams opened a ticket and waited. Release cycles moved at the pace of quarterly planning, not customer demand.

Modern applications are increasingly distributed, global, and constantly evolving. User expectations shift in real time and customers expect new features in hours, not months. AI workloads push infrastructure harder than anything before them. These workloads require elasticity, Graphics Processing Unit (GPU) orchestration, and automation at a scale we have never seen before.

The old model – built for stability, not speed – simply cannot keep up with the scale, dynamism, and complexity of modern systems.

Transition to cloud-native infrastructure

The cloud-native transition – maybe the biggest shift in the history of infrastructure – is happening because modern applications, global scale, and AI workloads demand an automated, resilient, and open infrastructure. Organizations everywhere are moving to cloud-native, not because it is fashionable, but because the world they operate in has fundamentally changed.

Cloud-native introduces an entirely new operating model – one built on containers, declarative APIs, automation, and open-source collaboration. At the center of this transformation is Kubernetes. What started as a container orchestrator has become the universal control plane and the backbone of modern infrastructure, and is now developing into the operating system for AI.

Kubernetes brings cloud-native principles to life: it automates deployment, scaling, networking, and the full lifecycle of containerized applications. This transition is about speed, resilience, cost efficiency, and the ability to innovate without friction and scale without limits.
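
To make the declarative model concrete, the sketch below uses the official Kubernetes Python client to create a small Deployment: you describe the desired state, and Kubernetes continuously works to make reality match it. This is a minimal illustration, not a production recipe; the names, namespace, and nginx image are placeholders, and the snippet assumes a reachable cluster with a configured kubeconfig.

    # Minimal sketch using the official Kubernetes Python client
    # (pip install kubernetes). Names and image are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # read the local kubeconfig, as kubectl does

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # declared desired state: keep three pods running
            selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.27")]
                ),
            ),
        ),
    )

    # Kubernetes now owns the lifecycle: it schedules, restarts, and
    # replaces pods until the observed state matches the spec.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Scaling then becomes a one-line change to the replica count rather than a provisioning ticket.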

This transition will also shape ICT infrastructure education at Haaga-Helia, since understanding modern cloud-native practices is essential for working within the rapidly developing ICT ecosystem.

Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is part of the Linux Foundation, an open-source organization that hosts many industry open-source projects and supports the development of the Linux kernel. CNCF itself is dedicated to advancing cloud-native technologies. While Kubernetes is central to this ecosystem, the cloud-native landscape now spans hundreds of projects (CNCF Landscape 2024) across monitoring, networking, storage, and increasingly AI. CNCF provides vendor-neutral governance for open-source projects, defines standards for modern infrastructure, and ensures interoperability across tools and platforms.

KubeCon + CloudNativeCon is the CNCF’s flagship global conference and the largest gathering of Kubernetes developers, open-source maintainers, platform engineers, and cloud architects in the world. It is the place where the cloud-native community comes together and where the future of modern infrastructure is shaped.

The recent KubeCon Europe, held 23–26 March 2026 in Amsterdam, may have made history as the largest open-source event ever organized. It brought together more than 13,000 participants from over 100 countries and 3,474 organizations. The CNCF ecosystem itself spans more than 230 projects and is supported by over 300,000 contributors from 191 countries. KubeCon is not just a conference: it is the heartbeat of the cloud-native movement.

Kubernetes’ adoption and its evolution

The CNCF’s annual Cloud Native Survey (CNCF Announcement; CNCF Report 2026) shows that 82% of container users now run Kubernetes in production, making it the de facto operating layer for modern infrastructure. Its platform-agnostic design gives organizations the freedom to run on any hardware or cloud without vendor lock-in.

Kubernetes did not start as the global standard it is today. In the beginning, it was simply a scheduler: an effective, if somewhat complex, way to run containers consistently across environments, and a tool for teams dealing with containers at scale.

Today, Kubernetes has stopped being ‘just infrastructure’ and has evolved into the programmable control plane for modern distributed systems. It now runs mission-critical workloads (Masolo 2025; Theirstack; Kubernetes Case Studies): Uber’s Michelangelo machine-learning platform, for example, runs on Kubernetes, powering global-scale services and increasingly serving as the foundation for the company’s AI workloads. But AI also introduces an entirely new set of challenges – massive GPU clusters, heterogeneous hardware, high-throughput data pipelines, and complex scheduling requirements – that are pushing Kubernetes into its next phase of evolution.

Kubernetes: evolving into the operating system for AI workloads

By 2026, AI workloads are shifting to an estimated split of 67% inference (Google blog) and 33% training, marking a transition from building models to deploying them. Agents can only move as fast as their platform. If the platform cannot schedule GPUs, manage data throughput, or isolate workloads, AI agents stall.

The real challenges now lie in running inference (the process of using a trained AI model to make predictions or generate answers based on new input) reliably at massive scale, managing the skyrocketing cost of GPUs and specialized hardware, and creating standardized, repeatable patterns for deploying models and AI agents across organizations.
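
One such repeatable pattern is to package a model server as a Deployment that requests GPU capacity explicitly, so every team deploys models the same way. The sketch below is illustrative only: the container image is hypothetical, and the nvidia.com/gpu resource name assumes the NVIDIA device plugin is installed on the cluster.

    # Illustrative sketch of a GPU-backed inference Deployment.
    # Assumes the NVIDIA device plugin exposes the nvidia.com/gpu
    # resource; the container image is a hypothetical model server.
    from kubernetes import client, config

    config.load_kube_config()

    container = client.V1Container(
        name="model-server",
        image="registry.example.com/llm-inference:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1", "memory": "16Gi"},  # one GPU per replica
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference"),
        spec=client.V1DeploymentSpec(
            replicas=4,  # scale inference horizontally across GPU nodes
            selector=client.V1LabelSelector(match_labels={"app": "inference"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "inference"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)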

This is pushing cloud-native platforms, and Kubernetes specifically, into a new evolutionary phase. The key capabilities in this evolution are the following (a short sketch of the discovery capability follows the list):

  • Hardware- and NUMA-aware scheduling ensures that workloads land on the right CPU/memory topology.
  • Specialized hardware discovery and allocation lets Kubernetes discover accelerators such as GPUs and allocate them to workloads.
  • Agent sandboxing provides a secure, isolated environment where AI agents can run code, use tools, or interact with systems within strict boundaries.
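
As a small illustration of the discovery capability, the sketch below asks the cluster which nodes advertise allocatable GPUs. The nvidia.com/gpu resource name is an assumption tied to the NVIDIA device plugin; other vendors publish their own resource names.

    # Sketch: list which nodes advertise allocatable GPUs.
    # Assumes a device plugin publishes the nvidia.com/gpu resource.
    from kubernetes import client, config

    config.load_kube_config()

    for node in client.CoreV1Api().list_node().items:
        allocatable = node.status.allocatable or {}
        gpus = allocatable.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")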

GPUs have been central to modern machine learning and high-performance computing workloads, yet integrating them into Kubernetes has required specialized tooling and vendor-specific components. At KubeCon Europe in Amsterdam, NVIDIA announced that it will donate its GPU Dynamic Resource Allocation (DRA) driver to the CNCF (Boitano 2026), a major step towards standardizing GPU orchestration in cloud-native environments.
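
DRA replaces simple counted resources (such as the nvidia.com/gpu limit shown earlier) with structured, claim-based allocation. The sketch below shows roughly what a ResourceClaim looks like, with a strong caveat: the resource.k8s.io API is still maturing, so the API version, field names, and device-class name here are assumptions that may differ across Kubernetes releases and DRA drivers.

    # Hedged sketch of a DRA ResourceClaim. The resource.k8s.io API is
    # still evolving: the apiVersion and field names vary by Kubernetes
    # release, and the device-class name is set by the installed driver.
    from kubernetes import client, config

    config.load_kube_config()

    claim = {
        "apiVersion": "resource.k8s.io/v1beta1",  # assumption: varies by release
        "kind": "ResourceClaim",
        "metadata": {"name": "gpu-claim"},
        "spec": {
            "devices": {
                "requests": [
                    # deviceClassName is driver-specific (assumed here)
                    {"name": "gpu", "deviceClassName": "gpu.nvidia.com"}
                ]
            }
        },
    }

    # The generic CustomObjectsApi addresses any API group by path.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="resource.k8s.io",
        version="v1beta1",
        namespace="default",
        plural="resourceclaims",
        body=claim,
    )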

It would be preferable for open source to win as the foundational platform for AI workloads, because otherwise AI infrastructure could become closed and centralized, putting the intelligence layer of the internet under the control of only a few dominant platforms.

Regulation, sovereignty, and the future of open collaboration

Cloud-native technology has grown into a global driver of innovation because of its open, modular architecture, which allows developers everywhere to build on shared components rather than reinventing them. This shared-upstream model accelerates innovation and encourages worldwide collaboration, as contributors across the globe continuously improve the same codebases – strengthening reliability, security, and development speed for all.

Cloud-native keeps the Internet running. The world’s online services depend on shared open-source infrastructure for deployment, scaling, reliability, and security. But new pressures are reshaping the cloud-native landscape, as governments, enterprises, and critical industries seek greater control over data, infrastructure, and software supply chains.

Europe is introducing major regulatory frameworks such as the EU Cyber Resilience Act (CRA). Governments are launching sovereign tech funds (STFs) and new certification programs. Across regions, jurisdictional requirements are increasing the friction between global open-source collaboration and demands for local oversight. The term ‘sovereign AI’ increasingly refers to efforts that allow nations or organizations to develop AI capabilities with minimal external dependency, maintaining control over systems, data, and decision-making. In this environment, protecting the global code commons becomes essential (Gerosa et al. 2025).

If the requirement of sovereignty leads to fragmentation, we risk losing the value open source has brought globally.

Sovereignty, however, should never be confused with isolation. It can be achieved through globally shared open-source code combined with locally governed deployments. This way the innovation layer remains global, but the operational layer adapts to regional requirements. Open-source foundations play a critical enabling role by providing governance structures, coordinating security practices, and sustaining the shared technical commons.

The apparent paradox between sovereignty and collaboration thus resolves through open-source methodologies that enable nations and organizations to control their data, deployments, and AI capabilities while still benefiting from collective innovation.

Key takeaways
• Traditional infrastructure cannot meet modern demands, as static servers, manual processes, and monolithic apps were built for a slower, simpler era that no longer exists.
• Cloud-native has emerged as the new operating model, built on containers, declarative APIs, automation, and open-source collaboration, with the hardware-agnostic Kubernetes at its center.
• Kubernetes is evolving into the operating system for AI, enabling GPU orchestration, hardware-aware scheduling, and secure agent execution at massive scale.
• If open source does not win as the foundational platform for AI, the infrastructure powering AI could become closed and centralized, placing control of the internet’s intelligence layer with only a few dominant platforms.
• Open collaboration and sovereignty must coexist, as global open-source innovation paired with locally governed deployments ensures both interoperability and control without fragmenting the shared code commons.

References

CNCF Landscape. 2024. Accessed: 21.4.2026.

Gerosa, M., Hermansen, A., Lai, A. & Lawson, A. 2025. The State of Sovereign AI. Linux Foundation. Accessed: 21.4.2026.

Google blog. What is AI inference? Accessed: 21.4.2026.

Kubernetes Case Studies. Accessed: 21.4.2026.

Theirstack. Companies that use Kubernetes. Accessed: 21.4.2026.

The author has used artificial intelligence to better formulate the text.

Picture: Shutterstock