Running Kubernetes Where Your Data Lives: AKS on Azure Local


Containers and Kubernetes have moved from being the domain of cloud-native start-ups to a mainstream deployment model across enterprises of all sizes. The challenge for many organisations has been that while Kubernetes works brilliantly in the public cloud, running it on-premises has historically been a “you’re on your own” exercise. Build your own clusters, manage your own control plane, figure out your own upgrade process, handle your own networking. It’s doable, but it’s a lot of operational overhead.

Azure Kubernetes Service (AKS) enabled by Azure Arc on Azure Local changes this equation. It brings the managed Kubernetes experience that Azure customers expect in the public cloud down to your on-premises infrastructure. And with the Azure Local 2411 release, it’s become materially better.

What AKS on Azure Local Actually Is

At its simplest, AKS on Azure Local lets you create and manage Kubernetes clusters on your Azure Local infrastructure using the same Azure APIs, tools, and portal experience that you’d use in Azure itself. The AKS control plane runs locally on your Azure Local cluster, managed through Azure Arc, and you deploy workloads to it just as you would to an AKS cluster in Azure.

The key differentiator versus running vanilla Kubernetes on-premises is management. Microsoft handles the Kubernetes control plane lifecycle, including upgrades and patching. You manage your workloads. This is the same value proposition that makes AKS popular in Azure, and having it available on-premises is significant.

The 2411 release supports Kubernetes versions 1.27 through 1.29, and the AKS Arc CLI extension has been updated with new capabilities for cluster management, node pool operations, and monitoring.
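As a sketch of what that Azure-consistent experience looks like in practice, the commands below create a cluster with the AKS Arc CLI extension. The resource group, custom location, logical network, and Entra group ID are placeholders for your environment, and the exact patch version should come from what your deployment reports as supported.

```shell
# Install or update the AKS Arc CLI extension
az extension add --name aksarc --upgrade

# Create an AKS cluster on Azure Local. All names below are placeholders;
# the Kubernetes version must be one your deployment supports (1.27-1.29 at 2411).
az aksarc create \
  --name my-aks-cluster \
  --resource-group my-rg \
  --custom-location my-custom-location \
  --vnet-ids my-logical-network \
  --aad-admin-group-object-ids <entra-group-object-id> \
  --kubernetes-version 1.29.4 \
  --generate-ssh-keys

# Retrieve credentials and verify, just as you would with AKS in Azure
az aksarc get-credentials --name my-aks-cluster --resource-group my-rg
kubectl get nodes
```

From here, `kubectl`, Helm, and the Azure portal all work against the cluster the same way they do against AKS in Azure.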

Why This Matters

The use cases for on-premises Kubernetes are growing rapidly.

Application modernisation is the most common driver I see. Organisations want to containerise applications but can’t or won’t move them to the public cloud. Data residency requirements, latency sensitivity, or simply the economics of running persistent workloads on-premises all contribute to this. AKS on Azure Local gives you a managed Kubernetes platform without needing to send your data to the cloud.

Edge and branch deployments are increasingly relevant. Think retail locations running inventory management, manufacturing floors running quality control, or healthcare facilities running diagnostic workloads. These need local compute, and increasingly they need Kubernetes to run modern applications.

Developer consistency is underappreciated. If your developers are building containerised applications targeting AKS in Azure, having an identical API surface on-premises means the same CI/CD pipelines, the same Helm charts, the same deployment manifests work in both locations. This reduces friction and speeds up delivery.
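To make the consistency point concrete, here is a minimal Deployment manifest that applies unchanged to AKS in Azure or AKS on Azure Local; the image name, registry, and replica count are illustrative.

```yaml
# deployment.yaml -- identical for AKS in Azure and AKS on Azure Local.
# Image, registry, and replica count are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inventory-api
  template:
    metadata:
      labels:
        app: inventory-api
    spec:
      containers:
        - name: inventory-api
          image: myregistry.azurecr.io/inventory-api:1.4.2
          ports:
            - containerPort: 8080
```

The same `kubectl apply -f deployment.yaml` step in your pipeline targets either environment purely by switching the kubeconfig context.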

The Dell AX Experience

Running AKS on Dell AX nodes for Azure Local works well in practice. The hardware validation that Dell puts into the AX system extends to the AKS workloads running on it, and the Solution Builder Extension handles firmware and driver updates for the underlying infrastructure without disrupting the Kubernetes clusters running above.

One consideration worth calling out is resource planning. AKS on Azure Local runs alongside your VM workloads on the same physical infrastructure. The AKS control plane VMs consume compute and memory resources, and your Kubernetes worker nodes will compete with other workloads for the same pool of resources. Plan your node sizes and resource reservations carefully, especially if you’re running a mixed workload environment.
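One way to keep that planning explicit is to choose worker node sizes deliberately rather than accepting defaults. A sketch, assuming the placeholder cluster and resource group names used above; the VM size shown is an example, and the list command shows what your deployment actually offers.

```shell
# See which VM sizes this Azure Local deployment can provision
az aksarc vmsize list \
  --custom-location my-custom-location \
  --resource-group my-rg

# Add a node pool with an explicit count and size, so the compute and
# memory the workers will draw from the shared pool is a known quantity.
# Names and the size below are placeholders.
az aksarc nodepool add \
  --name workload-pool \
  --cluster-name my-aks-cluster \
  --resource-group my-rg \
  --node-count 3 \
  --node-vm-size Standard_A4_v2
```

Sizing node pools this way makes it easier to reason about what remains for VM workloads on the same physical hosts.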

For storage, AKS on Azure Local can use persistent volumes backed by Storage Spaces Direct, which means your containerised workloads get the same storage performance and resiliency characteristics as your VM workloads. If you’re running Dell AX nodes with PowerFlex, the CSI driver support for AKS extends the storage options further.
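A persistent volume claim against the cluster's built-in disk storage class is all it takes to land container data on Storage Spaces Direct. The storage class name below is the default that AKS Arc provisions; if you are using a different CSI driver (PowerFlex, for example), substitute its class name.

```yaml
# pvc.yaml -- claim backed by the cluster's default disk storage class,
# which sits on Storage Spaces Direct on Azure Local. The class name
# "default" is the out-of-box value; adjust for custom CSI classes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 20Gi
```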

What’s Coming

The AKS on Azure Local story is one of continuous improvement. GPU support for Kubernetes node pools is increasingly relevant as AI inference workloads move to the edge. Improvements to networking with Azure CNI overlay reduce the IP address consumption overhead. And the monitoring story through Azure Monitor for containers provides the same observability experience you get with AKS in Azure.

I also think the combination of AKS on Azure Local with Azure Arc’s GitOps capabilities is worth watching. Being able to declaratively manage the state of your on-premises Kubernetes clusters from a Git repository, using Flux or ArgoCD through Azure Arc, is a powerful operational model that brings cloud-native practices to on-premises infrastructure.
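The GitOps model above can be attached to a cluster with one CLI call. A hedged sketch, reusing the placeholder cluster names from earlier; the repository URL, branch, and kustomization path are illustrative.

```shell
# Attach a Flux GitOps configuration to the cluster through Azure Arc.
# Repository URL, branch, and path are placeholders; after this, the
# cluster continuously reconciles its state against the Git repository.
az k8s-configuration flux create \
  --resource-group my-rg \
  --cluster-name my-aks-cluster \
  --cluster-type provisionedClusters \
  --name cluster-config \
  --url https://github.com/contoso/cluster-config \
  --branch main \
  --kustomization name=apps path=./apps prune=true
```

Because the configuration itself is an Azure resource, you can apply the same definition across a fleet of edge clusters with Azure Policy.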

The Bottom Line

If you’re running Azure Local and you have containerised workloads, or plan to, AKS enabled by Azure Arc should be your default Kubernetes platform. The operational simplicity of a managed control plane, the consistency with AKS in Azure, and the integration with the broader Azure management story make it the obvious choice. It’s not perfect; there are still gaps and rough edges as the platform matures. But the trajectory is clear, and the experience is getting meaningfully better with each release.