Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)

This article covers the setup process, dual-stack networking configuration, and an example workload deployment for Azure CNI Overlay AKS clusters. For an overview of Azure CNI Overlay networking, see Azure Container Networking Interface (CNI) Overlay networking in Azure Kubernetes Service (AKS) overview.

Important

Starting on 30 November 2025, AKS will no longer support or provide security updates for Azure Linux 2.0. Starting on 31 March 2026, node images will be removed, and you'll be unable to scale your node pools. Migrate to a supported Azure Linux version by upgrading your node pools to a supported Kubernetes version or migrating to osSku AzureLinux3. For more information, see [Retirement] Azure Linux 2.0 node pools on AKS.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
  • Azure CLI version 2.48.0 or later. To install or upgrade Azure CLI, see Install Azure CLI.
  • An existing Azure resource group. If you need to create one, see Create Azure resource groups.
  • For dual-stack networking, you need Kubernetes version 1.26.3 or later.

Key parameters for Azure CNI Overlay AKS clusters

The following table describes the key parameters for configuring Azure CNI Overlay networking in AKS clusters:

  • --network-plugin: Set to azure to use Azure CNI networking.
  • --network-plugin-mode: Set to overlay to enable Azure CNI Overlay networking. This setting applies only when --network-plugin=azure.
  • --pod-cidr: Specify a custom pod CIDR block for the cluster. The default is 10.244.0.0/16.

The default network plugin behavior depends on whether you explicitly set --network-plugin:

  • If you don't specify --network-plugin, AKS defaults to Azure CNI Overlay.
  • If you specify --network-plugin=azure and omit --network-plugin-mode, AKS intentionally uses VNet (node subnet) mode for backward compatibility.
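
As a quick illustration of the second case, the following invocation creates a node subnet mode cluster, not an overlay one, because --network-plugin-mode is omitted. This is a sketch using the same placeholder variables as the examples below:

    az aks create \
        --name $CLUSTER_NAME \
        --resource-group $RESOURCE_GROUP \
        --network-plugin azure \
        --generate-ssh-keys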

Create an Azure CNI Overlay AKS cluster

  • Create an Azure CNI Overlay AKS cluster using the az aks create command with --network-plugin=azure and --network-plugin-mode=overlay. If you don't specify a value for --pod-cidr, AKS assigns the default value of 10.244.0.0/16. The following example sets a custom pod CIDR of 192.168.0.0/16; a quick verification sketch follows the command.

    az aks create \
        --name $CLUSTER_NAME \
        --resource-group $RESOURCE_GROUP \
        --location $REGION \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --pod-cidr 192.168.0.0/16 \
        --generate-ssh-keys
    

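After the cluster is created, you can confirm that overlay mode took effect by querying the cluster's network profile. This is a quick check; the query path assumes the networkProfile schema that az aks show returns, and the expected output is overlay:

    az aks show \
        --resource-group $RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --query "networkProfile.networkPluginMode" \
        --output tsv
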
Add a new node pool to a dedicated subnet

To control which IP addresses the node VMs use for traffic to resources in the VNet or in peered VNets, you can add a node pool to a dedicated subnet in the same VNet.

  • Add a new node pool to the cluster using the az aks nodepool add command and specify the subnet resource ID with the --vnet-subnet-id parameter (a sketch for looking up the subnet ID follows this example). For example:

    az aks nodepool add \
      --resource-group $RESOURCE_GROUP \
      --cluster-name $CLUSTER_NAME \
      --name $NODE_POOL_NAME \
      --node-count 1 \
      --mode system \
      --vnet-subnet-id $SUBNET_RESOURCE_ID
    
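
If you don't already have the subnet resource ID, you can look it up with the az network vnet subnet show command. The $VNET_NAME and $SUBNET_NAME variables here are placeholders, not values defined earlier in this article:

    SUBNET_RESOURCE_ID=$(az network vnet subnet show \
        --resource-group $RESOURCE_GROUP \
        --vnet-name $VNET_NAME \
        --name $SUBNET_NAME \
        --query id \
        --output tsv)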

About Azure CNI Overlay AKS clusters with dual-stack networking

You can deploy your Azure CNI Overlay AKS clusters in a dual-stack mode with an Azure virtual network (VNet). In this configuration, nodes receive both an IPv4 and an IPv6 address from the Azure VNet subnet. Pods receive both an IPv4 and an IPv6 address from an address space that's separate from the nodes' Azure VNet subnet. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure VNet. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
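
Once workloads are running on such a cluster, you can see both pod address families directly. The following is a quick check; podIPs is the standard dual-stack field in the pod status:

    kubectl get pods -o custom-columns="NAME:.metadata.name,IPS:.status.podIPs[*].ip"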

Note

You can also deploy dual-stack networking clusters with Azure CNI Powered by Cilium. For more information, see Azure CNI Powered by Cilium dual-stack networking.

Dual-stack networking limitations

The following features aren't supported with dual-stack networking:

Key parameters for dual-stack networking

The following table describes the key parameters for configuring dual-stack networking in Azure CNI Overlay AKS clusters:

  • --ip-families: Takes a comma-separated list of IP families to enable on the cluster. Only ipv4 or ipv4,ipv6 are supported.
  • --pod-cidrs: Takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from. The count and order of ranges in this list must match the value provided to --ip-families. If no values are supplied, the default value of 10.244.0.0/16,fd12:3456:789a::/64 is used.
  • --service-cidrs: Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from. The count and order of ranges in this list must match the value provided to --ip-families. If no values are supplied, the default value of 10.0.0.0/16,fd12:3456:789a:1::/108 is used. The IPv6 subnet assigned to --service-cidrs can be no larger than a /108.
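
For explicit control over the ranges, the create command shown later in this article can pass --pod-cidrs and --service-cidrs. The following sketch simply writes out the documented defaults; the count and order of each list must match --ip-families:

    az aks create \
        --location $REGION \
        --resource-group $RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --ip-families ipv4,ipv6 \
        --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
        --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108 \
        --generate-ssh-keys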

Create an Azure CNI Overlay AKS cluster with dual-stack networking (Linux)

  1. Create an Azure resource group for the cluster using the az group create command.

    az group create --location $REGION --name $RESOURCE_GROUP
    
  2. Create a dual-stack AKS cluster using the az aks create command with the --ip-families parameter set to ipv4,ipv6. A quick way to verify the result follows this step.

    az aks create \
        --location $REGION \
        --resource-group $RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --ip-families ipv4,ipv6 \
        --generate-ssh-keys
    
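
To check that the nodes received both an IPv4 and an IPv6 address, pull the cluster credentials and list the node addresses. This is a minimal check; the custom-columns expression is just one way to surface the addresses:

    az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
    kubectl get nodes -o custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[*].address"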

Create an Azure CNI Overlay AKS cluster with dual-stack networking (Windows)

Important

AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:

Before you create an Azure CNI Overlay AKS cluster with dual-stack networking with Windows node pools, you need to install the aks-preview Azure CLI extension and register the AzureOverlayDualStackPreview feature flag in your subscription.

Install the aks-preview Azure CLI extension

  1. Install the aks-preview extension using the az extension add command.

    az extension add --name aks-preview
    
  2. Update to the latest version of the extension released using the az extension update command.

    az extension update --name aks-preview
    

Register the AzureOverlayDualStackPreview feature flag

  1. Register the AzureOverlayDualStackPreview feature flag using the az feature register command.

    az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
    

    It takes a few minutes for the status to show Registered.

  2. Verify the registration status using the az feature show command:

    az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
    
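
    To print just the registration state (handy for scripting), you can add a --query filter:

    az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview" --query "properties.state" --output tsv
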
  3. When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider using the az provider register command.

    az provider register --namespace Microsoft.ContainerService
    

Create a dual-stack Azure CNI Overlay AKS cluster and add a Windows node pool

  1. Create a cluster with Azure CNI Overlay using the az aks create command.

    az aks create \
        --name $CLUSTER_NAME \
        --resource-group $RESOURCE_GROUP \
        --location $REGION \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --ip-families ipv4,ipv6 \
        --generate-ssh-keys
    
  2. Add a Windows node pool to the cluster using the az aks nodepool add command.

    az aks nodepool add \
        --resource-group $RESOURCE_GROUP \
        --cluster-name $CLUSTER_NAME \
        --os-type Windows \
        --name $WINDOWS_NODE_POOL_NAME \
        --node-count 2
    
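
You can confirm that both node pools are present and ready using the az aks nodepool list command:

    az aks nodepool list \
        --resource-group $RESOURCE_GROUP \
        --cluster-name $CLUSTER_NAME \
        --output table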

Deploy an example workload to the Azure CNI Overlay AKS cluster

This example deploys an NGINX web server to a dual-stack Azure CNI Overlay cluster and exposes it using a LoadBalancer Service with both IPv4 and IPv6 addresses.

Note

We recommend using the application routing add-on for ingress in AKS clusters. However, for demonstration purposes, this example deploys an NGINX web server without the application routing add-on. For more information about the add-on, see Managed NGINX ingress with the application routing add-on.
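
The steps that follow assume a deployment named nginx already exists in the cluster. If it doesn't, a minimal way to create one looks like the following; the image and replica count here are illustrative, not values from this article:

    kubectl create deployment nginx --image=nginx --replicas=3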

Expose the workload using a LoadBalancer Service

Expose the NGINX deployment using either kubectl commands or YAML manifests.

Important

There are currently two limitations pertaining to IPv6 services in AKS:

  • Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with externalTrafficPolicy: Cluster fails.
  • As a result, you must deploy IPv6 services with externalTrafficPolicy: Local, which causes kube-proxy to respond to the probe on the node.

  1. Expose the NGINX deployment using the kubectl expose deployment nginx command.

    kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer
    kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}'
    

    Your output should show the exposed services. For example:

    service/nginx-ipv4 exposed
    service/nginx-ipv6 exposed
    
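
    On Azure Linux node pools, per the limitation called out above, the IPv6 service needs externalTrafficPolicy: Local. The same --overrides flag can carry that field; this variant is a sketch, not a command from this article:

    kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"], "externalTrafficPolicy": "Local"}}'

    If nginx-ipv6 already exists from the previous command, delete it first (kubectl delete service nginx-ipv6) or choose a different --name.
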
  2. Once the deployment is exposed and the LoadBalancer services are fully provisioned, get the IP addresses of the services using the kubectl get services command.

    kubectl get services
    

    Your output should show the services with their assigned IP addresses. For example:

    NAME         TYPE           CLUSTER-IP               EXTERNAL-IP         PORT(S)        AGE
    nginx-ipv4   LoadBalancer   10.0.88.78               20.46.24.24         80:30652/TCP   97s
    nginx-ipv6   LoadBalancer   fd12:3456:789a:1::981a   2603:1030:8:5::2d   80:32002/TCP   63s
    
  3. Get the IPv6 service's external IP address using the kubectl get services command and store it in an environment variable.

    SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
  4. Verify functionality using a curl request from an IPv6-capable host (Azure Cloud Shell isn't IPv6-capable).

    curl -s "http://[${SERVICE_IP}]" | head -n5
    

    Your output should show the NGINX welcome page HTML. For example:

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    

To learn more about Azure CNI Overlay networking on AKS, see the following articles: