Monitor the release status by regions at AKS-Release-Tracker.
- Draft is looking to get feedback. If you have used Draft or are interested in Draft, please click here to start a conversation with the AKS team.
- Starting with Kubernetes 1.25, the following changes will be made default:
- Ubuntu 22.04 will be the default host OS for x86, AMD, and ARM64 architectures.
- Windows Server 2022 will be the default Windows host. Important: Windows Server 2019 containers will not work on Windows Server 2022 hosts.
- The Azure Cloud Provider will be updated to v1.25.
- Kubernetes 1.21 version has been deprecated as of July 31st, 2022. See documentation on how to upgrade your cluster.
- Some AKS labels have been deprecated with the Kubernetes 1.24 release. Update your AKS labels to the recommended substitutions. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Docker is no longer supported as a container runtime on Windows. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Features
- AKS as an EventGrid event source is now Generally Available.
- Updating the Kubelet managed identity is now generally available.
- Multi-instance GPU support for AKS nodepools is now Generally Available.
- Disable CSI Storage Drivers is now Generally Available.
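As an illustrative sketch (hypothetical names; flags assumed from the Azure CLI of this period), a cluster can be created with the managed CSI storage drivers turned off:

```bash
# Create a cluster with the CSI disk/file drivers and snapshot controller disabled.
# The same flags are assumed to work with `az aks update` on existing clusters.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --disable-disk-driver \
  --disable-file-driver \
  --disable-snapshot-controller
```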
- Preview Features
- Azure CNI Overlay now supports 5th generation VM SKUs (v5 SKUs) to be used as nodes.
- Image Cleaner, for removal of insecure container images cached in the nodes, is now in public preview.
- Azure Network Policy Manager (NPM) is now supported in public preview for Windows nodepools and containers (using Windows Server 2022). Security rules from Kubernetes Network Policy resources can now be enforced on all pod traffic on/across Linux and Windows Server 2022 nodes for clusters created with `--network-policy=azure`. NPM continues to be a managed solution, configurable at cluster creation.
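As a hedged sketch (hypothetical names), this is the cluster configuration the Windows NPM support applies to, with Azure CNI and Azure as the network policy engine:

```bash
# Create a cluster with Azure CNI and Azure Network Policy Manager (NPM).
# Kubernetes NetworkPolicy resources then apply to Linux pods and, with
# Windows Server 2022 node pools, to Windows pods as well.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy azure
```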
- Behavioral Changes
- For Kubernetes 1.24+, services of type `LoadBalancer` with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as the health probe protocol (before v1.24.0, TCP was used), and `/` will be used as the default health probe request path. If your service doesn't respond with 200 for `/`, please ensure you're setting the service annotation `service.beta.kubernetes.io/port_{port}_health-probe_request-path` or `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` (applies to all ports) with the correct request path to avoid service breakage.
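For illustration, a minimal Service manifest (hypothetical names and ports) that sets the per-port annotation so the load balancer probes /healthz rather than the new default /:

```bash
# LoadBalancer Service with an explicit health probe request path for port 80.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/port_80_health-probe_request-path: /healthz
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    appProtocol: http
    port: 80
    targetPort: 8080
EOF
```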
- Component Updates
- Update Windows NPM to v1.4.34.
- Update Azure CNI to v1.4.32.
- OSM updated to v1.2.1.
- Azure Cloud Provider for kubernetes was updated to v1.24.5, v1.23.18 (for these respective kubernetes minor versions), and to v1.1.21 for kubernetes minor version 1.22.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.09.05
Monitor the release status by regions at AKS-Release-Tracker.
- Draft is looking to get feedback. If you have used Draft or are interested in Draft, please click here to start a conversation with the AKS team.
- Starting with Kubernetes 1.25, the following changes will be made default:
- Ubuntu 22.04 will be the default host OS for x86, AMD, and ARM64 architectures.
- Windows Server 2022 will be the default Windows host. Important: Windows Server 2019 containers will not work on Windows Server 2022 hosts.
- Kubernetes 1.21 version has been deprecated as of July 31st, 2022. See documentation on how to upgrade your cluster.
- Some AKS labels have been deprecated with the Kubernetes 1.24 release. Update your AKS labels to the recommended substitutions. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Docker is no longer supported as a container runtime on Windows. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Features
- Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service is now generally available.
- ARM64 AKS nodepool is now generally available.
- AKS now supports aborting a long running operation, allowing you to take back control and run another operation seamlessly.
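A hedged sketch of what aborting an in-flight operation can look like (hypothetical names; assumes a CLI/extension version that ships the operation-abort command):

```bash
# Abort the currently running operation on a cluster (for example a long-running upgrade)
# so that a new operation can be started.
az aks operation-abort \
  --resource-group myResourceGroup \
  --name myAKSCluster
```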
- Preview Features
- Azure CNI Overlay for AKS is now Public Preview.
- Bug fixes
- DNS resolution failure due to Ubuntu security patch is fixed.
- Behavior changes
- The memory limits of liveness-probe container and node-driver-registrar container running in AzureDisk and AzureFile pods on Windows nodes are increased from 100MiB to 150MiB.
- Component Updates
- The Open Service Mesh addon has been updated from version 1.1.1 to version 1.2.0 for AKS clusters running 1.24.0+. Please note the breaking changes mentioned in the version 1.2.0 release notes
- The Azure File CSI driver has been updated from v1.20.0 to v1.21.0
- Microsoft Defender for Containers images updated 1.0.70
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.08.29
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.25, the following changes will be made default:
- Ubuntu 22.04 will be the default host OS for x86, AMD, and ARM64 architectures.
- Windows Server 2022 will be the default Windows host. Important: Windows Server 2019 containers will not work on Windows Server 2022 hosts.
- Kubernetes 1.21 version has been deprecated as of July 31st, 2022. See documentation on how to upgrade your cluster.
- Some AKS labels have been deprecated with the Kubernetes 1.24 release. Update your AKS labels to the recommended substitutions. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- The Open Service Mesh addon has been updated from version 1.1.1 to version 1.2.0 for AKS clusters running 1.24.0+. Please note the breaking changes mentioned in the version 1.2.0 release notes
- Bug fixes
- Fixed missing CWD (current working directory) field in process creation events. Updated the low-level collector image version from 1.3.42 to 1.3.49.
- Component Updates
- Upgrade Azure Disk V2 CSI Driver to v2.0.0-beta.6
- Upgrade Azure Disk CSI driver to v1.22.0
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.08.15
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.25, the following changes will be made default:
- Ubuntu 22.04 will be the default host OS for x86, AMD, and ARM64 architectures.
- Windows Server 2022 will be the default Windows host. Important: Windows Server 2019 containers will not work on Windows Server 2022 hosts.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Behavioral Changes
- The responseObject is now removed from kube-audit logs when its size reaches the Log Analytics column size limit (32K) and the customer has kube-audit/kube-audit-admin diagnostics enabled.
- Bug fixes
- Fix bug in processing fractional memory limits on Windows Nodes
- Fix log loss due to inode reuse on Windows Nodes
- Fix issue with cert rotation on Windows nodes that caused VMSS inconsistency
- Removed `Microsoft.Resources/deployments/write`, `Microsoft.Insights/alertRules/*`, and `Microsoft.Support/*` from the built-in Azure RBAC data plane roles for AKS.
- Component Updates
- Azure Monitor for container insights addon updated for Windows to win-ciprod08102022
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.08.10
- AKS Windows 2019 image has been updated to 17763.3287.220810
- AKS Windows 2022 image has been updated to 20348.887.220810
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.25, the following changes will be made default:
- Ubuntu 22.04 will be the default host OS for x86, AMD, and ARM64 architectures.
- Windows Server 2022 will be the default Windows host. Important: Windows Server 2019 containers will not work on Windows Server 2022 hosts.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
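For the clusterUser credential change in the first bullet above, a hedged example (hypothetical names; assumes an Azure CLI version that exposes the format parameter) of explicitly requesting the old 'azure' kubeconfig format:

```bash
# Request the pre-1.24 'azure' kubeconfig format instead of the new 'exec' default,
# which otherwise relies on kubelogin being on the PATH for AAD-enabled clusters.
az aks get-credentials \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --format azure
```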
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Konnectivity rollout is finished in global and started in Sovereign (China, USGov).
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Features
- GA of Kubernetes 1.24
- Behavioral Changes
- Deprecation of Kubernetes 1.21
- Increased memory request (20Mi -> 40Mi) for azuredisk and node-driver-registrar containers in azurediskcsi-azuredisk-v2-node
- Component Updates
- Calico is updated to v3.21.6
- CSI Secret Store now supports Windows Server 2022
- Microsoft Defender for Containers images updated 1.0.67
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.08.02.
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.25, the host VM operating system will be Ubuntu 22.04 for Intel and ARM64 architectures.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Konnectivity rollout is finished in global and started in Sovereign (China, USGov).
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Features
- Dedicated Host Support is now generally available.
- KMS etcd encryption is now generally available.
- Confidential Virtual Machines is now in Public Preview.
- Behavioral Changes
- Use QuotaExceeded error code instead of OperationNotAllowed when receiving quota exceeded errors from ARM
- Bug Fixes
- Azure Monitor for Containers: fixed an issue with node allocatable CPU and memory values when limits are not set
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.07.28.
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Konnectivity rollout is finished in global and started in Sovereign (China, USGov).
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Preview Features
- Draft is now available in VS Code through the AKS DevX extension. To install the DevX extension for VS Code, check out the marketplace. To check out the open source code, visit the GitHub repo.
- Automated Deployments is now Public Preview on AKS. Automated Deployments allows you to take your containerized application and deploy it to an AKS cluster easily with GitHub Actions. Read more here.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.07.18.
- AKS Windows 2019 image has been updated to 17763.3232.220722.
- AKS Windows 2022 image has been added with version 20348.859.220722.
Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Konnectivity rollout is finished in global and started in Sovereign (China, USGov).
- Docker will no longer be supported as a container runtime on Windows after September 1, 2022. Follow these steps in our documentation to upgrade your Kubernetes cluster to change your container runtime to containerd.
- Preview Features
- KEDA Addon is now supported on ARM64-based nodes.
- Azure Blob CSI Driver is now supported in public preview in AKS. Follow these instructions to use blob csi driver as a managed addon to mount blob storage to a pod via blobfuse or NFS 3.0 options.
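A hedged sketch (hypothetical names; the flag is assumed from the aks-preview extension of this period) of enabling the managed Blob CSI driver on an existing cluster:

```bash
# Enable the Azure Blob CSI driver add-on so pods can mount blob storage
# via blobfuse or NFS 3.0.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-blob-driver
```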
- Features
- The annotation `kubernetes.azure.com/set-kube-service-host-fqdn` can now be added to pods to set the KUBERNETES_SERVICE_HOST variable to the domain name of the API server instead of the in-cluster service IP. This is useful in cases where the cluster egress is via a layer 7 firewall, like Azure Firewall with Application Rules.
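For illustration, a minimal pod (hypothetical name and image) carrying the annotation:

```bash
# Pod whose KUBERNETES_SERVICE_HOST points at the API server FQDN rather than the
# in-cluster service IP, so its API traffic can be matched by FQDN-based egress rules.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
  annotations:
    kubernetes.azure.com/set-kube-service-host-fqdn: "true"
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/azure-cli
    command: ["sleep", "infinity"]
EOF
```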
- Bug Fixes
- Fixed issue where removed nodepool labels would still incorrectly show on autoscaled nodes.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.07.11.
- AKS Windows 2019 image has been updated to 17763.3165.220713.
- AKS Windows 2022 image has been added with version 20348.825.220713.
This release is rolling out to all regions - estimated time for completed roll out is 2022-07-22 for public cloud and 2022-07-25 for sovereign clouds. Monitor the release status by regions at AKS-Release-Tracker.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Konnectivity rollout is finished in global and started in Sovereign (China, USGov).
- Features
- Microsoft Defender cloud-native security agent for AKS clusters is now generally available.
- Bug Fixes
- Node pools will no longer inherit node resource group tags in `az aks create --tags` and `az aks update --tags` scenarios, because node pools have their own `az aks nodepool add --tags` and `az aks nodepool update --tags`.
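For example (hypothetical names), tags are managed per node pool directly:

```bash
# Set or update Azure tags on a specific node pool; node pools no longer inherit
# the node resource group tags passed via `az aks create/update --tags`.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --tags team=platform env=dev
```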
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.07.04.
- omsagent update ciprod06272022.
This release is rolling out to all regions - estimated time for completed roll out is 2022-07-15 for public cloud and 2022-07-18 for sovereign clouds. Monitor the release status by regions at AKS-Release-Tracker.
- Starting with this release, the pod memory limit for Azure NPM has been increased from 300 MB to 1 GB for clusters with the uptime SLA enabled. Requests will stay at 300 MB.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Kubernetes patch versions 1.21.14, 1.22.11, and 1.23.8 are now available; Kubernetes patch versions 1.21.7, 1.22.4, and 1.23.3 are deprecated and removed. Learn more about Kubernetes version support policy followed by AKS here.
- Konnectivity rollout is done for most regions. Targeting end of this week for completion of rollout to the remaining regions: `centralus, westus, germanynorth, westeurope, australiacentral2, australiasoutheast, brazilsoutheast, canadaeast, francesouth, japanwest, jioindiacentral, koreasouth, norwaywest, southafricawest, southcentralus, southeastasia, southindia, swedensouth, switzerlandwest, uaecentral, westus3`.
- Features
- Node pool start/stop is now generally available.
- Bug Fixes
- Fixed an issue on 1.24+ clusters with Windows node pools and Calico as network policy so that the service account required for installing Calico is created automatically.
- Set `priorityClassName` to `system-node-critical` for the Azure Key Vault Provider for Secrets Store CSI Driver addon to prevent scheduling issues arising from saturation by non-critical workloads.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.06.29.
This release is rolling out to all regions - estimated time for completed roll out is 2022-07-08 for public cloud and 2022-07-11 for sovereign clouds. Monitor the release status by regions at AKS-Release-Tracker.
- Starting with the July 3rd, 2022 AKS release, Azure NPM will increase its pod memory limit from 300 MB to 1 GB for clusters with the uptime SLA enabled. Requests will stay at 300 MB.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- metrics-server vertical pod autoscaler will be enabled.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of June.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Features
- Calico Network Policy is now generally available for Windows Server 2019 and 2022. This new feature allows customers to use network policies with Windows Server on AKS. Customers can also enable and use both Linux and Windows network policies in a single cluster. This feature will be available from Kubernetes 1.20. Please take note of common issues related to this change in our troubleshooting documentation.
- Preview Features
- API Server VNet Integration is available in preview.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.06.22.
- AKS Windows 2019 image has been updated to 17763.3046.220624.
- AKS Windows 2022 image has been added with version 20348.768.220624.
- Application Gateway Ingress Controller add-on has been updated to version 1.5.2.
- The Open Service Mesh addon image has been updated from version 1.0.0 to version 1.1.1 for AKS clusters running 1.23.5+. Please note the breaking change mentioned in the version 1.1.0 release notes.
This release is rolling out to all regions - estimated time for completed roll out is 2022-07-01 for public cloud and 2022-07-04 for sovereign clouds. Monitor the release status by regions at AKS-Release-Tracker.
- Starting with the June 26th, 2022 AKS release, Azure NPM will increase its pod memory limit from 300 MB to 1 GB for clusters with the uptime SLA enabled. Requests will stay at 300 MB.
- Starting with Kubernetes 1.24, the following changes will be made default:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled. This will allow users to enable/disable node restriction.
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- Kubernetes 1.21 version deprecation will start taking effect from July 31st, 2022.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of June.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Preview Features
- Disable CSI Storage Drivers available in preview.
- Behavioral Changes
- PersistentVolumeClaim mounts will now work in clouds with custom root CAs.
- Nodepool snapshots will only allow taking snapshots from Nodepools with provisioning status as Succeeded.
- Bug Fixes
- Fixed an issue that prevented KEDA from scaling workloads. This could previously be observed as the following status condition when describing the HorizontalPodAutoscaler for the KEDA scaled object: `Cannot list resource "<external-metric-name>" in API group "external.metrics.k8s.io" in the namespace "<namespace-name>": RBAC: clusterrole.rbac.authorization.k8s.io "keda-operator-external-metrics-reader" not found`.
- Updated cloud-controller-manager versions to v1.24.2, v1.23.14, v1.1.17, and v1.0.21 for Kubernetes 1.24, 1.23, 1.22, and 1.21 respectively:
- A new annotation has been added to specify the public IP prefix for creating the IP of a LoadBalancer service: `service.beta.kubernetes.io/azure-pip-prefix-id: "/subscriptions/8ecadfc9-ffff-4ea4-ffff-0d9f87e4d7c8/resourceGroups/lodrem/providers/Microsoft.Network/publicIPPrefixes/bb"` #1848.
- Fix unexpected managed PLS deletion issue when ILB subnet is specified. #1835
- Fix: avoid unnecessary NSG updates during service reconciliation #1850
- Fix: panic when creating a private endpoint using azurefile NFS #1816
- Remove redundant restriction on PLS autoApproval and visibility. Users can specify a list of subscriptions for visibility (e.g. "sub1 sub2") and a subset of this list for autoApproval (e.g. "sub1"). #1867
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.06.13.
- AKS Windows 2019 image has been updated to 17763.2928.220615.
- AKS Windows 2022 image has been added with version 20348.707.220525.
- Updated Windows containerd package to v1.6.6
This release is rolling out to all regions - estimated time for completed roll out is 2022-06-24 for public cloud and 2022-06-27 for sovereign clouds.
- Starting with the June 26th, 2022 AKS release, Azure NPM will increase its pod memory limit from 300 MB to 1 GB for clusters with the uptime SLA enabled. Requests will stay at 300 MB.
- Starting with Kubernetes 1.24, the following changes will be made:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of June.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Behavioral Changes
- Upgrading spot node pools is now available starting this week: when upgrading a spot node pool, AKS will issue a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for spot node pool upgrades.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.06.08.
- Upgrade Azure File CSI driver to v1.19.0
- Upgrade Azure Disk CSI driver to v1.19.0
- Cloud-controller-manager, Azure SDK, and API versions have been updated for v1.21.7 and v1.21.9 (see the version matrix to see which CCM version maps to which AKS version).
This release is rolling out to all regions - estimated time for completed roll out is 2022-06-17 for public cloud and 2022-06-20 for sovereign clouds.
- Starting with the June 26th, 2022 AKS release, Azure NPM will increase its pod memory limit from 300 MB to 1 GB for clusters with the uptime SLA enabled. Requests will stay at 300 MB.
- Starting with Kubernetes 1.24, the following changes will be made:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled
- CoreDNS version 1.9.2 will be default version. With this new version of CoreDNS wildcard queries are no longer allowed.
- metrics-server version 0.6.1 will be the default version.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of June.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Features
- AKS Release Tracker is now generally available.
- Behavioral Changes
- Set agentPoolProfile default maxPods for new agentpools to align with the expected default maxPods based on the cluster's network configuration.
- Reverted the changes of request values to api server to reduce churn on Uptime SLA enabled AKS clusters.
- Konnectivity agent now uses a new Service Account konnectivity-agent, instead of the default Service Account.
- Bug fixes
- CSI Secret Store removed the limit of node-driver-registrar to address an AKS issue
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.05.31.
This release is rolling out to all regions - estimated time for completed roll out is 2022-06-10 for public cloud and 2022-06-13 for sovereign clouds.
- Starting with Kubernetes 1.24, the following changes will be made:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of June.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Features
- Azure Key Vault with Private Link with KMS is now supported
- Preview of Kubernetes 1.24
- Bug fixes
- Add extra information in error messages when a subnet is full or drain issues are found
- Component Updates
- Upgrade Azure File CSI driver to v1.18.0
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.05.24.
- AKS Windows 2019 image has been updated to 17763.2928.220525.
- AKS Windows 2022 image has been added with version 20348.707.220525.
This release is rolling out to all regions - estimated time for completed roll out is 2022-06-03 for public cloud and 2022-06-06 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with Kubernetes 1.24, the following changes will be made:
- The default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- The NodeRestriction Admission Controller will be enabled
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of May.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Features
- AKS Cluster Extensions is now generally available.
- Azure CNI dynamic IP allocation and enhanced subnet support is now generally available.
- Alias minor version is now generally available.
- Custom node configuration is now generally available.
- Subnet per node pool is now generally available.
- Preview features
- ARM64 agent pools is now in public preview.
- Azure Disk CSI driver v2 is now in public preview.
- Draft extension for Azure Kubernetes Service (AKS) is now in public preview.
- KEDA add-on is now in public preview.
- Web application routing add-on is now in public preview.
- Windows Server 2022 host support is now in public preview.
- Bug fixes
- BYOCNI nodes will no longer be provisioned with additional secondary IPs
- Calls to admission webhooks in Konnectivity clusters will properly use the Konnectivity tunnel to reach the webhook URL
- Component Updates
- Azure Disk CSI driver has been updated to v1.18.0
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.05.10.
- AKS Windows 2019 image has been updated to 17763.2928.220511.
- AKS Windows 2022 image has been added with version 20348.707.220511.
- Cloud controller manager has been updated to versions v1.23.12/v1.1.15/v1.0.19 (see the version matrix to see which CCM version maps to which AKS version)
- CoreDNS has been updated to v1.8.7 for AKS clusters >=1.20.0. Clusters before 1.20.0 remain on 1.6.6.
- external-dns has been updated to v0.10.2
This release is rolling out to all regions - estimated time for completed roll out is 2022-05-21 for public cloud and 2022-05-24 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of May.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Public preview
- Bug Fixes
- Fixes a bug with the AKS-EnableDualStack preview feature that would delete managed outbound IPv6 IPs if updating the cluster with a version of the API before the dual-stack parameters were added.
- A validation to prevent adding clusters to a subnet with a NAT Gateway without setting the appropriate outboundType was applied to updates as well as creates, preventing changes to clusters in this situation. The validation has been removed from update calls.
- Component Updates
- Azure File CSI driver has been updated to v1.6
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.05.04.
This release is rolling out to all regions - estimated time for completed roll out is 2022-05-13 for public cloud and 2022-05-16 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of May.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Public preview
- The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This will perform an update operation without making any changes, which can recover a cluster stuck in a failed provisioning state.
- AKS now supports updating kubelet on node pools to use a new or changed user-assigned managed identity.
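A sketch of the recovery flow described above, with hypothetical names and assuming the aks-preview extension at version 0.5.66 or later:

```bash
# Install/upgrade the aks-preview extension, then run an update with no optional
# arguments to reconcile a cluster stuck in a failed provisioning state.
az extension add --name aks-preview --upgrade
az aks update --resource-group myResourceGroup --name myAKSCluster
```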
- Behavioral changes
- Kube-proxy now detects local traffic using the local interface subnet instead of cluster CIDR when using Azure CNI. For clusters that have agent pools in separate subnets, this ensures that kube-proxy NAT rules do not interfere with network policies enforced by Azure NPM. The configuration change applies to clusters running Azure CNI and Kubernetes version 1.23.3 or later.
- Clusters deployed with outboundType loadBalancer but deployed in a subnet with an attached NAT gateway will be updatable. Deployment of clusters into a bring-your-own-vnet subnet with a NAT Gateway already attached will be blocked unless `outboundType userAssignedNATGateway` is passed. See NAT Gateway in the AKS Documentation for more details.
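A hedged sketch (hypothetical names and subnet ID) of creating a cluster into a BYO-VNet subnet that already has a NAT gateway attached:

```bash
# The userAssignedNATGateway outbound type must be passed when the target subnet
# already has a NAT gateway attached; otherwise creation is blocked.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet" \
  --outbound-type userAssignedNATGateway
```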
- Component Updates
This release is rolling out to all regions - estimated time for completed roll out is 2022-05-06 for public cloud and 2022-05-09 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Konnectivity rollout will continue in May 2022 and is expected to complete by end of May.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Preview features
- AKS now supports enabling encryption at rest for data in etcd using Azure Key Vault with Key Management Service (KMS) plugin.
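A hedged sketch (hypothetical names and key ID; at the time this likely required the aks-preview extension) of enabling KMS etcd encryption at cluster creation:

```bash
# Enable encryption at rest for secrets stored in etcd, using a Key Vault key
# through the Key Management Service (KMS) plugin.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-azure-keyvault-kms \
  --azure-keyvault-kms-key-id "https://mykeyvault.vault.azure.net/keys/mykey/<key-version>"
```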
- Bug Fixes
- Fixed CSI driver version display issue in Azure disk and file CSI Driver objects.
- Fixed bug where cloud-controller-manager was not deleting Node Object after deletion of VMSS instance.
- Behavioral changes
- Taints and labels applied using the AKS nodepool API are not modifiable from the Kubernetes API and vice versa.
- Component Updates
- Azure Disk CSI driver has been updated to 1.16.
- Azure File CSI driver has been rolled back to 1.12 to avoid storage account creation every time a new Azure file share volume is created.
- On AKS clusters of versions >= 1.22, nginx-ingress-controller images are updated from 1.0.5 to 1.2.0 to address CVE-2021-25745 and CVE-2021-25746 vulnerabilities.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.04.27.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2803.220413.
This release is rolling out to all regions - estimated time for completed roll out is 2022-04-15 for public cloud and 2022-04-18 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Starting in Kubernetes 1.23 AKS Metrics server deployment will start having 2 pods instead of 1 for HA, which will increase the memory requests of the system by 54Mb.
- Kubernetes version 1.20 will be deprecated and removed from AKS on April 7th 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Preview Features
- AKS now supports Host Process Containers as a preview feature on versions 1.23+.
- Features
- Custom node configuration for AKS is now generally available.
- gMSAv2 security policy support on Windows is now generally available.
- Bug Fixes
- Fixed a bug where deployments done via the AKS run command would incorrectly display a server error when pods in a deployment did not become ready in 30s. This is now correctly flagged as a client error and will ask the user to retry or take action to ensure the pods of the deployment become ready within the allocated time.
- Component Updates
- Azure Keyvault Secrets Provider has been updated to v1.1.0.
- Azure Disk CSI driver has been updated to 1.14.
- Azure File CSI driver has been updated to 1.13.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.03.29.
This release is rolling out to all regions - estimated time for completed roll out is 2022-04-08 for public cloud and 2022-04-11 for sovereign clouds.
- Upgrade your AKS Ubuntu 18.04 worker nodes to VHD version 2022.03.20 or newer to address CVE-2022-0492 and CVE-2022-23648.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Starting in Kubernetes 1.23 AKS Metrics server deployment will start having 2 pods instead of 1 for HA, which will increase the memory requests of the system by 54Mb.
- Kubernetes version 1.20 will be deprecated and removed from AKS on April 7th 2022.
- Update your AKS labels to the recommended substitutions before deprecation after the Kubernetes v1.24 release. See more information on label deprecations and how to update your labels in the Use labels in an AKS cluster documentation.
- Node Pool Snapshot CLI experience is changing by April 6, 2022. The current nodepool snapshot commands, i.e. `az aks snapshot`, will now be `az aks nodepool snapshot`.
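With the relocated command group, creating a snapshot looks like this (hypothetical names and node pool ID):

```bash
# Create a node pool snapshot using the new `az aks nodepool snapshot` command group.
az aks nodepool snapshot create \
  --resource-group myResourceGroup \
  --name mySnapshot \
  --nodepool-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/agentPools/nodepool1"
```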
- Preview Features
- You can now Bring your Own CNI plugin to AKS
- Features
- Node pool Scale-down Mode is now Generally Available and supports Spot Node Pools.
- Bug Fixes
- Fixed kubernetes-sigs/cloud-provider-azure#1317 in kubernetes v1.22+.
- Fixed kubernetes-sigs/cloud-provider-azure#1346 in kubernetes v1.22+.
- Fixed bug with auto-scaling from zero with pods that utilize an `agentpool=` label selector.
- Fixed bug for IPv6-enabled clusters using OpenVPN and BYO VNET that checked the incorrect IPv6 CIDR.
- Behavioral changes
- An AKS API call on the cluster after a control plane upgrade was incorrectly causing many nodepool upgrades. The behavior has been amended so that if you don't specify nodepools, or specify only some nodepools, in the call, the nodepools are not implicitly upgraded to the control plane version. To upgrade the nodepools following the control plane upgrade, add an explicit Kubernetes version upgrade for the respective nodepool(s) in the request.
- Component Updates
- Azure CNI for Windows updated to v1.4.22.
- Azure Disk CSI driver to v1.13.0.
- Azure Monitor for Containers addon updated to ciprod03172022.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.03.23.
This release is rolling out to all regions - estimated time for completed roll out is 2022-04-03 for public cloud and 2022-04-06 for sovereign clouds. Please note that the AKS release cadence has shifted; new releases will now be cut on Sunday.
- Upgrade your AKS Ubuntu 18.04 worker nodes to VHD version 2022.03.20 or newer to address CVE-2022-0492 and CVE-2022-23648.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Starting in Kubernetes 1.23 AKS Metrics server deployment will start having 2 pods instead of 1 for HA, which will increase the memory requests of the system by 54Mb.
- Kubernetes version 1.20 will be deprecated and removed from AKS on April 7th 2022.
- Behavioral changes
- Accelerated networking will now be enabled by default for newly-created Windows nodepools.
- The single placement group VMSS flag will now be enabled for newly-created node pools using InfiniBand/RDMA-capable VM sizes. InfiniBand/RDMA-capable SKUs, like most H-series and some N-series sizes, can be identified by the "r" in the additional features section of the size name (e.g. Standard_HB120rs_v3, Standard_ND96asr_v4). Note that the InfiniBand drivers are not currently loaded to nodes. Loading these via a DaemonSet will come in the near future.
- Bug fixes
- The 2022.03.20+ AKS Ubuntu 18.04 images fix an issue (present since 2022.02.19) in which an unneeded Azure security agent was installed, leading to higher than expected memory consumption on nodes.
- Improved error handling to resolve a bug where a cluster stop operation may show an inconsistent state, leading to a cluster that is stuck in the "Stopping" state or moves to the "Failed" state. If a cluster is stuck in this state currently, running `az resource update --ids <cluster resource ID>` should resolve the issue.
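For example, with hypothetical names, the reconcile call can be composed against the cluster's resource ID:

```bash
# Reconcile a cluster stuck in the "Stopping" or "Failed" state by issuing a no-op
# update against its resource ID.
az resource update --ids "$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id --output tsv)"
```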
- Features
- Node pool snapshot is now GA.
- Component updates
- Containerd updated to 1.6 for AKS Windows nodes on AKS v1.23+
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.03.20
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2686.220317.
This release is rolling out to all regions - estimated time for completed roll out is 2022-03-23 for public cloud and 2022-03-26 for sovereign clouds.
- From Kubernetes 1.23, containerD will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Starting in Kubernetes 1.23 AKS Metrics server deployment will start having 2 pods instead of 1 for HA, which will increase the memory requests of the system by 54Mb.
- Kubernetes version 1.20 will be deprecated and removed from AKS on April 7th 2022.
- Component updates
- AKS clusters >= 1.19 will now have Application Gateway Ingress Controller (AGIC) version 1.5.1 which adds support for ingress class and path prefix
- Upgrade Azure disk CSI driver to 1.12.0 on 1.21+ clusters
- Upgrade Azure Defender pod-collector image to 0.3.19 from 0.3.18
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.03.07
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2686.220309.
This release is rolling out to all regions - estimated time for completed roll out is 2022-03-16 for public cloud and 2022-03-19 for sovereign clouds.
- From Kubernetes 1.23, containerD will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before Docker deprecation happens by following the documentation here.
- Starting with 1.24 the default format of clusterUser credential for AAD enabled clusters will be ‘exec’, which requires kubelogin binary in the execution PATH. If you are using Azure CLI, it will prompt users to download kubelogin. There will be no behavior change for non-AAD clusters, or AAD clusters whose version is older than 1.24. Existing downloaded kubeconfig will still work. We provide an optional query parameter ‘format’ when getting clusterUser credential to overwrite the default behavior change, you can explicitly specify format to ‘azure’ to get old format kubeconfig.
- Starting in Kubernetes 1.23 AKS Metrics server deployment will start having 2 pods instead of 1 for HA, which will increase the memory requests of the system by 54Mb.
- Kubernetes version 1.20 will be deprecated and removed from AKS on April 7th 2022.
- AKS x OSS Integration Blog Series: This month’s article highlights how to deploy a highly available Redis Cluster to AKS. Run scalable and resilient Redis with Kubernetes and Azure Kubernetes Service - Microsoft Tech Community. Previous two articles explore storing Prometheus metrics with Thanos/AKS and Cluster monitoring with Prometheus/Grafana/AKS.
- Preview features
- Associate capacity reservation to node pools is now previewed in all regions. Documentation available here.
- Component updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.03.03, which contains a hotfix for containerd-1602.
- Introducing Prometheus performance metrics, measuring execution time of handling pod/namespace/network policy CRUD events. The pre-existing npm_add_policy_exec_time metric now has an "error" label.
This release is rolling out to all regions - estimated time for completed roll out is 2022-03-09 for public cloud and 2022-03-12 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here.
- Starting with 1.24, the default format of the clusterUser credential for AAD-enabled clusters will be 'exec', which requires the kubelogin binary in the execution PATH. If you are using the Azure CLI, it will prompt you to download kubelogin. There will be no behavior change for non-AAD clusters or for AAD clusters on versions older than 1.24, and existing downloaded kubeconfigs will still work. An optional query parameter 'format' is available when getting the clusterUser credential to override this change; set the format to 'azure' to get the old kubeconfig format.
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Behavioral changes
- The default VNET address for managed VNETs will change from 10.0.0.0/8 to 10.224.0.0/12, and the default node subnet address will change from 10.240.0.0/16 to 10.224.0.0/16. New clusters will be required to have service and pod CIDR ranges that do not overlap with these new VNET ranges (see the example at the end of these notes).
- Bug fixes
- Fix azure disk resize timeout issue on aks 1.21+
- Preview features
- Associate capacity reservation to node pools. Documentation available here.
- Component updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.02.19.
- Azure Policy for AKS updated to Gatekeeper 3.7.1
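As an illustration of the new defaults, a hedged sketch of creating a cluster whose service and pod CIDR ranges avoid the new 10.224.0.0/12 managed VNET range; the resource group, cluster name, and ranges are placeholders:

```bash
# Placeholder names; 10.0.0.0/16 and 10.244.0.0/16 do not overlap the new managed VNET range.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --pod-cidr 10.244.0.0/16 \
  --generate-ssh-keys
```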
This release is rolling out to all regions - estimated time for completed roll out is 2022-03-02 for public cloud and 2022-03-05 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin (see the example at the end of these notes).
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Component Updates
- Calico updated to v3.21.4 on Windows
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.02.15.
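If you use kubectl outside of the Azure CLI, a minimal sketch of installing kubelogin and converting an existing kubeconfig to exec-based AAD authentication (no cluster-specific values are assumed beyond your current kubeconfig):

```bash
# Install kubectl and kubelogin via the Azure CLI helper, then convert the
# current kubeconfig to use Azure CLI token-based exec authentication.
az aks install-cli
kubelogin convert-kubeconfig -l azurecli
```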
This release is rolling out to all regions - estimated time for completed roll out is 2022-02-23 for public cloud and 2022-02-26 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- Kubernetes 1.19 has been removed.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Behavioral changes
- We now limit the OIDC issuer preview feature to 1.20+
- Increased liveness/readiness probe timeout to 10 seconds for metrics server
- Component Updates
- OSM addon updated to v1.0.0
- Calico updated to v3.21.4 on Linux w/ operator managing CRDs
- Azure file updated to v1.10.0 on aks 1.21+
- omsagent update ciprod01312022 & win-ciprod01312022
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.02.07.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2565.220211.
This release is rolling out to all regions - estimated time for completed roll out is 2022-02-16 for public cloud and 2022-02-19 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- Kubernetes 1.19 will be removed on the next release.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Behavioral changes
- Increased the CPU limit on the Windows OMS agent from 200 millicores to 500 millicores.
- AKS tags (GA): patching tags on the managedCluster will now also patch tags on the child ARM resources (NetworkSecurityGroup, LoadBalancer, VirtualNetwork).
- Bug Fixes
- Fix azure file NFS mount permissions and enable azure file volume stats by default on AKS 1.21+
- Upgraded Linux version to 5.4.0-1068.70-azure to address CVE-2021-4034
- Preview Features
- Kubernetes 1.23.3
- Enable ephemeral OS on temp disk for v5 VM instances
- Component Updates
- Kubernetes 1.20.15, 1.21.9 and 1.22.6 released, 1.20.9, 1.21.2, and 1.22.2 removed
- Containerd registry configuration for Linux nodes, including adding root CAs for containerd via a DaemonSet.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.02.01.
This release is rolling out to all regions - estimated time for completed roll out is 2022-02-07 for public cloud and 2022-02-10 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- Kubernetes 1.19 will be removed on the next release.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Behavioral changes
- AKS will now create pseudo-random IPv6 address ranges for the Kubernetes pod and service IPs for new dual-stack clusters when --pod-cidrs or --service-cidrs are omitted, instead of using a default static value. These ranges will be generated with the method suggested in RFC 4193 (see the example at the end of these notes).
- Removed secret RBAC for azure disk csi driver.
- Increased csi-resizer timeout from 60s to 120s.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.01.24. Upgraded Linux version to 5.4.0-1067.70-azure to address CVE-2022-0185 (Azure#2749).
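A hedged sketch of pinning the dual-stack ranges explicitly instead of relying on the generated RFC 4193 values; these flags were part of the aks-preview extension at the time, and all names and ranges are placeholders:

```bash
# Explicit dual-stack pod and service CIDRs (placeholder values) so the
# generated pseudo-random IPv6 ranges are not used.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --ip-families ipv4,ipv6 \
  --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
  --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108 \
  --generate-ssh-keys
```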
This release is rolling out to all regions - estimated time for completed roll out is 2022-01-31 for public cloud and 2022-02-03 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- Client automatic cert rotation is now being enabled on the last set of regions.
- Kubernetes 1.19 will be removed on 2022-01-31.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Starting in Kubernetes 1.23, the AKS metrics-server deployment will run 2 pods instead of 1 for high availability, which will increase the memory requests of the system by 54 MB.
- Preview Features
- Multi Instance GPU support is available for ND A100 v4 VMs. See https://aka.ms/AAfjra1 for more details.
- Bug Fixes
- Fixed bug where some custom in-tree storage classes on 1.21+ were deleted by mistake.
- Ensured Azure Defender pods have affinity for system pools.
- The Application Gateway ingress controller addon now has the CriticalAddonsOnly toleration, like the rest of the addons and system components.
- Behavioral changes
- New global policy added to clusters with Calico network policies enabled to allow egress from the konnectivity system component.
- All AKS system-created tags will have an "aks-managed" prefix and cannot be modified or deleted.
- Component Updates
- ip-masq-agent updated to v2.5.0.9.
- Konnectivity updated to v0.0.27.
- Azure CNI updated to v0.9.1.
- Azure Policy addon updated to prod_20220114.1.
- Windows Pause Image updated to 3.6-hotfix.20220114.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.01.19.
This release is rolling out to all regions - estimated time for completed roll out is 2022-01-24 for public cloud and 2022-01-27 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Kubernetes 1.19 will be removed on 2022-01-31.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Bug Fixes
- Fixed a bug where if RBAC was disabled on a cluster, the Azure file daemonset would crash on windows nodes.
- Component Updates
- Upgrade dns-autoscaler to version 1.8.5 for 1.22+.
- Azure Disk CSI driver updated to v1.10.
- Azure File CSI driver updated to v1.9 on AKS versions 1.21+.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.01.08.
This release is rolling out to all regions - estimated time for completed roll out is 2022-01-17 for public cloud and 2022-01-20 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout will continue in Feb 2022.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Kubernetes 1.19 will be removed on 2022-01-31.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Features
- Private DNS Subzone for Private Cluster is now GA.
- Containerd runtime on Windows is now GA
- Preview Features
- Kubenet IPv6 support has been enabled in all public cloud regions. See https://aka.ms/aks/ipv6 for more details.
- Bug Fixes
- Corrected validation that silently ignored updates to HTTP proxy settings.
- Fixed issue that blocked creation of 0 node nodepools.
- CSI driver probe timeout increased to 30s to avoid driver crashes on small Windows VM sizes.
- Behavioral Change
- Private Cluster now supports cross-subscription VNET for PrivateLink.
- In 1.21+ existing and newly created clusters, all built-in storage classes will use CSI driver provisioners. There are no in-tree provisioners any more (kubernetes.io/azure-disk and kubernetes.io/azure-file). See the example at the end of these notes.
- CPU limits for CSI drivers have been removed.
- Azure CNI won't reserve VNet IP addresses for daemonset pods using hostNetwork: true.
- Component Updates
- Cluster Auto Scaler updates:
- Added support for more SKUs for scaling from zero (including Standard_E2s_v5, Standard_D8s_v5 and Standard_D4s_v5).
- Fixed an issue with balancing node groups and scaling from zero in clusters with CSI drivers that utilize zonal affinities.
- Fixed an issue with scaling from zero when pods have a selector on the stable instance type label node.kubernetes.io/instance-type.
- Improve scale up performance in very large scale-up scenarios
- Azure Policy for AKS updated to Gatekeeper 3.7.0
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2022.01.07.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2366.211215.
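To see the effect of the storage class change on an upgraded 1.21+ cluster, a small sketch that checks the built-in storage classes now point at the CSI provisioners rather than the removed in-tree ones (no cluster-specific values are assumed):

```bash
# The PROVISIONER column should show disk.csi.azure.com / file.csi.azure.com
# instead of kubernetes.io/azure-disk or kubernetes.io/azure-file.
kubectl get storageclass
kubectl describe storageclass managed-csi
```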
This release is rolling out to all regions - estimated time for completed roll out is 2021-12-20 for public cloud and 2021-12-23 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout has been halted for the rest of the year. We will continue the rollout in the new calendar year.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Kubernetes 1.19 will be removed on 2022-01-31.
- Starting with 1.23, AKS will follow upstream Kubernetes and deprecate in-tree Azure authentication, which is marked for deprecation and will be replaced with 'exec'-based authentication. If you are using the Azure CLI or Azure clients, AKS will download kubelogin for you automatically. Outside of the Azure CLI, you need to download and install kubelogin in order to continue using kubectl with AAD authentication: https://github.com/Azure/kubelogin
- Features
- Kubernetes 1.22 is now GA.
- New Kubernetes patch versions released, 1.20.13, 1.21.7, 1.22.4.
- Preview Features
- AKS GitOps agent extension is now in Public Preview.
- Microsoft Defender for containers is now in Public Preview.
- Bug Fixes
- Corrected validation that silently ignored updates to HTTP proxy settings.
- Fixed issue that blocked creation of 0 node nodepools.
- CSI driver probe timeout increased to 30s to avoid driver crashes on small Windows VM sizes.
- Component Updates
- Calico updated to v3.21.0 on Linux.
- Updated Azure CNI on Windows to v1.4.16. Fixes #2608
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.12.07
This release is rolling out to all regions - estimated time for completed roll out is 2021-12-13 for public cloud and 2021-12-16 for sovereign clouds.
- From Kubernetes 1.23, containerd will be the default container runtime for Windows node pools. Docker support will be deprecated in Kubernetes 1.24. You are advised to test your workloads before the Docker deprecation happens by following the documentation here: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli#add-a-windows-server-node-pool-with-containerd-preview.
- Konnectivity rollout has been halted for the rest of the year. We will continue the rollout in the new calendar year.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and node pools on 1.19 as a courtesy. Customers with clusters and node pools on 1.19 after the announced deprecation date of 2021-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2021-12-01 to 2022-01-31' and is limited to the following:
- Creation of new clusters and node pools on 1.19.
- CRUD operations on 1.19 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- Bug Fixes
- Fixed a bug so that nodes on 1.21 no longer try to start with the DelegateFSGroupToCSIDriver feature flag, which is only introduced to kubelet in 1.22.
- A WindowsGmsaProfile certificate renewal issue during certificate rotation has been identified and fixed.
- Added the component=tunnel label to konnectivity-agent pods so they will be matched by any label selectors that previously matched tunnelfront pods. This only applies to clusters that have received the new Konnectivity network tunnel.
- Behavioral Changes
- Increased cpu limits of csi driver node daemonsets from 200m to 1cpu in order to prevent cpu throttling.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.11.27 - please refer to the link for package versions in this VHD.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Konnectivity - a new version of the AKS tunnel component that will replace the aks-link and tunnel-front versions has been rolled back in all regions. The AKS team will announce when Konnectivity is re-released.
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and node pools on 1.19 as a courtesy. Customers with clusters and node pools on 1.19 after the announced deprecation date of 2021-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2021-12-01 to 2022-01-31' and is limited to the following:
- Creation of new clusters and node pools on 1.19.
- CRUD operations on 1.19 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- Bug Fixes
- A bug has been fixed where clusters would come up with incorrect SNAT settings that broke DNS resolution. The following GitHub issues describe the bug: "Pods cannot resolve external DNSes" and "azure-ip-masq-agent-config populated with empty nonMasqueradeCIDRs list for private cluster".
This release is rolling out to all regions - estimated time for completed roll out is 2021-11-18 for public cloud and 2021-11-25 for sovereign clouds.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Konnectivity - a new version of the AKS tunnel component that will replace the aks-link and tunnel-front versions has been rolled back in all regions. The AKS team will announce when Konnectivity is re-released.
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and node pools on 1.19 as a courtesy. Customers with clusters and node pools on 1.19 after the announced deprecation date of 2021-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2021-12-01 to 2022-01-31' and is limited to the following:
- Creation of new clusters and node pools on 1.19.
- CRUD operations on 1.19 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- New Features
- Auto-Upgrade for AKS is now GA (see the example at the end of these notes).
- Bug Fixes
- An Authentication issue related to pulling image secrets has been fixed with a new version of the virtual-kubelet.
- Component Updates
- Virtual-kubelet has been updated to version 1.4.1.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.11.06 - please refer to the link for package versions in this VHD.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2300.211110 - please refer to the link for component versions in this VHD.
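A minimal sketch of opting an existing cluster into auto-upgrade now that the feature is GA; the resource group, cluster name, and the chosen channel are placeholders:

```bash
# Enable the 'stable' auto-upgrade channel on an existing cluster (placeholder names).
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
```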
This release is rolling out to all regions - estimated time for completed roll out is 2021-11-11 for public cloud and 2021-11-18 for sovereign clouds.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Konnectivity - a new version of the AKS tunnel component will replace the aks-link and tunnel-front versions slowly over the rest of the calendar year. The following regions eastus, westcentralus, uksouth, uaenorth already have Konnectivity enabled.
- Preview Features
- Managed NAT Gateway is now in public preview.
- Group Managed Service Accounts (GMSA) for your Windows Server nodes is now in public preview.
- Bug Fixes
- A bug has been fixed in the Application Gateway Ingress Controller that previously caused OOM errors for users running a large number of ingress objects.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.10.30 - please refer to the link for package versions in this VHD.
This release is rolling out to all regions - estimated time for completed roll out is 2021-11-04 for public cloud and 2021-11-11 for sovereign clouds.
- AKS is implementing auto-cert rotation slowly over the next few months. We have already enabled the following regions: westcentralus, uksouth, eastus, australiacentral, and australiaeast. If you have clusters in those regions, please run a cluster upgrade in order to have that cluster configured for auto-cert rotation. The following regions, brazilsouth, canadacentral, centralindia, and eastasia, will be released in January after the holidays as the next group of regions. We will update the release notes with the upcoming schedule going forward until all regions are deployed.
- Konnectivity - a new version of the AKS tunnel component will replace the aks-link and tunnel-front versions slowly over the rest of the calendar year. The following regions westcentralus, uksouth, uaenorth already have Konnectivity enabled.
- Preview Features
- Node pool start/stop is now in preview (see the example at the end of these notes).
- Node pool snapshots are now supported. Please check the AKS documentation at 5pm Pacific on 11/02/2021 to read more.
- Bug Fixes
- Added the missing managed-csi storage class in AKS Kubernetes versions 1.21+.
- Fixed a bug with cluster autoscaler nodepool balancing caused by the newly added agentpool label "kubernetes.azure.com/agentpool".
- User manipulation or usage of the system-reserved label prefix "kubernetes.azure.com" is now correctly blocked.
- Component Updates
- Updated CSI Disk Driver to v1.8 and File Driver to v1.7.
- Updated omsagent to ciprod10132021 and win-ciprod10132021.
- Updated Azure CNI to v1.4.13 for Windows.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.10.23 - please refer to the link for package versions in this VHD.
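A hedged sketch of the new node pool start/stop preview; the commands were delivered through the aks-preview extension at the time, and the resource group, cluster, and node pool names are placeholders:

```bash
# Stop a user node pool to save cost, then start it again later (placeholder names).
az aks nodepool stop  --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name userpool
az aks nodepool start --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name userpool
```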
This release is rolling out to all regions - estimated time for completed roll out is 2021-10-21 for public cloud and 2021-10-28 for sovereign clouds.
- Behavioral Changes
- Add aks-managed-cluster-rg and aks-managed-cluster-name tags to the node resource group
- Bug Fixes
- Fix issue where Terraform is unable to set a default for the auto upgrade channel preview feature
- Component Updates
- Update Virtual Kubelet to 1.4.0
- Use 1.5.0-rc1 of the AGIC Addon for k8s 1.22.0 to support ingress v1 API
- Update Azure CNI to v1.4.12 for Windows
- Update AKS base image version for Edge zones to 2021.10.13
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.10.13 - please refer to the link for package versions in this VHD.
- AKS Windows image has been updated to the 10B patch version including KB5004335, KB5004424, KB5006672 & KB5005701 2019-datacenter-core-smalldisk-17763.2237.211014 - please refer to the link for component versions in this VHD.
This release is rolling out to all regions - estimated time for completed roll out is 2021-10-14 for public cloud and 2021-10-21 for sovereign clouds.
- Features
- General Availability of Ultra SSD support
- Preview Features
- Public Preview of Private DNS sub zone support for Private Clusters
- Public Preview of HTTP Proxy
- Public Preview of support for WASM/WASI based nodepools
- Behavioral Changes
- Validation that DNS service IP is not on subnet boundary
- Improve system pool taints error messages
- Don't provision network monitor on any clusters >= 1.21 as Azure CNI moved to transparent mode
- Bug Fixes
- Fix issue where images in China region were pulled from public cloud MCR
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.10.02 - please refer to the link for package versions in this VHD.
This release is rolling out to all regions - estimated time for completed roll out is 2021-10-07 for public cloud and 2021-10-14 for sovereign clouds.
- Preview Features
- Kubernetes 1.22.1 is now in preview
- Enable multiple service account issuers for clusters using version >= 1.22, also fixing CNCF Validation issues.
- Cloud Controller Manager is now default for clusters 1.22+
- Behavioral Changes
- When turning Cluster Auto Scaler off, you can now specify the requested agent pool node number.
- Bug Fixes
- Fixed csi driver crash issue on Windows nodes.
- Component Updates
- Containerd 1.5 is now available to clusters 1.22+, for clusters prior to 1.22 AKS will continue to use and patch containerd 1.4.
- New patches for containerd released, 1.5.5 and 1.4.9, which address CVE-2021-41103
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.09.25 - please refer to the link for package versions in this VHD.
- AKS Windows image has been updated to the 9C patch version 2019-datacenter-core-smalldisk-17763.2213.210922 - please refer to the link for component versions in this VHD.
This release is rolling out to all regions - estimated time for completed roll out is 2021-09-23 for public cloud and 2021-09-30 for sovereign clouds.
- In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs (see the example at the end of these notes).
- Preview Features
- Cloud Controller Manager is now in Public Preview in anticipation of moving Azure specific controllers out of tree.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.09.19 - please refer to the link for package versions in this VHD.
- Azure Disk CSI driver has been updated to v1.7.0.
- Azure File CSI driver has been updated to v1.6.0.
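A hedged sketch of setting Scale-down Mode on an existing node pool; the flag was part of the preview at the time, and the resource group, cluster, and node pool names are placeholders:

```bash
# Keep deallocated VMs around on scale-down for this node pool (placeholder names).
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --scale-down-mode Deallocate
```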
This release is rolling out to all regions - estimated time for completed roll out is 2021-09-16 for public cloud and 2021-09-25 for sovereign clouds.
- In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
- New Features
- AKS Run Command is now Generally Available (GA), estimated to roll out the week of 2021-09-13.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.09.06 - please refer to the link for package versions in this VHD.
- Behavioral Change
- Azure Confidential Compute has changed the CPU resource request and limits for the device plugin and quote helper daemonset as part of the ACC addon deployments. They are now reduced as the earlier requested amounts were not necessary.
- AKS Run Command will now be available by default, and customers can disable it when desired through the CLI (see the example at the end of these notes).
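A minimal sketch of invoking AKS Run Command against a cluster; the resource group and cluster names are placeholders:

```bash
# Run a kubectl command through the managed Run Command channel (placeholder names).
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get pods -A"
```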
This release is rolling out to all regions - estimated time for completed roll out is 2021-09-09 for public cloud and 2021-09-18 for sovereign clouds.
- AKS will be upgrading to CoreDNS v1.8.4 in September. Users who are using the rewrite plugin should update their configuration before 1.8.4 goes live.
- In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
- Features
- Scale-down mode is now in public preview https://docs.microsoft.com/en-us/azure/aks/scale-down-mode
- Component Updates
- Update Windows Azure CNI version to v1.4.9.
- Azure CNI start time shortened by 500ms.
- csi-snapshotter has been updated to v4.2.1 for Kubernetes 1.21
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.08.31 - please refer to the link for package versions in this VHD.
- Bug Fixes
- NPM updated to 1.4.9, fixing Azure/azure-container-networking#851
This release is rolling out to all regions - estimated time for completed roll out is 2021-09-02 for public cloud and 2021-09-09 for sovereign clouds.
- AKS will be upgrading to CoreDNS v1.8.4 in September. Users who are using the rewrite plugin should update their configuration before 1.8.4 goes live.
- Component Updates
- Azuredisk and Azurefile CSI drivers upgraded to v1.5.0 in 1.21.0+ clusters.
- Open Service Mesh (OSM) addon has been updated to v0.9.2.
- Calico has been updated to v3.20.0 on linux.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.08.21 - please refer to the link for package versions in this VHD.
- Bug Fixes
- In scenarios where users are creating multiple node pools simultaneously, the first User node pool in the deployment can now have any characteristics, as long as there is at least one System pool present in the deployment.
This release is rolling out to all regions - estimated time for completed roll out is 2021-08-26 for public cloud and 2021-09-02 for sovereign clouds.
- On the next release of the az AKS CLI we will introduce a new subcommand "aks addons" which will have the following commands: disable, enable, list, list-available, show, update.
- Component Updates
- Kubernetes patch versions: 1.19.13 and 1.20.9 have been onboarded. Versions 1.19.9 and 1.20.5 have been deprecated.
- Bump Windows containerd to v0.0.42
- Bump CoreDNS to 1.8.4 for Kubernetes versions above 1.20.0
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.08.14 - please refer to the link for package versions in this VHD.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2114.210811 - please refer to the link for component versions in this VHD.
- Behavioral Changes
- Stop ability to make changes to the following system labels:
- beta.kubernetes.io/arch
- beta.kubernetes.io/instance-type
- beta.kubernetes.io/os
- failure-domain.beta.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone
- failure-domain.kubernetes.io/zone
- failure-domain.kubernetes.io/region
- kubernetes.io/arch
- kubernetes.io/hostname
- kubernetes.io/os
- kubernetes.io/role
- kubernetes.io/instance-type
- node.kubernetes.io/instance-type
- topology.kubernetes.io/region
- topology.kubernetes.io/zone
- kubernetes.azure.com/role=agent
- node-role.kubernetes.io/agent
- kubernetes.io/role=agent
- agentpool
- storageprofile
- storagetier
- accelerator
- kubernetes.azure.com/fips_enabled
- kubernetes.azure.com/os-sku
- kubernetes.azure.com/cluster
This release is rolling out to all regions - estimated time for completed roll out is 2021-08-19 for public cloud and 2021-08-24 for sovereign clouds.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.08.07.
This release is rolling out to all regions - estimated time for completed roll out is 2021-08-12 for public cloud and 2021-08-17 for sovereign clouds.
- Behavioral Changes
- All regions now use Azure Policy V2 by default.
- TLS 1.2 is now enabled on AKS Windows nodes. TLS 1.1, TLS 1.0, SSL 3.0, and SSL 2.0 are now disabled.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.07.31.
This release is rolling out to all regions - estimated time for completed roll out is 2021-08-05 for public cloud and 2021-08-10 for sovereign clouds.
- Azure Kubernetes Service (AKS) will stop publishing Ubuntu 16.04 images moving forward.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Bug Fixes
- Added missing tolerations to Pod Identity Pods. Closes #2146.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.07.25.
This release is rolling out to all regions - estimated time for completed roll out is 2021-07-29 for public cloud and 2021-08-03 for sovereign clouds.
- Azure Kubernetes Service (AKS) will stop publishing Ubuntu 16.04 images moving forward.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Features
- New Kubernetes patch version available, v1.21.2.
- Preview Features
- Upgrade your Windows nodepool from Docker to containerd using one of the following two methods. Note that the Kubernetes version should be > 1.20.0 and the feature flag UseCustomizedWindowsContainerRuntime should be registered under your current Azure subscription. As a reminder, Docker is removed completely in Kubernetes 1.24, so use the following commands to move your workload from Docker to containerd now.
- To upgrade a specific nodepool to use containerd:
az aks nodepool upgrade --cluster-name $CLUSTERNAME --name $NODEPOOLNAME --resource-group $RGNAME --kubernetes-version 1.21.1 --aks-custom-headers WindowsContainerRuntime=containerd
- To upgrade the cluster to use containerd for all Windows nodepools:
az aks upgrade --cluster-name $CLUSTERNAME --resource-group $RGNAME --kubernetes-version 1.21.1 --aks-custom-headers WindowsContainerRuntime=containerd
- Behavioral Changes
- Cluster autoscaler will now enforce the minimum count in cases where the actual count drops below that. For example, Spot eviction or changing the minimum count value from the AKS API. In the past, the autoscaler operated and respected the minimum count but never interfered to enforce it if external factors affect it.
- Component Updates
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.07.17.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.2061.210714.
This release is rolling out to all regions - estimated time for completed roll out is 2021-07-27 for public cloud and 2021-07-31 for sovereign clouds.
- Azure Kubernetes Service (AKS) will stop publishing Ubuntu 16.04 images moving forward.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- New Features
- Kubernetes 1.21 is now Generally Available (GA), estimated to roll out the week of 2021-07-19.
- Container Storage Interface (CSI) drivers for Azure disks and Azure files on Azure Kubernetes Service (AKS) are now Generally Available (GA) in Kubernetes version 1.21+. Azure Disk CSI migration is turned on for 1.21.0+ clusters.
- Bug Fixes
- Fix external-dns 0.8.0 for HTTP application routing addon for 1.21+ clusters.
- Behavioral Changes
- Azure Kubernetes Service (AKS) will now rotate your intermediate certificates during an upgrade operation
- Preview Features
- Windows containerd support on AKS is now available in all sovereign clouds.
- Component Updates
- Azuredisk and Azurefile CSI drivers upgraded to v1.4.0 in 1.20.0+ clusters.
- Windows image update for omsagent for Windows mdm by setting the NODE_IP environment variable for 'machine' as required by Windows in non-sidecar enabled mode.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.07.10.
This release is rolling out to all regions - estimated time for completed roll out is 2021-07-15 for public cloud and 2021-07-21 for sovereign clouds.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- New Features
- Bring your own Managed Identity is now GA, allowing you to bring your own control plane MI and kubelet MI (see the example at the end of these notes).
- Preview Features
- Public DNS for private clusters is now in preview. Read more here.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.07.03.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.07.03.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1999.210609.
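A hedged sketch of creating a cluster with your own control plane and kubelet managed identities; the subscription, resource group, identity resource IDs, and names are placeholders:

```bash
# Bring-your-own control plane and kubelet managed identities (placeholder IDs/names).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-managed-identity \
  --assign-identity /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/controlPlaneIdentity \
  --assign-kubelet-identity /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/kubeletIdentity \
  --generate-ssh-keys
```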
This release is rolling out to all regions - estimated time for completed roll out is 2021-07-08 for public cloud and 2021-07-14 for sovereign clouds.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Bug Fixes
- Resolved "TO/FROM rule and port rule on same PodSelector in multiple policies", Azure/azure-container-networking#870
- Component Updates
- Block enabling autoupgrade for unsupported k8s versions (less than lowest minor version by one)
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.06.19.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.06.19.
This release is rolling out to all regions - estimated time for completed roll out is 2021-06-24 for public cloud and 2021-06-28 for sovereign clouds.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Component Updates
- Omsagent updated to 06112021. Read more here.
- Windows Azure CNI updated to version 1.4.0.
- HTTP Application Routing addon has been updated to support Kubernetes version >= 1.21.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.06.12.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.06.12.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1999.210609.
This release is rolling out to all regions - ETA for conclusion 2021-06-17 for public cloud and 2021-06-21 for sovereign clouds.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Preview Features
- Public DNS support for Private Clusters using the Private cluster endpoint.
- Bug Fixes
- Released runc rc95 to address a symlink-exchange attack vulnerability.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.06.09.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.06.09.
This release is rolling out to all regions - ETA for conclusion 2021-06-10 for public cloud and 2021-06-14 for sovereign clouds.
- In response to customer feedback and issues with previous Kubernetes version patches that left many users with difficult options, the AKS team is extending a limited scope of support for all clusters and nodepools on 1.18 as a courtesy. Customers with clusters and nodepools on 1.18 after the announced deprecation date of 2021-06-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions. The scope of this limited extension is effective from 2021-06-30 to 2021-07-31 and is limited to the following:
- Creation of new clusters and nodepools on 1.18.
- CRUD operations on 1.18 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- The previous pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Preview Features
- Windows containerd support on AKS is now available in all regions. Read more here.
- Bug Fixes
- Fix priority expander in cluster autoscaler falling back to a random choice when a higher priority exists. To read more about this bug, click here.
- Fix a regression where users with > 200 group memberships may fail to authenticate to AAD enabled AKS clusters in Azure public cloud.
- Component Updates
- Updated omsagent to ciprod05202021.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.06.02.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.06.02.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1935.210513
This release is rolling out to all regions - ETA for conclusion 2021-06-03 for public cloud and 2021-06-07 for sovereign clouds.
- The previous [pod security policy (preview)](https://docs.microsoft.com/azure/aks/use-pod-security-policies) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Features
- Use Set-TimeZone now with Windows Containers to change timezones.
- New Kubernetes patch versions available, v1.18.19, 1.19.11, v1.20.7.
- Encryption at Host is now GA
- Preview Features
- Kubernetes 1.21.1
- Disable local accounts is now in preview; read more here (see the example at the end of these notes).
- Windows containerd support on AKS is available today in 3 regions (eastus, uksouth, and westcentralus). If you registered the containerd public preview feature flag and add a node pool on a cluster below Kubernetes 1.20 in regions other than those mentioned above, the Windows nodepool creation will fail. If you are using Kubernetes version 1.20 and registered the containerd feature flag in the available regions, this will add a containerd node pool instead of a Docker one. You can unregister the feature flag to use a Docker node pool. Please note that we are working towards releasing the fix in other regions in a few days.
- Bug Fixes
- Reverting Container Insights agent to March release [ciprod03262021] in response to failing liveness probes.
- Component Updates
- Upgraded calico to v3.19. The newest Calico update includes this fix for customers that were experiencing upgrade problems.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.05.19.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.05.19.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1911.210513.
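A hedged sketch of the disable-local-accounts preview; at the time the flag required the aks-preview extension and a registered feature flag, and the resource group, cluster name, and AAD group ID are placeholders:

```bash
# Create an AAD-enabled cluster with Kubernetes local accounts disabled (placeholder names/IDs).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --aad-admin-group-object-ids <aad-group-object-id> \
  --disable-local-accounts \
  --generate-ssh-keys
```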
This release is rolling out to all regions - ETA for conclusion 2021-05-20 for public cloud and 2021-05-24 for sovereign clouds.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Preview Features
- The CSI Secret Store AKS Addon is now in Public Preview. See more here.
- Component Updates
- Upgrade azuredisk/azurefile CSI Driver to v1.2.0 (currently in preview).
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.05.08.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.05.08.
This release is rolling out to all regions - ETA for conclusion 2021-05-13 for public cloud and 2021-05-17 for sovereign clouds.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Preview Features
- FIPS compliant nodes
- Bug Fixes
- Fix a bug that different users could not reset service principal using same Azure Active Directory Client ID.
- Component Updates
- AGIC has been updated to 1.4.0. Read more here.
- Azure NPM has been updated to 1.3.2
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.05.01.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.05.01.
This release is rolling out to all regions - ETA for conclusion 2021-05-03 for public cloud and 2021-05-10 for sovereign clouds.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- Preview Features
- Autoupgrade will now respect customer's default maintenance configuration settings.
- Bug Fixes
- Customers trying to use the `RunCommand` feature on clusters with both Private Link and AAD enabled will now see a `NotSupportedSetup` message (an invocation example follows this list).
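For reference, a typical `RunCommand` invocation via the Azure CLI looks like the sketch below (resource names are placeholders); the same invocation on a cluster with both Private Link and AAD enabled now returns the `NotSupportedSetup` message noted above:

```bash
# Run a command inside the cluster without direct network access to the API server.
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get pods -n kube-system"
```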
- Component Updates
- Azure Monitor for Containers image tag has been updated to ciprod04222021. Read more here.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1911.210423.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.04.27.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.04.27.
This release is rolling out to all regions - ETA for conclusion 2021-04-26 for public cloud and 2021-05-03 for sovereign clouds.
- Kubernetes version 1.17 has now been deprecated since March 31st.
- CSI Drivers will become default for Kubernetes versions 1.21+.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- For all AKS clusters using Kubernetes v1.20+, CoreDNS will be upgraded to version 1.8.3. This will remove `resyncperiod` and `upstream` from the Kubernetes plugin (a quick audit command is shown below).
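If you carry CoreDNS customizations, a quick way to check whether you still reference the removed options is to grep the AKS `coredns-custom` ConfigMap; this is only an audit sketch and assumes the standard AKS CoreDNS customization ConfigMap name:

```bash
# Look for the removed CoreDNS options in any custom CoreDNS configuration.
kubectl get configmap coredns-custom -n kube-system -o yaml 2>/dev/null \
  | grep -E 'resyncperiod|upstream' || echo "no removed options found"
```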
- New Features
- You can now update Windows passwords via the Azure CLI (a sample command is shown below).
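A minimal sketch of the password update, assuming the `--windows-admin-password` parameter described in the CLI documentation; the resource names and password are placeholders:

```bash
# Rotate the Windows node administrator password on an existing cluster.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --windows-admin-password 'NewComplexP@ssw0rd1234'
```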
- Bug Fixes
- Fixed a bug where Cert Rotation tried to call Windows agent pools; this is a Linux-only function.
- Fixed a bug where, if a customer used "[]" as "AvailabilityZones" for both create and update, the update would be blocked incorrectly.
- Behavioral Changes
- Node pool limit has increased from 10 to 100.
- Component Updates
- Linux Pause container image has been updated to 3.5 from 1.3.1.
- Dns-autoscaler image has been updated to mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.3 for 1.18 and above clusters. Version 1.8.3 runs as a non-root user.
- Pod Identity nmi image has been updated to 1.7.5 and sets critical addon tolerations.
- OSM has been updated to v0.8.3.
- The OSM Envoy image has been updated to 1.17.2.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1879.210414.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.04.20.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.04.20.
This release is rolling out to all regions - ETA for conclusion 2021-04-14 for public cloud.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Kubernetes version 1.17 has now been deprecated since March 31st.
- Before k8s 1.20 a bug would allow exec probes to run indefinitely, ignoring any timeoutSeconds configuration value. The previous buggy behavior has been fixed, and timeouts are now enforced. Additionally, this change introduces a new default timeout of 1 second. Please audit all your existing exec probes to make sure that it is appropriate to enforce a 1 second timeout. If not, please provide an explicit timeoutSeconds value that is appropriate for each exec probe (see the manifest sketch after this list).
- CSI Drivers will become default for Kubernetes versions 1.21+.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- For all AKS clusters using Kubernetes v1.20+, CoreDNS will be upgraded to version 1.8.3. This will remove `resyncperiod` and `upstream` from the Kubernetes plugin.
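As a reminder of what an explicit probe timeout looks like, here is a minimal pod manifest sketch (the image and command are placeholders for illustration, not AKS defaults):

```bash
# Apply a pod whose exec liveness probe sets timeoutSeconds explicitly
# instead of relying on the new 1-second default.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: exec-probe-demo
spec:
  containers:
  - name: app
    image: busybox:1.36        # placeholder image for illustration
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "true"]
      timeoutSeconds: 5        # explicit timeout; audit your probes for a suitable value
      periodSeconds: 10
EOF
```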
- Bug Fixes
- Fixed a bug in runc that caused pods to be stuck in container creation in containerd 1.4.3 and 1.4.4.
- Fixed a bug in VMAS that accidentally enabled VMAS to be scaled down to 0.
- Behavioral Changes
- Increased nslookup/nc timeout to 10s for Provisioning CSE in nodes.
- Component Updates
- Removed Cross-namespace owner references in Azure Policy on AKS v1.20+.
- Updated omsagent to ciprod03262021.
- Updated Azure Confidential Compute Image to 1.16 with updated webhook and plugin version, to include a liveness probe.
- Calico will be upgraded to 3.18.1 to correct the policy for the Tigera operator, which requires hostPath. For the base Calico on Linux, we will automatically upgrade clusters to Calico 3.17.2. For Windows node pools, Calico will be upgraded to v3.18.1 in any agent pool update/upgrade operation, for example, upgrading the cluster, updating the node image, or upgrading the node pool. For detailed updates on Calico, please read more here.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1817.210330.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.03.31.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.03.31.
This release is rolling out to all regions - ETA for conclusion 2021-04-07 for public cloud.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Kubernetes version 1.17 has now been deprecated since March 31st.
- Before k8s 1.20 a bug would allow exec probes to run indefinitely, ignoring any timeoutSeconds configuration value. The previous buggy behavior has been fixed, and timeouts are now enforced. Additionally, this change introduces a new default timeout of 1 second. Please audit all your existing exec probes to make sure that it is appropriate to enforce a 1 second timeout. If not, please provide an explicit timeoutSeconds value that is appropriate for each exec probe.
- CSI Drivers will become default for Kubernetes versions 1.21+.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- For all AKS clusters using Kubernetes v1.20+, CoreDNS will be upgraded to version 1.8.3. This will remove `resyncperiod` and `upstream` from the Kubernetes plugin.
- New Features
- `brazilsouth`, `centralindia`, `eastasia` and `francecentral` are all new supported regions for Virtual Node. The `southindia` region has been removed from the supported region list.
- Preview Features
- Open Service Mesh (OSM), as a managed AKS add-on, is now in public preview.
- Component Updates
- Calico will be upgraded to 3.18.1 to correct the policy for the Tigera operator, which requires hostPath. For the base Calico on Linux, we will automatically upgrade clusters to Calico 3.17.2. For Windows node pools, Calico will be upgraded to v3.18.1 in any agent pool update/upgrade operation, for example, upgrading the cluster, updating the node image, or upgrading the node pool. For detailed updates on Calico, please read more here.
This release is rolling out to all regions - ETA for conclusion 2021-03-31 for public cloud.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Next week, Kubernetes version 1.17 will be deprecated on March 31st.
- Before k8s 1.20 a bug would allow exec probes to run indefinitely, ignoring any timeoutSeconds configuration value. The previous buggy behavior has been fixed, and timeouts are now enforced. Additionally, this change introduces a new default timeout of 1 second. Please audit all your existing exec probes to make sure that it is appropriate to enforce a 1 second timeout. If not, please provide an explicit timeoutSeconds value that is appropriate for each exec probe.
- CSI Drivers will become default for Kubernetes versions 1.21+.
- The previously announced pod security policy (preview) deprecation date was June 30th, 2021. To better align with Kubernetes upstream, pod security policy (preview) deprecation will begin with Kubernetes version 1.21, with its removal in version 1.25. As Kubernetes upstream approaches that milestone, the Kubernetes community will be working to document viable alternatives.
- For all AKS clusters using Kubernetes v1.20+, CoreDNS will be upgraded to version 1.8.3. This will remove `resyncperiod` and `upstream` from the Kubernetes plugin.
- Bug Fixes
- Fixed an issue regarding indecisiveness in Kubernetes versions and the auto-upgrade feature in ARM templates. Read more here.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1817.210310.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.03.17.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.03.17.
This release is rolling out to all regions - ETA for conclusion 2021-03-24 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on June 30th, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Kubernetes version 1.17 will be deprecated in the last week of March 2021.
- Before k8s 1.20 a bug would allow exec probes to run indefinitely, ignoring any timeoutSeconds configuration value. The previous buggy behavior has been fixed, and timeouts are now enforced. Additionally, this change introduces a new default timeout of 1 second. Please audit all your existing exec probes to make sure that it is appropriate to enforce a 1 second timeout. If not, please provide an explicit timeoutSeconds value that is appropriate for each exec probe.
- CSI Drivers will become default for Kubernetes versions 1.21+.
- Bug Fixes
- Fixed an issue with using a managed identity created in a different subscription from the cluster while using pod identity (GitHub).
- Behavioral Changes
- Improved the Cluster Autoscaler to ignore pods that are stuck in a Terminating state, so that nodes can be considered for scale-down after the pods' grace period is exhausted.
- WinDSR is enabled by default for Kubernetes versions 1.20+.
- Component Updates
- Updated image tunnel-front to v1.9.2-v3.0.22
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1817.210310.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.03.10.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.03.10.
This release is rolling out to all regions - ETA for conclusion 2021-03-17 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on June 30th, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Kubernetes version 1.17 will be deprecated in the last week of March 2021.
- Before k8s 1.20 a bug would allow exec probes to run indefinitely, ignoring any timeoutSeconds configuration value. The previous buggy behavior has been fixed, and timeouts are now enforced. Additionally, this change introduces a new default timeout of 1 second. Please audit all your existing exec probes to make sure that it is appropriate to enforce a 1 second timeout. If not, please provide an explicit timeoutSeconds value that is appropriate for each exec probe.
- Features
- Azure monitor for containers now supports Pods & Replica set live logs in AKS resource view. Read more here
- Confidential computing addon for confidential computing nodes (DCSv2) on AKS is updated to align with Intel SGX's future initiatives.
- Bug Fixes
- The latest Windows image fixes a bug where Windows could break nodes at the CNI level and cause all pods scheduled on that node to be permanently stuck, or blocked during deployment. If you have questions about this fix, please contact the Windows Container Team.
- Fixed an issue where duplicate packets were sent for kubenet on clusters with k8s 1.19+ and containerd-based clusters. This was caused when traffic was sent to another pod on the same node over a cluster service IP.
- Fixed a bug in the addon profile API that caused crashes on build when using Terraform in sovereign clouds.
- Behavioral Change
- The maximum number of managed identities for the Pod Identity addon was increased from 50 to 200.
- Systemd-resolved will no longer be used in AKS Ubuntu 18.04 images. This week's image, AKSUbuntu-1804-2021.03.09, resolves past issues regarding private DNS with .local entries not working with Kubernetes 1.18 and Ubuntu 18.04.
- Preview Features
- Kubenet support for Pod Identity.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1790.210302.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.03.03.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.03.03.
This release is rolling out to all regions - ETA for conclusion 2021-03-10 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on June 30th, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Starting last week, the week of Feb 22nd (Azure China Cloud and Azure Government Cloud users will get this update in the following weeks), we will upgrade the Calico network policy on AKS clusters from Calico version v3.8.9 to v3.17.2 for clusters on 1.20.2 and above. This upgrade will cause a breaking change to the default behavior of all-interfaces Host Endpoints. For customers that use Host Endpoints, and only these, this version brings a change. Please follow our guidance to apply the appropriate label and Global Network Policy if you want to keep the v3.8.9 default behavior of all-interfaces Host Endpoints.
- Systemd-resolved will no longer be used in AKS Ubuntu 18.04 images starting on next week's release. This resolves past issues regarding private DNS with .local entries not working with Kubernetes 1.18 and Ubuntu 18.04.
- Features
- Just-in-Time Access with AKS Managed AAD is now Generally Available (GA).
- Application Gateway Ingress Controller (AGIC) AKS Add-On is now Generally Available [GA].
- Confidential Computing Nodes (DCSv2) AKS Add-on is now Generally Available [GA].
- HTTP Application Routing addon now Generally Available in Gov Cloud.
- Encrypted customer managed keys policy for AKS is now Generally Available [GA].
- Public IP per node capability in AKS is now Generally Available [GA].
- Deploying WebLogic on Azure Kubernetes Service (AKS) using custom Docker images is now Generally Available (GA).
- Persistent Volume monitoring & Reports tab in Container Insights is now Generally Available [GA]. Read more here:
- Preview Features
- Calico Windows support in AKS 1.20 for new clusters.
- Planned Maintenance Windows in AKS.
- Dynamic IP allocation & enhanced subnet support in AKS.
- Containerize and migrate apps to Azure Kubernetes Service with Azure Migrate: App Containerization. Read More Here.
- Behavioral Change
- Windows containers may fail to resolve DNS names for about one second after the container is created successfully and its status shows Running. This may not affect all customers, only those with applications that require FQDN resolution when the container starts up. The workaround is to retry or sleep for about one second. For feedback, please go to Windows Container GitHub.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1757.210220.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.02.24.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.02.24.
This release is rolling out to all regions - ETA for conclusion 2021-03-03 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on June 30th, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Starting this week (Azure China Cloud and Azure Government Cloud users will get this update in the following weeks), we will upgrade the Calico network policy on AKS clusters from Calico version v3.8.9 to v3.17.2 for clusters on 1.20.2 and above. This upgrade will cause a breaking change to the default behavior of all-interfaces Host Endpoints. For customers that use Host Endpoints, and only these, this version brings a change. Please follow our guidance to apply the appropriate label and Global Network Policy if you want to keep the v3.8.9 default behavior of all-interfaces Host Endpoints.
- Systemd-resolved will no longer be used in AKS Ubuntu 18.04 images starting on next week's release. This resolves past issues regarding private DNS with .local entries not working with Kubernetes 1.18 and Ubuntu 18.04.
- CSI Drivers will become default for Kubernetes versions 1.21+.
- Component Updates
- Calico updated to v3.17.2 for Kubernetes versions 1.20+.
- NMI image updated to 1.7.4.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.02.17.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.02.17.
This release is rolling out to all regions - ETA for conclusion 2021-02-24 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on June 30th, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Starting this week on 22 February 2021 (Azure China Cloud and Azure Government Cloud users will get this update in the following weeks), we will upgrade the Calico network policy on AKS clusters from Calico version v3.8.9 to v3.17.1 for clusters on 1.20.2 and above. This upgrade will cause a breaking change to the default behavior of all-interfaces Host Endpoints. For customers that use Host Endpoints, and only these, this version brings a change. Please follow our guidance to apply the appropriate label and Global Network Policy if you want to keep the v3.8.9 default behavior of all-interfaces Host Endpoints.
- Behavioral Change
- Date/Time removed from tunnel-front log entries. Timestamps can still be viewed by adding --timestamps to your kubectl logs command.
- Bug Fixes
- Fixed Auto Scaling issues with 1.19 Preview Clusters where no image is found for a distro to scale from.
- A previous release defaulted to Gen2 VHDs for Kubernetes versions below 1.18.0. This implicitly changed the Ubuntu version from 16.04 to 18.04 for users still below 1.18.0. This has been fixed and users will only receive Gen2 VHDs for Kubernetes versions greater than or equal to 1.18.0.
- AuthorizationFailed errors on cluster deletion operations are now surfaced to users more clearly.
- Fixed a case sensitivity problem when specifying "--os-type".
- Fixed an Error Handling issue when provisioning node pools with Ephemeral OS and a VM size with no cache disk.
- Fixed an issue with Azure Policy pods not getting scheduled when the CriticalAddonsOnly taint is present (GitHub issue).
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1697.210210.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.02.10.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.02.10.
This release is rolling out to all regions - ETA for conclusion 2021-02-17 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Starting on the week of 22 February 2021 (Azure China Cloud and Azure Government Cloud users will get this update in the following weeks), we will upgrade the Calico network policy on AKS clusters from Calico version v3.8.9 to v3.17.1 for any cluster on 1.20.0 or above. This upgrade will cause a breaking change to the default behavior of all-interfaces Host Endpoints. For customers that use Host Endpoints, and only these, this version brings a change. Please follow our guidance to apply the appropriate label and Global Network Policy if you want to keep the v3.8.9 default behavior of all-interfaces Host Endpoints.
- Features
- Cluster Start/Stop is now GA.
- Preview Features
- AKS now supports Private Clusters created with a custom DNS zone (BYO DNS zone). Read more here.
- AKS now allows you to re-use your standard LoadBalancer outbound IP (created by AKS) as Inbound IP to your services (and vice-versa) from Kubernetes v1.20+.
- AKS now supports re-using the same Load Balancer IP across multiple services from Kubernetes v1.20+.
- Behavioral Change
- The AKS default storage class behavior will now delay the creation of a Persistent Volume until a pod is created, allowing the Persistent Volume to be created in the same zone as the pod (a quick check is shown below). Read more here.
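Assuming the delayed creation is surfaced through the storage class's `volumeBindingMode` (as described in the linked documentation), you can inspect the behavior of the AKS-provided classes like this:

```bash
# Inspect the binding mode of the AKS-provided storage classes.
kubectl get storageclass default managed-premium \
  -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,BINDING_MODE:.volumeBindingMode
```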
- Component Updates
- Update default Windows Azure CNI to v1.2.2.
- Calico updated to v3.8.9.2.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1697.210127.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.02.03.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.02.03.
This release is rolling out to all regions - ETA for conclusion 2021-02-12 for public cloud.
- Kubernetes 1.16 is officially deprecated in AKS
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Starting on the week of 15 February 2021 (Azure China Cloud and Azure Government Cloud users will get this update in the following weeks), we will upgrade the Calico network policy on AKS clusters from Calico version v3.8.9 to v3.17.1. This upgrade will cause a breaking change to the default behavior of all-interfaces Host Endpoints. For customers that use Host Endpoints, and only these, this version brings a change. Please follow our guidance to apply the appropriate label and Global Network Policy if you want to keep the v3.8.9 default behavior of all-interfaces Host Endpoints.
- Features
- Generation 2 Virtual Machines are now GA on AKS.
- New Kubernetes patch version available, 1.19.7
- Preview Features
- Kubernetes 1.20.2 is now in preview
- Bug Fixes
- Fixed ContainerD + Kubenet - Pod IP SNAT/Masquerade Behavior GitHubIssue.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1697.210127.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.01.28.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.01.28.
This release is rolling out to all regions - ETA for conclusion 2021-02-03 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- As previously announced, with the Holiday Season ending AKS will deprecate Kubernetes v1.16, completing the extension given after the GA of v1.19 for the holiday season and returning to the regular 3 supported versions window. After the week of January 31st, 2021 you will no longer be able to create v1.16.x based clusters or node pools.
- Features
- AKS now supports NCasT4_v3 SKUs.
- New Kubernetes patch versions available, v1.17.16, 1.18.14, v1.19.6.
- AKS Managed AAD now supports Azure AD Conditional Access. See more: https://docs.microsoft.com/azure/aks/managed-aad#use-conditional-access-with-azure-ad-and-aks
- Preview Features
- AKS now supports WinDSR in AKS Windows nodes in preview by registering the `Microsoft.ContainerService/EnableAKSWindowsDSR` feature flag.
- New options for Custom Node Configuration: `ContainerLogMaxSizeMB`, `ContainerLogMaxFiles`, `PodMaxPids` (a usage sketch follows this list).
- AKS now supports Auto-Upgrade channels. https://aka.ms/aks/autoupgrade
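A sketch of using the new custom node configuration options when adding a node pool; the JSON field names follow the AKS custom node configuration documentation and the values here are illustrative only, not recommendations:

```bash
# Write an illustrative kubelet configuration (values are examples, not recommendations).
cat > kubeletconfig.json <<'EOF'
{
  "containerLogMaxSizeMB": 20,
  "containerLogMaxFiles": 6,
  "podMaxPids": 100
}
EOF

# Add a node pool that uses the custom kubelet configuration (names are placeholders).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name customnp \
  --kubelet-config ./kubeletconfig.json
```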
- Bug Fixes
- When clusters that are using bring your own subnet and route table with kubenet are deleted, they will now clean up any routes set by Kubernetes/AKS.
- Added new IP availability validation for cluster upgrade of kubenet clusters.
- Fixed a bug where `Standard_DC2s_v2`, `Standard_DC4s_v2`, and `Standard_DC8_v2` were incorrectly listed as supporting Accelerated Networking, resulting in creation failures.
- Behavioral Change
- The Reset Service Principal operation will now perform a node image upgrade in order to update the configuration of each agent node.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1697.210113.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2021.01.13.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2021.01.13.
- Virtual Node updated to Virtual Kubelet 1.3.2.
This release is rolling out to all regions - ETA for conclusion 2021-01-13 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- AKS has defaulted Azure CNI to transparent mode (from its previous default of bridge mode). This should bring no impact and carries several benefits, read more about it here
- As previously announced, with the Holiday Season ending AKS will deprecate Kubernetes v1.16, completing the extension given after the GA of v1.19 for the holiday season and returning to the regular 3 supported versions window. After the week of January 31st, 2021 you will no longer be able to create v1.16.x based clusters or nodepools.
- Features
- AKS now supports every CPU-based SKU dynamically. This means that every new CPU-based SKU is automatically supported by AKS so long as it is not on the restrictions list. GPU-based SKUs and other specialty SKUs still require additional validation before being enabled.
- AKS Cluster Autoscaler now exposes the `max-node-provision-time` and `priority` properties as part of the Cluster Autoscaler profile (a sample update command follows this list).
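For illustration, Cluster Autoscaler profile properties are set through `az aks update --cluster-autoscaler-profile`; the value below is a placeholder, not a recommendation:

```bash
# Tune the cluster autoscaler profile on an existing cluster (names and value are placeholders).
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile max-node-provision-time=15m
```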
- Bug Fixes
- Fixed edge case on Bring your own subnet + kubenet network plugin scenarios where the route table was not correctly associated before the nodes started being created.
- Better handling of a race condition with liveness probes of the `aks-link` component.
- Cluster autoscaler bug fix for incorrectly reading the value of `new-pod-scale-up`, and improvements to the CA liveness probe.
- Case insensitivity fix for `networkPlugin`, `networkPolicy` and `loadbalancerSku`.
- Bug fixed on BYO Route Table kubenet scenarios where cluster deletion didn't correctly clean up the route table rules created by Kubernetes.
- Preview Features
- Cluster Start/Stop now works in clusters with Cluster Autoscaler enabled and Private clusters.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1579.201208.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.12.15.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2020.12.15.
- Updated Azure CNI plugin version for Linux and Windows to 1.2 - https://github.com/Azure/azure-container-networking/releases.
This release is rolling out to all regions - ETA for conclusion 2020-12-09 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- AKS will be defaulting Azure CNI to transparent mode (from its current default of bridge mode) on the next release. This should bring no impact and carries several benefits, read more about it here
- Features
- Bring your Own (BYO) Control Plane Managed Identity is Now Generally Available.
- You may now update your Uptime SLA clusters to Free.
- Behavioral Changes
- AKS clusters will from now on fail the upgrade if the drain/evict operation doesn't succeed, instead of timing out. This means that users must ensure their PodDisruptionBudgets (PDBs) allow their pods to be successfully moved. To check whether you have any problematic PDBs in your cluster, open AKS Diagnostics and search for PDBs and Node Drain Failures.
- Preview Features
- AKS now supports Custom Node Configuration in Public Preview.
- AKS now supports Private Clusters created with no Private DNS zone, deferring all DNS to an enterprise-managed DNS server.
- You can create a cluster like this by using `--private-dns-zone none`, and making sure your custom DNS server is on the cluster subnet and contains all necessary entries, including the API server endpoint IP (which you can add after the cluster is created). A creation sketch follows this list.
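A creation sketch for the BYO-DNS private cluster scenario above; the resource names and subnet ID are placeholders, and your custom DNS server must already be reachable from the cluster subnet:

```bash
# Create a private cluster without an AKS-managed private DNS zone (placeholders shown).
az aks create \
  --resource-group myResourceGroup \
  --name myPrivateCluster \
  --enable-private-cluster \
  --private-dns-zone none \
  --vnet-subnet-id "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
```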
- Azure AD Pod Identity Add-on is now in public preview.
This release is rolling out to all regions - ETA for conclusion 2020-11-25 for public cloud.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- Ephemeral OS is now Generally Available (GA). From now on ephemeral OS will be the default OS disk type for all SKUs and disk sizes that support it. See more here.
- Kubernetes v1.19 is now Generally Available (GA).
- ContainerD is now Generally Available (GA) and the default container runtime for clusters created on or upgraded to Kubernetes v1.19+. See more here.
- Max Surge upgrades are now generally available. See more here.
- Behavioral changes
- The command `az aks browse` will now open the Azure Portal Kubernetes resource view starting with Azure CLI v2.15.0.
- A new property `subnetCIDR` was added for the Application Gateway Ingress Controller (AGIC) addon. This property will eventually replace `subnetPrefix`, and is used by AGIC to create a new subnet for Application Gateway. Application Gateway is deployed in this subnet and is then configured by AGIC to provide ingress capability to AKS.
- Added additional username and password validations for Windows. The minimum password length in AKS is 14 characters. See more here.
- AKS base images now come from Shared Image Gallery and no longer from the Azure Marketplace.
- Bug Fixes
- Fixed an issue caused by Chrony on recent AKSUbuntu-1604-2020.10.28 images.
- Component Updates
- Azure Monitor for Containers updated to version 11092020.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1577.201111.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.11.11.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2020.11.11.
This release is rolling out to all regions - ETA for conclusion 2020-11-19 for public cloud
- AKS will default to containerd as the runtime on Kubernetes v1.19+ after this feature GAs. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- After the GA of Ephemeral OS and the release of the 2020-11-01 AKS API version, clusters and nodepools will be created with Ephemeral OS by default. You can still select managed disks explicitly if you prefer that option. See more at https://aka.ms/aks/ephemeral-os.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- Migration from Service Principal-based clusters to Managed Identity Managed clusters is now supported. See how here.
- Added Standard_Dxds_v4 VM SKUs.
- Added Standard_exds_v4.
- Behavioral changes
- For Kubernetes clusters v1.19+, the `az aks browse` CLI command will open the Azure Portal resource view instead of the Kubernetes dashboard, which is no longer supported on these clusters.
- Bug Fixes
- The AKS control plane will always send RST for idle connections after 4min. Closes #1052, #1755, #1877.
- Fixed issue with etcd replica management.
- Component updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.10.28.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2020.10.28.
- Azure Monitor for Containers updated to version 10272020
- Updated CNI network monitor addon to version v1.1.18.
- The new AKS API version 2020-11-11 has been published.
This release is rolling out to all regions - ETA for conclusion 2020-11-11
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and nodepools on 1.16 as a courtesy. Customers with clusters and nodepools on 1.16 after the announced deprecation date of 2020-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2020-11-30 to 2021-01-31' and is limited to the following:
- Creation of new clusters and nodepools on 1.16.
- CRUD operations on 1.16 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- AKS will default to containerd as the runtime on Kubernetes v1.19+ after this feature GAs. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- After the GA of Ephemeral OS and the release of the 2020-11-01 AKS API version, clusters and nodepools will be created with Ephemeral OS by default. You can still select managed disks explicitly if you prefer that option. See more at https://aka.ms/aks/ephemeral-os.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st, 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- Outbound type UDR now allows for dynamic route exchange over BGP for Private Clusters.
- Proximity Placement Group support is now Generally Available. Releases for #1351
- Spot Node pools are now Generally Available. Releases for #982
- Support for the `CriticalAddonsOnly` taint as an exception for system pools. Releases for #1833
- Support for soft taints. Resolves #1484
- New Kubernetes patch versions available, v1.17.13, 1.18.10.
- Preview Features
- New Preview Kubernetes patch versions available, 1.19.3.
- Bug Fixes
- Fixed misalignment of taint validations with upstream kubernetes validations. Fixes #1412
- Component updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.10.21.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2020.10.21.
This release is rolling out to all regions - ETA for conclusion 2020-10-28.
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and nodepools on 1.16 as a courtesy. Customers with clusters and nodepools on 1.16 after the announced deprecation date of 2020-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2020-11-30 to 2021-01-31' and is limited to the following:
- Creation of new clusters and nodepools on 1.16.
- CRUD operations on 1.16 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- AKS will default to containerd as the runtime on Kubernetes v1.19+ after this feature GAs. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- After the GA of Ephemeral OS and the release of the 2020-11-01 AKS API version, clusters and nodepools will be created with Ephemeral OS by default. You can still select managed disks explicitly if you prefer that option. See more at https://aka.ms/aks/ephemeral-os.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- New cluster autoscaler parameters available for the cluster autoscaler profile. New parameters supported: `skip-nodes-with-local-storage`, `skip-nodes-with-system-pods`, `max-empty-bulk-delete`, `expander`, `new-pod-scale-up-delay`, `max-total-unready-percentage`, `ok-total-unready-count`
- New admission controllers supported from Kubernetes v1.19: `PodNodeSelector`, `PodTolerationRestriction`, `ExtendedResourceToleration`. See how to use them here. Releases for #1143, #1719 and #1449. A `PodNodeSelector` usage sketch follows this list.
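As an example of one of these admission controllers, `PodNodeSelector` is driven by a namespace annotation defined upstream; the namespace and node pool names below are placeholders:

```bash
# Constrain pods in the team-a namespace to nodes in a specific agent pool (placeholders shown).
kubectl annotate namespace team-a \
  "scheduler.alpha.kubernetes.io/node-selector=agentpool=teamapool"
```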
- Bug fixes
- Fixed a few CPU throttling and health probe issues on the AKS control plane.
- Updated signed PowerShell package to v0.0.3. Fixes #1772
- Fixed issue with Azure policy addon and kubernetes v1.19 preview. Fixes #1869
- Component updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1397.201014.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.10.15.
- AKS Ubuntu 18.04 image updated to AKSUbuntu-1804-2020.10.15.
This release is rolling out to all regions - ETA for conclusion 2020-10-22
- AKS and Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and nodepools on 1.16 as a courtesy. Customers with clusters and nodepools on 1.16 after the announced deprecation date of 2020-11-30 will be granted an extension of capabilities outside the usual scope of support for deprecated versions.
The scope of this limited extension is effective from '2020-11-30 to 2021-01-31' and is limited to the following:
- Creation of new clusters and nodepools on 1.16.
- CRUD operations on 1.16 clusters.
- Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
- AKS will default to containerd as the runtime on Kubernetes v1.19+ after this feature GAs. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- After the GA of Ephemeral OS and the release of the 2020-11-01 AKS API version, clusters and nodepools will be created with Ephemeral OS by default. You can still select managed disks explicitly if you prefer that option. See more at https://aka.ms/aks/ephemeral-os.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- You can now mutate the AKS default storage class. See how here https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv
- AKS now supports Dv4 and DSv4 VM SKU.
- Bug Fixes
- Fixed a bug that allowed setting unsupported labels. Unsupported labels are now blocked as per: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md#proposal
- Fixed an issue with Managed-AAD and kubernetes v1.19 preview. Closes #1891
- Fixed an issue where topology labels were not correctly preserved after upgrade.
- Behavior change
- The `calico-typha` deployment is now called `calico-typha-deployment`.
- The revisionHistoryLimit is now set to 2 for managed components and addon deployments. Closes #1502
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.10.08.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.10.08.
This release is rolling out to all regions - ETA for conclusion 2020-09-30
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- [New Date] We heard your feedback and as such, the Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- Azure Policy Addon is now Generally Available, see https://azure.microsoft.com/updates/ga-policy-addon-for-azure-kubernetes-service
- New Kubernetes patches available. v1.16.15 and v1.17.11
- New detectors for AKS Diagnostics:
- Node drain failure detector that calls out node drain failures that might impact the cluster workloads.
- Managed-AAD integration detector that looks for common issues with AADv2 integration like the kubectl minimum version.
- Preview features
- AKS now supports the ability to completely stop and start clusters on demand. A great option for times your cluster might be idle. Start here: https://aka.ms/aks/stop-cluster
- Ephemeral OS is now part of the 2020-09-01 AKS API and can be enabled through ARM. This will also allow you to keep using managed network-attached disks in the future if you want. https://github.com/Azure/azure-rest-api-specs/blob/master/specification/containerservice/resource-manager/Microsoft.ContainerService/stable/2020-09-01/examples/AgentPoolsCreate_Ephemeral.json
- Azure RBAC for Kubernetes Authorization is now in Public Preview and open to anyone to test (no form required): https://docs.microsoft.com/azure/aks/manage-azure-rbac (a creation sketch follows this list)
- Kubernetes v1.19 is now in public preview
- Azure Confidential Compute addon for AKS is now in public preview, providing you with confidential nodes: https://docs.microsoft.com/azure/confidential-computing/confidential-nodes-aks-get-started
- AKS now integrates with Azure Files NFS shares on its CSI storage drivers. See more here: https://docs.microsoft.com/azure/aks/azure-files-csi#nfs-file-shares
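A minimal sketch of opting into the Azure RBAC for Kubernetes Authorization preview at cluster creation time; resource names are placeholders and the preview may require the latest CLI or aks-preview extension:

```bash
# Create an AAD-enabled cluster with Azure RBAC for Kubernetes authorization (placeholders shown).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --enable-azure-rbac
```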
- Bug fix
- Fixed an issue where addon names were not accepted with any casing.
- Component updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1397.200904
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.09.17.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.09.17.
- Behavior Change
- AKS is now validating/blocking labels that are disallowed upstream.
This release is rolling out to all regions - ETA for conclusion 2020-09-18
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- [New Date] We heard your feedback and as such, the Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2021.
- Once GA AKS will default to its new GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- Kubernetes version 1.18 is now Generally Available (GA) on AKS. (1.15 is being retired as this release progressively reaches all regions, as previously communicated). Check the release calendar for future version release and GA date.
- New Kubernetes patch versions available, v1.18.8.
- The AKS Kubernetes audit logs are now split into 2 categories to allow you to subscribe granularly and save costs (a diagnostic settings sketch follows this list):
- `kube-audit-admin`: This category contains only audit events that include write verbs (`create`, `update`, `delete`, `patch`, `post`).
- `kube-audit`: This category contains all remaining audit events.
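For illustration, routing only the new `kube-audit-admin` category to a Log Analytics workspace might look like the sketch below; the setting name and resource IDs are placeholders:

```bash
# Send only write-verb audit events to Log Analytics to reduce cost (placeholder IDs).
az monitor diagnostic-settings create \
  --name aks-audit-admin \
  --resource "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster>" \
  --workspace "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "kube-audit-admin", "enabled": true}]'
```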
- AKS Ubuntu 18.04 is now Generally Available and will be the default agent node base image on k8s v1.18 and onward.
- Preview Features
- AKS now supports Azure disk and Azure files CSI storage drivers in Public preview.
- Bug Fix
- Fixed an issue where non-AKS managed identities (eg. from Pod Identity) would be lost after an AKS upgrade.
- Fixed bug where the VMSS backend pool was removed after a Service Principal reset operation.
- Behavior Changes
- Ensure all components use only strong ciphers (matching the AKS API server). Metrics server now only allows the following cipher suites:
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- Component Updates
- Azure Policy Addon updated to Gatekeeper beta12 and Policy 0804 versions.
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1397.200820
- Azure Monitor for Containers versions updated: https://github.com/microsoft/Docker-Provider/blob/ci_prod/ReleaseNotes.md#08072020--
- Linux version mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod08072020
- Windows version mcr.microsoft.com/azuremonitor/containerinsights/ciprod:win-ciprod08072020
- Add LivenessProbe and ReadinessProbe for Metrics Server.
- Updated AKS Moby version to 19.03.12 (from now on AKS Moby versions will follow docker versioning to assist scanning tools false positives).
- Updated NVIDIA GPU drivers to v450.51.06.
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.08.28.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.08.28.
This release is rolling out to all regions - ETA for conclusion 2020-08-26
- AKS will default to AKS Ubuntu 18.04 in the upcoming GA of Kubernetes v1.18, which marks the GA of AKS Ubuntu 18.04 as well. We recommend testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19. If you are doing container builds in-cluster, please use the recommended docker buildx.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2020.
- Kubernetes version 1.18 will GA on the week of August 31st and you will no longer be able to create 1.15.x based clusters or nodepools.
- Once GA AKS will default to the GPU specialized image as the supported option for GPU-capable agent nodes.
- Features
- You can now see the AKS Authentication Webhook Server logs for the Azure AD Integrated clusters, as part of the AKS Control Plane logs.
- Preview Features
- AKS now has a specialized GPU node image that already includes not only the Docker drivers but also the NVIDIA device plugin, so it is ready to use. See more at: https://aka.ms/aks/specialized-gpu-image
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.08.13.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.08.13.
This release is rolling out to all regions - ETA for conclusion 2020-08-21
- AKS will default to AKS Ubuntu 18.04 in the upcoming GA of Kubernetes v1.18, which marks the GA of AKS Ubuntu 18.04 as well. We recommend testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2020.
- Kubernetes version 1.18 will GA on the week of August 31st and you will no longer be able to create 1.15.x based clusters or nodepools.
- Features
- AKS now supports autoscaling node pools to 0 (latest CLI extension and following core CLI version); a sample command follows this list.
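A sketch of enabling scale-to-zero on a user node pool; the resource and node pool names are placeholders, and this applies to user node pools rather than system node pools:

```bash
# Enable the cluster autoscaler on a user node pool with a minimum of zero nodes.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name usernp \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 3
```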
- Bug fixes
- Fixed a bug where the Windows profile parameters were missing when scaling Windows node pools.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.08.06.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.08.06.
This release is rolling out to all regions - ETA for conclusion 2020-08-14
- AKS will default to AKS Ubuntu 18.04 in the upcoming GA of Kubernetes v1.18, which marks the GA of AKS Ubuntu 18.04 as well. We recommend testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2020.
- Kubernetes version 1.18 will GA on the week of August 31st and you will no longer be able to create 1.15.x based clusters or nodepools.
- Preview Features
- AKS now supports Ephemeral OS disks in Public Preview. Read more here: https://aka.ms/aks/ephemeral-os
- AKS has announced the AKS Portal Resource view: https://azure.microsoft.com/updates/kubernetes-resource-view-is-in-public-preview
- Bug fixes
- Fixed a bug where the VNet CIDR was not set correctly on Windows nodes, which could result in incorrect NATing behavior.
This release is rolling out to all regions - ETA for conclusion 2020-08-07
- AKS will default to AKS Ubuntu 18.04 in the upcoming GA of Kubernetes v1.18, which marks the GA of AKS Ubuntu 18.04 as well. We recommend testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the runtime in Kubernetes v1.19. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA of Kubernetes v1.19, containerd will be served by default for all new clusters and clusters that upgrade to v1.19.
- AKS has removed the custom "high-priority" and "addon-priority" Priority Classes which are no longer used by the service.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2020.
- Kubernetes version 1.18 will GA on the week of August 31st and you will no longer be able to create 1.15.x based clusters or nodepools.
- Features
- AKS-managed Azure AD integration (v2) is now generally available: https://docs.microsoft.com/en-us/azure/aks/managed-aad
- You can now upgrade non-AzureAD clusters to AD-integrated clusters.
- You can now upgrade clusters with the previous iteration of the AzureAD integration (v1) into AKS-managed AzureAD clusters (v2)
- Closes: Azure#1489
- AKS is now supported on the following regions:
- UAE Central - Closes Azure#1693
- West Central US - Closes Azure#998
- New Kubernetes patch versions available v1.16.13, v1.17.9
- These patch versions address the following CVEs:
- We've heard your feedback and we will not be removing all the previous versions due to the vulnerabilities. Instead we've patched all the latest GA ones (v1.15.11, v1.15.12, v1.16.10, v1.17.7) and you can still mitigate the vulnerabilities in any GA version by leveraging https://aka.ms/aks/node-image-upgrade.
- Bug fixes
- AKS has added the ready and health plugins to coredns. Closes Azure#1676
- We've added additional validations to prevent the creation of multiple node pool clusters with Basic Load Balancer which isn't supported.
- Preview features
- AKS now supports Bring Your Own (BYO) control plane managed identity: https://aka.ms/aks/byo-mi
- New Kubernetes patch versions are available for preview, v1.18.6.
- Behavior changes
- After API version 2020-07-01 the node image upgrade operation will only allow POST and not PUT. CLI versions won't be affected.
- A default load balancer is no longer created in UDR OutboundType clusters. The LB can be automatically created later if a public service of type LoadBalancer is created.
- Component Updates
- Calico updated to v3.8.0
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1339.200716
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.07.16.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.07.16.
This release is rolling out to all regions
- AKS will default to AKS Ubuntu 18.04 in the upcoming GA of Kubernetes 1.18, once AKS Ubuntu 18.04 is GA as well. We recommend testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the runtime in the upcoming months. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and to check the containerd differences and limitations. After GA, containerd will be served for all new clusters on the latest Kubernetes version and for clusters that upgrade to it.
- On the next release, AKS will be removing the custom "high-priority" and "addon-priority" Priority Classes which are no longer used by the service.
- The Azure Kubernetes Service pod security policy (preview) feature will be retired on May 31st 2020.
- Features
- Users can now reuse inbound and outbound IPs on the Load Balancer. After this change, users can assign an outbound IP that is the same as an inbound IP in the AKS SLB.
- Preview Features
- AKS is announcing the release of Azure RBAC integration for Kubernetes Authorization in preview, which allows you to control the RBAC of your cluster directly from the Azure Portal (see the CLI sketch after this list). See more at: https://aka.ms/aks/azure-rbac
- AKS integration with Azure Policy and Gatekeeper now supports securing your pods with Azure Policy (with the equivalent controls that were made available previously in Pod Security Policies). Read more at https://aka.ms/aks/azpodpolicy
- AKS now supports Azure Ultra disks in preview.
- AKS now supports confidential workloads through DCSv2 SKUs (private preview). Read more here.
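A minimal sketch of enabling the Azure RBAC for Kubernetes Authorization preview at cluster creation time; it assumes the aks-preview extension is installed and uses placeholder resource names:

```
# Create an AKS-managed AAD cluster with Azure RBAC for Kubernetes Authorization enabled
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --enable-azure-rbac
```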
- Component Updates
- New AKS base images - Upgrade to these using https://aka.ms/aks/node-image-upgrade
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.06.30.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.06.30.
This release is rolling out to all regions
- AKS is preparing to GA the AKS Ubuntu 18.04 node image and recommends testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. After AKS Ubuntu 18.04 is GA, at the time of upgrading to a pre-announced kubernetes version, clusters running AKS Ubuntu 16.04 will receive this new image. See how here: https://aka.ms/aks/Ubuntu1804
- AKS will default to containerd as the container runtime in the upcoming months. During the preview we encourage you to create nodepools with the new container runtime to validate that workloads still work as expected, and do check the containerd differences and limitations. After GA, containerd will be served for all new clusters on the latest Kubernetes version and for clusters that upgrade to it.
- Features
- Kubernetes version 1.17 is now Generally Available (GA) on AKS. (1.14 is being retired as this release progressively reaches all regions, as previously communicated).
- New Kubernetes patch versions available, v1.15.12, v1.16.10, v1.17.7. Note that as per policy, versions <1.16.10 and <1.17.7 were removed due to severe bugs and security issues flagged upstream.
- Preview Features
- New Kubernetes patch versions are available for preview, v1.18.4.
- AKS now supports `containerd` as the container runtime in preview. This runtime will become the default on AKS in the upcoming months. Read about it at https://aka.ms/aks/containerd and try it out!
- AKS now supports Proximity Placement Groups in preview to provide collocation capabilities for low latency workloads. Read more about it at https://aka.ms/aks/ppg and try it out!
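A minimal sketch of the Proximity Placement Groups preview mentioned above, pinning a nodepool to an existing PPG; the `--ppg` parameter takes the PPG resource ID and all names below are placeholders:

```
# Add a nodepool whose VMs are placed in an existing proximity placement group
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ppgpool \
  --node-count 3 \
  --ppg /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/proximityPlacementGroups/myPPG
```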
- Bug fixes
- Fixed issues on clusters using managed identities, where a cluster upgrade or scale operation would remove unknown identities.
- Fixed bug where users were being shown more than the minor version above as available versions to upgrade to.
- Behavior changes
- Kubernetes API server log level changed from 4 to 2. This will reduce the log volume while keeping parity with Kubernetes production recommendations.
- Azure Policy for AKS is removing the built-in policies offered for the private preview version 1.0 of the add-on. The built-in policies with category name of "Kubernetes Service" will no longer function starting July 21st, 2020. To continue service, update the add-on to version 2.0 by following these steps and use policies of category name "Kubernetes".
- Component Updates
- Metrics Server has been updated to v0.3.6.
- New AKS base images - Upgrade to these using https://aka.ms/aks/node-image-upgrade
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1282.200625
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.06.25.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.06.25.
This release is rolling out to all regions
- AKS is preparing to GA the AKS Ubuntu 18.04 node image and recommends testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. After AKS Ubuntu 18.04 is GA, at the time of upgrading to a pre-announced kubernetes version, clusters running AKS Ubuntu 16.04 will receive this new image. See how here: https://aka.ms/aks/Ubuntu1804
- Kubernetes version 1.17 will GA on the week of July 1st and you will no longer be able to create 1.14.x based clusters or nodepools.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Behavior changes
- In advance of the GA of kubernetes v1.17 AKS is now defaulting to kubernetes v1.16 as the default version. If you have a dependency on the AKS default version, make sure your kubernetes APIs are up to date: Azure#1205
- Component updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.06.10.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.06.10.
- Azure Monitor for Containers monitoring addon image was updated to ciprod05222020 and win-ciprod05222020-2 (for Windows). Notable changes:
- Windows Logs - Starting from this release, users will see the agent automatically start collecting Windows container STDOUT/STDERR logs and sending them to the same Log Analytics workspace.
- Metrics available for Alerting - Users will see the below metrics on the AKS 'Metrics' blade in the Azure portal, under the "Container Insights" Namespace.
- Metrics:
- diskUsagePercentage
- completedJobsCount
- oomKilledContainerCount
- podReadyPercentage
- restartingContainerCount
- cpuExceededPercentage
- memoryRssExceededPercentage
- memoryWorkingSetExceededPercentage
This release is rolling out to all regions
- AKS is preparing to GA the AKS Ubuntu 18.04 node image and recommends testing existing workloads on AKS Ubuntu 18.04 nodepools prior to GA. After AKS Ubuntu 18.04 is GA, at the time of upgrading to a pre-announced kubernetes version, clusters running AKS Ubuntu 16.04 will receive this new image. See how here: https://aka.ms/aks/Ubuntu1804
- Kubernetes version 1.17 will GA on the week of July 1st and you will no longer be able to create 1.14.x based clusters or nodepools.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Features
- Upgrading clusters from Free to Paid is now supported in all regions that support Uptime SLA. This can be done via ARM after this week's release finishes, and on the next CLI version (see the sketch after this list).
- Windows Server container support is now Generally Available on Azure China regions.
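A minimal sketch of the Free-to-Paid (Uptime SLA) upgrade once the CLI support mentioned above is available; the `--uptime-sla` flag usage and the resource names here are illustrative assumptions:

```
# Upgrade an existing cluster from the Free tier to the paid Uptime SLA tier
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --uptime-sla
```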
- Preview Features
- AKS has released Node Image Upgrade, to allow users to upgrade the node image of all their cluster nodes, or a specific nodepool, without requiring a full kubernetes upgrade. See more at: http://aka.ms/aks/nodeimageupgrade
- AKS has released the Application Gateway Ingress Controller (AGIC) addon in public preview. With it you can now easily install and leverage AGIC as a fully managed addon on AKS (see the sketch after this list). More here: https://aka.ms/aks/agic
- AKS now supports NDr_v2 with the gen2 preview.
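A minimal sketch of enabling the AGIC addon preview against an existing Application Gateway; the addon name `ingress-appgw` and the resource IDs below are assumptions for illustration:

```
# Enable the Application Gateway Ingress Controller addon on an existing cluster
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons ingress-appgw \
  --appgw-id /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway
```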
- Bug fixes
- Fixed issue where upgrading older clusters would fail due to incompatibility with the podpriority kubelet feature gate.
- Fixed issue in Upgrade and Update operations where there was a mismatch between the Cluster Autoscaler node count and current node count.
- AKS has cherry picked the following bug fixes into v1.15.11:
- Behavior changes
- You are now allowed to deploy AKS into dual-stack subnets on dual-stack VNets. The AKS cluster will only leverage the IPv4 stack currently.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.05.31.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.05.31.
This release is rolling out to all regions
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- For any cluster created on K8s 1.18 or above, AKS will default the kube-dashboard add-on to disabled moving forward.
- Kubernetes version 1.17 will GA on the week of July 1st and you will no longer be able to create 1.14.x based clusters or nodepools.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Features
- Outbound Type is now Generally Available and supports Kubenet based clusters. Read more at https://aka.ms/aks/outboundtype
- AKS now supports custom Route Tables for Kubenet-based clusters, by enabling use of existing Route Tables so Kubernetes can add required routes for node communication. Read more at https://aka.ms/aks/customrt
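A minimal sketch combining the two features above: creating a Kubenet cluster on an existing subnet whose route table already has a default route, and egressing through that user defined route (UDR). All IDs are placeholders:

```
# Create a kubenet cluster that reuses an existing subnet/route table and egresses via UDR
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --outbound-type userDefinedRouting \
  --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet
```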
- Preview Features
- Support for Confidential Workloads on AKS and DC-series nodepools are now in Private Preview. Read more at https://aka.ms/accakspreview.
- AKS now supports Max Surge Upgrades in preview. Read more at https://aka.ms/aks/maxsurge
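A minimal sketch of the Max Surge Upgrades preview, setting a per-nodepool surge value; it assumes the aks-preview extension and placeholder names:

```
# Add a nodepool that surges up to 33% extra nodes during upgrades
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name surgepool \
  --max-surge 33%
```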
- Behavior changes
- AKS default OS disk size is now 128GB, a P10.
- Set calico `IptablesFilterAllowAction` to Return from its default value, so that requests not dropped by the policy can be compared against system chains. In this way, the requests to services without endpoints can be rejected following the intention of kube-proxy.
- AKS has released API version 2020-04-01 which, as previously announced, defaults to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- Component Updates
- Updated Kube-Dashboard images for 1.16, 1.17 and 1.18
- 1.16 clusters will use dashboard:v2.0.0-rc3, 1.17 will use dashboard:v2.0.0-rc7, 1.18 will use dashboard:v2.0.1
- Read more about the User Experience here: https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.05.27.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.05.27.
This release is rolling out to all regions
- AKS API version 2020-04-01 defaults to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- For any cluster created on K8s 1.18 or above, AKS will default the kube-dashboard add-on to disabled moving forward.
- Features
- Uptime SLA is now available in all Public Cloud regions.
- Bug Fixes
- Fixed bug with Windows nodes and Managed Identity, where nodes were unable to pull images from ACR.
- Component Updates
- Azure Policy image updated to version `prod_20200519.1`
- Azure Network policy image updated to v1.1.2, https://github.com/Azure/azure-container-networking/releases/tag/v1.1.2
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1217.200513
This release is rolling out to all regions
- AKS API version 2020-04-01 (to be published) will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Only Spot and Regular will be accepted as parameters for nodepool scaleSetPriority, low priority (now Spot) will no longer be accepted.
- For any cluster created on K8s 1.18 or above, AKS will default the kube-dashboard add-on to disabled moving forward.
- Features
- Windows Server container support is now Generally Available on Azure Government Cloud.
- AKS has introduced new kubernetes patch versions v1.15.11, v1.16.8, v1.16.9.
- Preview Features
- AKS has introduced new public preview patch versions v1.17.4, v1.17.5.
- AKS now supports Gen2 VMs in Public Preview.
```
az feature register --name "Gen2VMPreview" --namespace "Microsoft.ContainerService"
# wait for the feature to register
az feature show --name Gen2VMPreview --namespace "Microsoft.ContainerService"
# Re-register the AKS namespace by performing the below
az provider register --namespace 'Microsoft.ContainerService'
# Finally create the cluster
az aks create -n aks -g aks -s Standard_D2s_v3 --aks-custom-headers usegen2vm=true
```
- Bug Fixes
- Fixed an issue where if a nodepool operation was performed in a locked resource group it would return error 500 instead of correctly returning a ResourceGroupLocked error.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.05.13.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.05.13.
This release is rolling out to all regions
- AKS API version 2020-04-01 (to be published) will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Only Spot and Regular will be accepted as parameters for nodepool scaleSetPriority, low priority (now Spot) will no longer be accepted.
- Features
- AKS now offers an optional Paid Uptime SLA. Read more about it: https://techcommunity.microsoft.com/t5/azure-kubernetes-service/aks-introduces-uptime-sla/ba-p/1350832
- Preview Features
- AKS now supports in preview kubernetes versions 1.18.1 and 1.18.2
- AKS now supports creating nodepools leveraging AKS Ubuntu 18.04 images in any existing cluster, e.g.:
```
az aks nodepool add -n ubuntu1804 --cluster-name aks -g aks --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
```
- The Azure Policy Add-on for AKS has released a new version to integrate with OPA Gatekeeper v3. Read more here Azure#1606
- Bug Fixes
- Fixed bug where newly added agent pool did not inherit VnetCidrs from existing agent pools resulting in wrong nonMasqueradeCIDRs
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.05.06.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.05.06.
This release is rolling out to all regions
- AKS API version 2020-04-01 (to be published) will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Only Spot and Regular will be accepted as parameters for nodepool scaleSetPriority, low priority (now Spot) will no longer be accepted.
- Features
- AKS has released an Admissions Enforcer to protect the system from Admission Controller Webhooks that might impact the kube-system components. Read more about it here
- AKS now allows users to create windows nodepools with the latest windows image without requiring a cluster upgrade.
- AKS now supports HB series and HBv2 series SKU families.
This release is rolling out to all regions
- AKS API version 2020-04-01 (to be published) will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Kubernetes 1.17 introduces API deprecations, please make sure your manifests are up to date before upgrading, and check Azure Advisor to confirm you are not using deprecated APIs. More information on 1.17 API deprecations here: https://v1-17.docs.kubernetes.io/docs/setup/release/#deprecations-and-removals
- Only Spot and Regular will be accepted as parameters for nodepool scaleSetPriority, low priority (now Spot) will no longer be accepted.
- Features
- Windows Server container support is now Generally Available on AKS.
- Preview Features
- The Public IP Per Node Feature can now be used with Standard Load Balancer SKU.
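A minimal sketch of the Public IP Per Node feature on a nodepool; the `--enable-node-public-ip` flag may require the aks-preview extension at this stage, and the names are placeholders:

```
# Add a nodepool whose nodes each receive their own public IP
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ippernode \
  --enable-node-public-ip
```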
- Bug Fixes
- Added validation to prevent a subnetID update which would result in a failed nodepool.
- Fixed issue when multiple delete operations were attempted simultaneously.
- Fixed issue where performing cluster updates with APIs older than 2020-03-01 failed in clusters created using API 2020-03-01.
- Fixed issue with agent count mismatch on upgrade.
- Fixed an issue with a dependency on github:443 on Windows provisioning.
- Behavior Changes
- Metrics-server now enforces burstable QoS class.
- Component Updates
- Azure Network Policy (NPM) was updated from v1.0.33 to v1.1.0 - https://github.com/Azure/azure-container-networking/releases/tag/v1.1.0
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1158.200421.
- Azure CNI was updated to 1.0.33 on Windows
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.04.16.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.04.16.
This release is rolling out to all regions
- AKS API version 2020-04-01 (to be published) will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Bug Fixes
- Added CriticalAddonsOnly toleration for calico-typha-horizontal-autoscaler
- Fixed a bug where the ILB backend pool would be removed after a manual VMSS update-instances command was issued.
- Features
- AKS has now introduced a new Mode property for nodepools. This will allow you to set nodepools as System or User nodepools. System nodepools will have additional validations and will be preferred by system pods, while User pools will have more lax validations and can perform additional operations like scaling to 0 nodes or being removed from the cluster. Each cluster needs at least one system pool (see the sketch after this list). All details here: https://aka.ms/aks/nodepool/mode
- System/User nodepools are available from core CLI version 2.3.1 or greater (or latest preview extension 0.4.43)
- Nodepool mode requires API 2020-03-01 or greater
- AKS now allows User nodepools to scale to 0.
- AKS Diagnostics - Added networking and connectivity checks through our new Cluster Network Configuration detector. This allows you to check DNS and subnet related issues that may have impacted your cluster. It also highlights your network configuration to give you all this information at your fingertips.
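A minimal sketch of working with the new nodepool Mode property and the scale-to-zero capability; it assumes CLI 2.3.1+ as noted above and uses placeholder names:

```
# Add a User-mode nodepool to an existing cluster
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --mode User

# User nodepools can be scaled all the way down to 0 nodes
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 0
```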
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.04.06.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.04.06.
This release is rolling out to all regions
- AKS API version 2020-04-01 will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Features
- New VM GPU SKUs are now supported: Standard_NV12_Promo; Standard_NV12s_v3; Standard_NV24_Promo; Standard_NV24s_v3; Standard_NV48s_v3.
- Bug fixes
- Added validation to block cluster creation if user specifies a subnet that is delegated
- Fixed bug caused by apmz package being installed from https://upstreamartifacts.blob.core.windows.net, which is not in the AKS required endpoint egress list.
- CoreDNS memory limit increased to 170Mb and assigned Guaranteed QoS class.
- Fixed a bug with Cluster Proportional Autoscaler (CPA) version on 1.16. This bug is solved on version 1.7.1 which is now the version being used in AKS.
- Fixed bug passing the correct nodepool at validation time on UDR OutboundType preview feature.
- Patched bug where nodepool was not correctly added to internal SLB backend address pool: kubernetes/kubernetes#89336
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.03.24.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.03.24.
This release is rolling out to all regions
- AKS API version 2020-04-01 will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Two security issues were discovered in Kubernetes that could lead to a recoverable denial of service.
- CVE-2020-8551 affects the kubelet, and has been rated Medium (CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L).
- CVE-2020-8552 affects the API server, and has also been rated Medium (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L).
- Am I vulnerable?
- Only in cases where the attacker can make authorized resource requests to un-patched API server or kubelets.
- Also AKS auto restarts apiserver and kubelet in the event of an OOM error which further limits exposure.
- How can I get the latest patched API and kubelet and fix this vulnerability?
- Upgrade to Kubernetes versions v1.16.7 or v1.15.10, or AKS preview version v1.17.3.
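A minimal sketch of applying the mitigation above by upgrading a cluster to a patched version; the resource names are placeholders:

```
# Check available upgrade targets, then upgrade to a patched Kubernetes version
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.16.7
```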
- Bug fixes
- Fixed bug that caused an error while updating existing AAD cluster with the new 2020-03-01 API
- Preview Features
- Updated Azure Policy addon preview to use Gatekeeper v3 on new and updated addons. See more at https://docs.microsoft.com/en-us/azure/governance/policy/concepts/rego-for-aks
- Behavioral changes
- All AKS Standard LBs will now have TCP Reset flag set to true.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu-1604-2020.03.11.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu-1804-2020.03.11.
This release is rolling out to all regions
- As previously announced we have retired support for Kubernetes 1.13 releases.
- AKS API version 2020-04-01 will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- Features
- An update to AAD integration (AADv2) is in public preview. Code has been rolled out; documents and cli extension to be published in the week of 23rd March.
- AKS now exposes the balance-similar-node-groups setting on cluster autoscaler, which enables evenly balanced numbers of auto-scaled nodes across nodepools.
- AKS has added 2 new built-in storage classes for Azure Files Standard (azurefile) and Azure Files Premium (azurefile-premium).
- AKS Clusters using Managed Identity are now Generally Available (GA) and will no longer need a service principal.
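A minimal sketch of creating a cluster with the now-GA managed identity support instead of a service principal; the names are placeholders:

```
# Create a cluster that uses a system-assigned managed identity (no service principal)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-managed-identity
```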
- Behavioral Changes
- The default azure disk storage class configuration has been changed from `Standard_LRS` to `StandardSSD_LRS`, and `allowVolumeExpansion` has been set to true.
- Event deletion from the cluster will be audited to increase threat detection.
- Bug Fixes
- A change in how swap nodes (used during upgrade of VMSS) are deleted from the cluster to increase reliability.
- Component Updates
- AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.1075.200227.
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. If you plan to upgrade to this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16.
- AKS API version 2020-04-01 will default to VMSS (Virtual Machine Scale Sets), SLB (Standard Load Balancer) and RBAC enabled.
- AKS has introduced AKS Ubuntu 18.04 in preview. During this time we will provide both OS versions side by side. After AKS Ubuntu 18.04 is GA, on the next cluster upgrade, clusters running AKS Ubuntu 16.04 will receive this new image.
- [Egress Breaking Change] Azure MCR has Updated its CDN endpoints - Read about it here: #1476
- Features
- Kubernetes version 1.16 is now Generally Available (GA) on AKS. (1.13 is being retired as previously communicated).
- New Kubernetes patch versions available, v1.15.10, v1.16.7.
- Preview features
- New Kubernetes patch versions (v1.17.3) are available for v1.17 preview.
- AKS will now generate a default Windows username and password when creating a cluster (similarly to ssh keys for Linux nodes). Customers can then add Windows pools to any newly created cluster without needing to have explicitly specified these parameters at create time. Customers can also reset this username and password at any time if they need to.
- Note that, as before, you can only add Windows nodepools to clusters using VMSS and AzureCNI.
- AKS now supports a new AKS base image based on Ubuntu 18.04 LTS.
- You can test it by following:
```
# Install or update the extension
az extension add --name aks-preview
# Register the preview feature flag
az feature register --name UseCustomizedUbuntuPreview --namespace Microsoft.ContainerService
# Create 18.04 based cluster
az aks create -g <CLUSTER RG> -n <CLUSTER NAME> --aks-custom-headers CustomizedUbuntu=aks-ubuntu-1804
```
- If you want to continue to create 16.04 GA clusters, just omit the --aks-custom-headers flag.
- Behavioral Changes
- To ensure users correctly configure the OutboundType: UDR feature, AKS now validates not only that a Route Table is present but also that it contains a default route from 0.0.0.0/0 to allow egress through an appliance, FW, on-prem GW, etc. More details on how to correctly configure this feature can be found here: https://docs.microsoft.com/en-us/azure/aks/egress-outboundtype.
- AKS enforces password expiration as part of CIS compliance but excludes the Linux admin account, which uses public key auth only. All accounts created using a password will be subject to this enforcement.
- As usual, with the GA of 1.16 the AKS default version follows n-1 and is now 1.15
- As per Azure#1304 AKS will now upgrade the rest of the fleet to CoreDNS 1.6.6 after upgrading only non-Proxy users on Release 2020-01-27.
- Component Updates
- AKS Ubuntu 16.04 image updated to AKSUbuntu:1604:2020.03.05.
- AKS Ubuntu 18.04 image release notes: AKSUbuntu:1804:2020.03.05.
- Updated to Moby 3.0.10 - https://github.com/Azure/moby/releases/tag/3.0.10.
- Updated Azure CNI plugin version for Linux to 1.0.33 and Azure CNI plugin version for Windows 1.0.30 - https://github.com/Azure/azure-container-networking/releases.
- External DNS image was updated to v0.6.0.
- (Added 03/16/2020) AKS Windows image has been updated to 2019-datacenter-core-smalldisk-17763.973.200213
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- 1.16 will GA on the week of March 9th and you will no longer be able to create 1.13.x based clusters or nodepools.
- Features
- Added balance-similar-node-groups as an additional parameter users can configure for AKS Managed Cluster Autoscaler (CA)
- Behavioral Changes
- For enhanced security AKS has removed CHACHA from API server accepted tls cipher suites.
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- The introduction of Kubernetes v1.16 in the last release marked the start of the deprecation of v1.13 in AKS. 1.13 is scheduled to be retired on February 28th.
- Features
- AKS now supports Service Account Token Volume Projection
- Preview Features
- AKS now supports Azure Spot NodePools
- Bug Fixes
- Fixed bug on Windows Nodepools preview where vnetCidrs were sometimes not set correctly on Windows nodepools resulting in wrong NAT exceptions on Windows nodes.
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- The introduction of Kubernetes v1.16 in the last release marked the start of the deprecation of v1.13 in AKS. 1.13 is scheduled to be retired on February 28th.
- New Features
- AKS Cluster AutoScaler now supports configuring the autoscaler profile parameters. https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler#using-the-autoscaler-profile
- Bug Fixes
- Fixed bug when upgrading Virtual Machine Availability Set (VMAS) clusters that would trigger a cancelled PutNicOperation
- Fixed bug causing throttling when using Internal Load Balancer
- Preview Features
- AKS now supports adding tags and labels to nodepools
- Component Updates
- AKS VHD image updated to aks-ubuntu-1604-202002_202002.12
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- The introduction of Kubernetes v1.16 in the last release marked the start of the deprecation of v1.13 in AKS. 1.13 is scheduled to be retired on February 28th.
- New Features
- Virtual Nodes are now supported in Canada Central
- AKS now supports Service Account Token Volume Projection
- Preview Features
- Windows nodepools will change to use a vhd image provided by aks-engine. This release updates the Windows base image to version: 17763.864.191211 --> Rel Notes: https://github.com/Azure/aks-engine/blob/master/vhd/release-notes/aks-windows/2019-datacenter-core-smalldisk-17763.864.191211.txt
- Important: With this change, the Image Publisher also changes to "microsoft-aks"; as such, existing node pools cannot upgrade to this new image. To get the newest OS image, you'll have to create a new node pool.
- Bug Fixes
- Improved error message when attempting to skip minor versions when performing an upgrade operation.
- Fixed a bug where the dashboard would not work when RBAC was set to false for kubernetes v1.16/v1.17
- Behavioral Changes
- AKS has released a new API version
This release is rolling out to all regions
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- CoreDNS has been updated to v1.6.6. This change can affect users using the deprecated Proxy plugin which is no longer supported. Users should replace that with the Forward Plugin. Azure#1304
- The introduction of Kubernetes v1.16 in the last release marked the start of the deprecation of v1.13 in AKS. 1.13 is scheduled to be retired on February 28th.
- New Features
- AKS now supports specifying the Outbound Port and Idle Timeout properties on the Azure SLB. https://aka.ms/aks/slb-ports
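A minimal sketch of setting the new SLB outbound port and idle timeout properties at cluster creation; the values and names below are placeholders:

```
# Create a Standard Load Balancer cluster with explicit outbound ports and idle timeout
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --load-balancer-sku standard \
  --load-balancer-managed-outbound-ip-count 2 \
  --load-balancer-outbound-ports 1024 \
  --load-balancer-idle-timeout 5
```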
- Bug Fixes
- Fixed a bug that caused a billing extension error.
- Preview features
- AKS now supports specifying Outbound type to define if the cluster should egress through the Standard Load Balancer (SLB) or a custom UDR (that sends egress traffic through a custom FW, on-prem gateway, etc.) Egress requirements are still the same, wherever the traffic egresses from. https://aka.ms/aks/egress
- Behavioral Changes
- The private cluster FQDN format has changed from guid..azmk8s.io to guid.privatelink..azmk8s.io
This release is rolling out to all regions
- AKS has updated supported versions as announced in this service update and AKS issue to move from the "N-3" to "N-2" window. Starting December 9th, 2019 AKS has removed support for anything older than K8s 1.13 and scoped the active support window to K8s 1.13, 1.14, and 1.15.
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- CoreDNS will be updated to v1.6.6. This change can affect users using the deprecated Proxy plugin which is no longer supported. Users should replace that with the Forward Plugin. Azure#1304
- With the introduction of Kubernetes v1.16 this marks the start of the deprecation for v1.13 in AKS. 1.13 is scheduled to be retired on February 28th.
- New Features
- AKS now supports Customer-Managed keys (BYOK) to be used for encryption of both OS and Data Disks of AKS clusters. https://docs.microsoft.com/en-us/azure/aks/azure-disk-customer-managed-keys
- New Supported SKUs: Standard_ND40s_v3, Standard_ND40rs_v2, Standard_D_v4, Standard_E_v4 and Standard_NP families
- New supported patch version for kubernetes v1.15 (v1.15.7)
- AKS now supports up to 10 nodepools.
- Virtual Nodes is now supported in Korea Central
- AKS now supports setting tags per nodepool, which will propagate automatically to all nodes in the nodepool.
- Preview Features
- AKS now supports Kubernetes versions 1.16 (1.16.1, 1.16.2) and 1.17 (1.17.0) in preview.
- Bug Fixes
- Fixed bug with the calico-typha health check in cases where localhost doesn't resolve to 127.0.0.1
- Fixed validation bug where users could not deploy AKS at the same time as their SLB Public IP resource
- For clusters using Managed Identities and addons a bug was fixed where the addons' identity information was not displayed correctly.
- Fixed bug where Accelerated Networking would be disabled after an upgrade.
- Fixed issue while retrying to create the SLB default egress IP.
- Fixed bug where DS3_v2 would not be Network Accelerated despite supporting it.
- Fixed several issues where under specific conditions users could see Azure API throttling on their subscriptions. - Azure#1413
- Fixed bug with `az aks reset-credentials --reset-aad` that would require manual intervention to complete.
- Component Updates
- Updated to Moby 3.0.8 - https://github.com/Azure/moby/releases/tag/3.0.8
- Updated AKS-Engine to 0.45.0 - https://github.com/Azure/aks-engine/releases/tag/v0.45.0
- Azure Monitor for Containers Agent updated to 01072020 release
- Important Node cpu, node memory, container cpu and container memory metrics were obtained earlier by querying kubelet readonly port(http://$NODE_IP:10255). Agent now supports getting these metrics from kubelet port(https://$NODE_IP:10250) as well. During the agent startup, it checks for connectivity to kubelet port(https://$NODE_IP:10250), and if it fails the metrics source is defaulted to readonly port(http://$NODE_IP:10255).
- AKS VHD image updated to aks-ubuntu-1604-202001_2020.01.10
This release is rolling out to all regions
- AKS is updating supported versions as announced in this service update and AKS issue to move from the "N-3" to "N-2" window. Starting December 9th, 2019 AKS will remove support for anything older than K8s 1.13 and scope the active support window to K8s 1.13, 1.14, and 1.15.
- K8s 1.16 introduces API deprecations which will impact user workloads as described in this AKS issue. When AKS supports this version user action is required to remove dependencies on the deprecated APIs to avoid disruption to workloads. Ensure you have taken this action prior to upgrading to K8s 1.16 when it is available in AKS.
- Bug Fixes
- Fixed cases of failed cluster creations due to an "Unregistering" or "NotRegistered" state for a subscription's access to NRP or CRP.
- Added AKS validation that service principal secrets may not exceed 190 bytes.
- Behavior Changes
- Fixed a bug where outbound IP creation for Standard Load Balancer did not retry when receiving internal server error from Network Resource Provider.
- Improved validation of agent pool operations to only validate agent pool count when cluster autoscaler is turned off. When cluster autoscaler is turned on the minCount and maxCount set are used for count validations.
This release is rolling out to all regions
- New Features
- Announcing AKS Diagnostics in Public Preview
- Hopefully, most of the time your AKS clusters are running happily and healthily. However, when things go wrong, we want to make sure that our AKS customers are empowered to easily and quickly figure out what's wrong and the next steps for mitigation or deeper investigation.
- AKS Diagnostics is a guided and interactive experience in the Azure Portal that helps you diagnose and solve potential issues with your AKS cluster, such as identity and security management, node issues, CRUD operations and more. Detectors in AKS Diagnostics intelligently find issues and observations as well as recommend next steps. This feature comes configured completely out-of-the-box and is free for all our AKS customers.
- Get started and learn more here: https://aka.ms/aks/diagnostics
- Support for new regions:
- Norway East
- Norway West
- Bug Fixes
- Fixed enforcement that node pool versions can never be greater than the control plane `major.minor.patch` version.
- Fixed error messages incorrectly stating a version was not supported to return proper errors detailing which validation failed.
- Added retries to retrieve a managed resource group. Errors can be returned with `ResourceGroupNotFound` due to slow Azure Resource Manager (ARM) data replication when AKS tries to place new managed resources into the managed resource group.
- Behavior Changes
- Added a label `control-plane=true` to the `kube-system` namespace
- Component updates
- AKS-Engine has been updated to v0.43.0
This release is rolling out to all regions
- With the official 2019-11-04 Azure CLI release (v2.0.76), AKS has defaulted new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB). Users can still explicitly choose VMAS and BLB.
- From 2019-10-14 AKS Portal has defaulted new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- From 2019-11-04 the CLI extension has a new parameter --zones to replace --node-zones, which specifies the zones to be used by the cluster nodes.
- New Features
- AKS has created a new default role clusterMonitoringUser to simplify the Azure Monitor Live metrics onboard experience so that moving forward users don't need to explicitly grant those permissions. This user will have 'GET' and 'LIST' permissions to 'POD/LOGS', 'EVENTS', 'DEPLOYMENTS', 'PODS', 'REPLICASETS' and 'NODES'.
- Support for new regions:
- Germany North
- Germany West Central
- UAE North
- Switzerland North
- Switzerland West
- On-Demand Certificate Rotation is now Generally Available: https://docs.microsoft.com/en-us/azure/aks/certificate-rotation
- Bug Fixes
- Fixed bug with MC_ infra resource group not being created/propagated quickly enough and triggering ResourceGroupNotFound errors.
- Fixed missing cloud provider role binding: Azure#1104
- Fixed nodepool bug where a PUT would be accepted while the pool was being deleted.
- Correctly assign the cluster-admin clusterrolebinding to the clusterAdmin user in all cases.
- Fixed several upstream bugs with attach/detach in VMSS:
- kubernetes/kubernetes#85158
- kubernetes/kubernetes#83685
- kubernetes/kubernetes#84917
- AKS is rolling these changes in automatically and users do not need to upgrade.
- Fixed a bug upgrading Basic LB clusters that were using the preview of API Authorized Ranges feature, only supported in GA with Standard LB.
- Behavior Changes
- Add priorityClass for calico-node and ensure calico-node tolerates all NoSchedule taints. This ensures calico-node will still be scheduled to all nodes even when users have added other node taints.
- Component updates
- Metrics server has been updated to v0.3.5
This release is rolling out to all regions
- With the official 2019-11-04 Azure CLI release, AKS will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB). Users can still explicitly choose VMAS and BLB.
- From 2019-10-14 AKS Portal will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- From 2019-11-04 the CLI extension will have a new parameter --zones to replace --node-zones, which specifies the zones to be used by the cluster nodes.
- New Features
- Multiple Nodepools backed AKS clusters are now Generally Available (GA)
- Cluster Autoscaler is now Generally Available (GA)
- Availability Zones are now Generally Available (GA)
- AKS API server Authorized IP Ranges is now Generally Available (GA)
- Kubernetes versions 1.15.5, 1.14.8 and 1.13.12 have been added.
- These versions have new API call logic that helps users with many AKS clusters in the same subscription incur less throttling.
- These versions have security fixes for CVE-2019-11253
- The minimum `--max-pods` value has been altered from 30 per node to 30 per nodepool. Each node will have a hard minimum of 10 pods the user can specify, but this value can only be used if the total pods across all nodes in the nodepool accrue to 30+.
- Bug Fixes
- Added additional validation to nodepool operations to check for enough address space. If there is no address space left for a scale/upgrade operation, the operation will not start and give a descriptive error message.
- Fixed bug with Nodepool operations and `az aks update-credentials` to reflect on the agentpool state.
- Fixed bug on Nodepool operations when using incorrect SKUs to return a more descriptive error.
- Added validation to block `az aks update-credentials` if the nodepool is not ready, to avoid conflicts.
- Node count on the Nodepool is ignored when the user has autoscaling enabled. (Manual scale with autoscaler enabled is not allowed.)
- Fixed bug where some clusters would still receive an older Moby version (3.0.6). Current version is 3.0.7
- Preview Features
- Windows docker runtime updated to 19.03.2
- Component updates
- Moby has been updated to v3.0.7
- AKS-Engine has been updated to v0.41.5
This release is rolling out to all regions
- With the official 2019-11-04 Azure CLI release, AKS will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- From 2019-10-14 AKS Portal will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB). Users can still explicitly choose VMAS and BLB.
- From 2019-11-04 the CLI extension will have a new parameter --zones to replace --node-zones, which specifies the zones to be used by the cluster nodes.
- Bug Fixes
- Fixed a bug where nodepool API would not accept and handle empty fields correctly, "", "{}", "{"properties":{}}".
- Fixed a bug with http application routing addon where portal would lowercase all addon names and the input was not accepted.
- Upgrade operation will not fail when manual changes have been applied to the SinglePlacementGroup property on underlying VMSS.
- Fixed bug where customers trying to enable pod security policy without providing k8s version in the request would encounter failure (500 internal error).
- Fixed bug where NPM pods would consume an excessive amount of memory.
- Preview Features
- Updated windows image to the latest version.
- Component Updates
- Updated Azure Network Policy (NPM) version to v1.0.28
- Azure Monitor for Containers Agent updated to 2019-10-11 release: https://github.com/microsoft/Docker-Provider/releases
This release is rolling out to all regions
- With the official 2019-11-04 Azure CLI release, AKS will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- From 2019-10-14 AKS Portal will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB). Users can still explicitly choose VMAS and BLB.
- Behavioral Changes
- Improved process and speed of upgrade to reduce impact to pods during the process
- Bug Fixes
- Fixed a bug where kubelet reserved values were applied only to the primary node pool. Now correctly applied to all nodepools if using multiple nodepools.
- Added additional service principal validation on Upgrade.
- Prevented multiple concurrent provisioning operations.
- New Features
- Kubernetes versions 1.15.4, 1.14.7 and 1.13.11 have been added.
- Component Updates
- AKS-Engine has been updated to v0.41.4
This release is rolling out to all regions
- With the official 2019-11-04 Azure CLI release, AKS will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- Support for node pool taints and public ip assignment per node with AKS will be available in Azure CLI extension v0.4.17
- AKS Availability Zone support has been expanded to the following regions:
- Japan East
- UK South
- France Central
- East US
- Central US
- Australia East
- New Features
- Customers may use NetworkPolicies with Azure CNI and Kubenet based clusters:
- Managed Identity (MSI) support is now in public preview.
- Bug Fixes
- Fix a bug where the removal of an outbound rule from standard load balancer in the AKS node resource group could cause the failure of subsequent cluster operations.
- Fixed the issue impacting GPU enabled clusters being unable to install the required NVidia drivers.
- Fixed an issue where customers could encounter a CSE (custom script extension) error 99 during operations.
- Fixed an issue with the Azure Portal cluster metrics multiplying the metric count based on the viewed window of time. Moving forward the default for these metrics will be correctly set to average() as opposed to sum().
- For customers with metrics already enabled and in-use in portal, the sum() type will continue to be supported.
- Component Updates
- AKS-Engine has been updated to v0.40.1
- Preview Features
- Fixed an issue where nodes provisioned by cluster autoscaler would be de-provisioned when resetting or updating AAD credentials.
- Azure CLI 2.0.74 released with key AKS changes
- https://github.com/Azure/azure-cli/releases/tag/azure-cli-2.0.74
- Added `--load-balancer-sku` parameter to the `aks create` command, which allows for creating an AKS cluster with SLB
- Added `--load-balancer-managed-outbound-ip-count`, `--load-balancer-outbound-ips` and `--load-balancer-outbound-ip-prefixes` parameters to the `aks [create|update]` commands, which allow for updating the load balancer profile of an AKS cluster with SLB
- Added `--vm-set-type` parameter to the `aks create` command, which allows specifying the VM set type of an AKS cluster (vmas or vmss)
- Bug Fixes
- Fixed an issue where the node pool count rendered in the portal would be incorrect when not using the multiple node pools feature.
- Fixed an issue to ensure a cluster upgrade will upgrade both the control plane and agent pools for clusters using VMSS, but not multiple agent pools.
- Resolved an issue with cluster upgrades that could remove existing diagnostics settings and data erroneously.
- Fixed an issue where AKS was not validating user defined taint formats per agent pool resulting in failures at cluster creation time.
- Behavioral Changes
- Increased the reserved CPU cores for kubelets to scale proportionally to cores available on the kubelet's host node. Read more about AKS resource reservation here.
- Preview Features
- Fixed an issue where AKS was not enforcing the minimum Kubernetes version required at additional agent pool creation time when using the multiple node pools feature.
- Fixed an issue where creating new agent pools will overwrite the route table and customers would lose their route table rules. Fixes issue #1212.
This release is rolling out to all regions
- The announced updates to default new clusters to VMSS/SLB configurations are under way; if you are using the `aks-preview` Azure CLI extension, all clusters created are now defaulted to VMSS & SLB.
- AKS Kubernetes 1.10 support will be end-of-lifed on Oct 25, 2019
- AKS Kubernetes 1.11 & 1.12 support will be end-of-lifed on Dec 9, 2019
- New Documentation additions:
- The AKS team is pleased to announce the new `aks-periscope` tool.
- AKS Periscope will allow AKS customers to run initial diagnostics and collect logs into an Azure Blob storage account to help them analyze and identify potential problems.
- For more information please see: https://aka.ms/AKSPeriscope
- New Features
- AKS now GA in the Azure US Gov Virginia region.
- Control of egress traffic for cluster nodes in AKS is now GA
- This feature allows you to restrict outbound network communication for your cluster as required for compliance or other secure use-cases.
- https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
- Known Issues:
- Clusters that do not have PSPs enabled will fail when upgrading to Kubernetes 1.15
- Bug Fixes
- Fixed an issue where excessive logs (e.g. node/status patch events) were being emitted to the audit log stream and stored. Customers should now see greatly reduced audit log volume
- Preview Features
- The `--control-plane-only` flag has been added to the `aks-preview` extension - this command will force the upgrade of the customer's control plane without simultaneously upgrading the other nodepools. This functionality is only supported for multi-pool clusters.
- Service Updates
- AKS Kubernetes 1.10 support will be end-of-lifed on Oct 25, 2019
- AKS Kubernetes 1.11 & 1.12 support will be end-of-lifed on Dec 9, 2019
- Note that AKS Kubernetes 1.15 support is in public preview; on Dec 9, 2019 the supported minor Kubernetes versions will be 1.13, 1.14, 1.15
- Azure Updates blog post with additional details will be published this week
- New Features
- VMSS backed AKS clusters are now GA
- VMSS is the underlying compute resource type which enables features such as cluster autoscaler, but is a separate feature.
- See https://docs.microsoft.com/azure/virtual-machine-scale-sets/overview and https://docs.microsoft.com/azure/aks/cluster-autoscaler for more information.
- NOTE: Official support in the Azure CLI for AKS+VMSS will be released on 2019-09-24 (version 2.0.74 https://github.com/Azure/azure-cli/milestone/73)
- Standard Load Balancer support (SLB) is now GA
- See https://docs.microsoft.com/azure/aks/load-balancer-standard for documentation.
- NOTE: Official support in the Azure CLI for AKS+SLB will be released on 2019-09-24 (version 2.0.74 https://github.com/Azure/azure-cli/milestone/73)
- Support for the following VM SKUs is now released: Standard_D48_v3, Standard_D48s_v3, Standard_E48_v3, Standard_E48s_v3, Standard_F48s_v2, Standard_L48s_v2, Standard_M208ms_v2, Standard_M208s_v2
- Bug Fixes
- CCP fallback was not working as expected. This is because we updated CCP to turn on the useCCPPool flag based on the toggle, but we did not refresh the useCCPPool flag after the change, so the flag remained false even though the toggle changed it to true.
- Fixed an issue where cluster upgrade could be blocked when the "managedBy" property is missing from the node resource group.
- Fixed an issue where ingress controller network policy would block all egress traffic when assigned to pods (using label selectors).
- Behavioral Changes
- Review the planned changes for new cluster creation defaults referenced in Release 2019-08-26
- Preview Features
- Fixed an issue where multiple nodepool clusters would use the incorrect version(s) and block upgrades.
- Fixed an issue where AKS would incorrectly allow customers to specify different versions for multiple nodepools.
- Fixed an issue where the incorrect node count would be returned or fail to update when using multiple node pools
- Preview Features
- Kubernetes 1.15 is now in Preview (1.15.3)
- Bug Fixes
- A bug where kube-svc-redirect would crash due to an invalid bash line has been fixed.
- A recent Kubernetes dashboard change to enable self-signed certs has been reverted due to browser issues.
- A bug where the OMSAgent pod would fail scheduling on a user tainted node has been fixed with proper toleration on the OMSAgent pod.
- A preview bug allowing more than 8 node pools to be created has been fixed to enforce a max of 8 node pools per cluster.
- A preview bug that would change the primary node pool when adding a new node pool has been fixed.
- Behavioral Changes
- Review the planned changes for new cluster creation defaults referenced in Release 2019-08-26
- Component Updates
- aks-engine has been updated to v0.40.0
This release is rolling out to all regions
- Features
- Added prometheus annotation to coredns to facilitate metric port discovery
- Bug Fixes
- Fixed bug with older 1.8 clusters that was preventing clusters from upgrading.
- Important: this was a best effort fix since these cluster versions are out of support. Please upgrade to a currently supported version
- For information on how AKS handles Kubernetes version support see: Supported Kubernetes versions in Azure
- Removed the default restricted Pod Security Policy to solve race condition with containers not seeing the user in their config. This policy can be applied by customers.
- Fixed a bug with kube-proxy, ip-masq-agent and kube-svc-redirect where in certain scenarios they could try to access iptables at the same time.
- Preview Features
- CLI extension updated for new Standard Load Balancer (SLB) and VM Scale Set (VMSS) parameters (see the example below):
- --vm-set-type - Agent pool VM set type. VirtualMachineScaleSets or AvailabilitySet.
- --load-balancer-sku - Azure Load Balancer SKU selection for your cluster. Basic or Standard.
- --load-balancer-outbound-ip-prefixes - Comma-separated public IP prefix resource IDs for load balancer outbound connection. Valid for Standard SKU load balancer clusters only.
- --load-balancer-outbound-ips - Comma-separated public IP resource IDs for load balancer outbound connection. Valid for Standard SKU load balancer clusters only.
- --load-balancer-managed-outbound-ip-count - Desired number of automatically created and managed outbound IPs for load balancer outbound connection. Valid for Standard SKU load balancer clusters only.
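As an illustrative sketch only (resource names and IP counts are placeholders, not taken from this note), a preview cluster create using these parameters could look like:
az aks create --resource-group myResourceGroup --name myAKSCluster --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --load-balancer-managed-outbound-ip-count 2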
- Behavioral Changes
- Starting from 2019-09-10, the preview CLI extension will default new cluster creates to VM Scale-Sets and Standard Load Balancers (VMSS/SLB) instead of VM Availability Sets and Basic Load Balancers (VMAS/BLB).
- Starting from 2019-10-22 the official CLI and Azure Portal will default new cluster creates to VMSS/SLB instead of VMAS/BLB.
- These client default changes are important to be aware of because:
- SLB will automatically assign a public IP to enable egress. This is a requirement of Azure Standard Load Balancers; to learn more about Standard vs. Basic, read here.
- SLB enables bringing your own IP addresses to be used; you will be able to define these with the new parameters.
- The capability to use an SLB without any public IP assigned is on the roadmap plan.
- You may still provision a basic load balancer by specifying "basic" for the "loadbalancersku" property at cluster create time.
- Read more at https://aka.ms/aks/slb
- Component Updates
- aks-engine has been updated to v0.39.2
- Azure Monitor for Containers Agent updated to 2019-08-22 release: https://github.com/microsoft/Docker-Provider/releases
This release is rolling out to all regions
Please Note: This release includes new Kubernetes versions 1.13.10 & 1.14.6; these include the fixes for CVE-2019-9512 and CVE-2019-9514. Please see our customer guidance
- Bug Fixes
- New kubernetes versions released to fix CVE-2019-9512 and CVE-2019-9514
- Kubernetes 1.14.6
- Kubernetes 1.13.10
- Fixed Azure Network Policy bug with multiple labels under a matchLabels selector.
- Fix for a CNI lock timeout issue caused by a race condition when starting the telemetry process.
- Fixed issue creating AKS clusters using supported Promo SKUs
- Component Updates
- aks-engine has been updated to v0.38.8
- Azure CNI has been updated to v1.0.25
This release is rolling out to all regions
- Bug Fixes
- Several bug fixes for AKS NodePool creation and other CRUD operations.
- Fixed audit log bug on older < 1.9.0 clusters.
- Important: this was a best effort fix since these cluster versions are out of support. Please upgrade to a currently supported version
- For information on how AKS handles Kubernetes version support see: Supported Kubernetes versions in Azure
- Improved error messaging for VM size errors, including actions to take.
- Fixed a PUT request bug that caused an unexpected API restart.
- Behavioral Changes
- AKS has released an API update; documentation is available here: https://docs.microsoft.com/en-us/rest/api/aks/managedclusters
- Important: With this API update there are changes to the API server IP whitelisting API. It is now under ManagedClusterAPIServerAccessProfile, where previously it was a top-level property.
This release is rolling out to all regions
Please Note: This release includes new Kubernetes versions 1.13.9 & 1.14.5 (GA today); these include the fixes for CVE-2019-11247 and CVE-2019-11249. Please see our customer guidance
- New Features
- Kubernetes 1.14 is now GA (1.14.5)
- As of Monday August 12th (2019-08-12) customers running Kubernetes 1.10.x have 60 days (2019-10-14) to upgrade to a supported release. Please see AKS supported versions document for more information.
- Kubernetes Audit log support is now GA.
- Bug Fixes
- Fixed an issue where creating a cluster with a custom subnet would return an HTTP error 500 vs 400 when the subnet could not be found.
- Behavioral Changes
- Preview Features
- Fixed an issue where customers could not create a new node pool with AZs even if they were already using SLBs.
- Fixed an issue where VMSS cluster commands could return the incorrect node count.
- Component Updates
- aks-engine has been updated to v0.38.7
- New Features
- Customers may now create multiple AKS clusters using ARM templates regardless of what region the clusters are located in.
- Bug Fixes
- AKS has resolved the issue(s) with missing metrics in the default metrics blade.
- An issue where the --pod-max-pids was set to 100 (maximum) for clusters and re-applied during upgrade, causing pthread_create() failed (11: Resource temporarily unavailable) pod start failures, was fixed.
- See Azure/aks-engine#1623 for more information
- Preview Features
- AKS is now in Public Preview in the Azure Government (Fairfax, VA) region. Please note the following:
- Azure Portal support for AKS is in progress; for now, customers must use the Azure CLI for all cluster operations.
- AKS preview features are not supported in Azure Government currently and will be supported when those features are GA.
- Fixed an issue where a delete request for a locked VMSS node would get an incorrect and unclear InternalError failure - the error message and error code have both been fixed.
- Fixed an issue with egress filtering where managed AKS pods would incorrectly use the IP address to connect instead of the FQDN.
- Fixed an issue with the SLB preview where AKS allowed the customer to provide an IP address already in use by another SLB.
- An issue that prevented customers from using normal cluster operations on multiple node pool clusters with a single VMSS pool has been fixed.
- Component Updates
- AKS-Engine has been updated to v0.38.4
- Preview Features
- An issue where new Windows node pools in an existing cluster would not get updated Windows versions has been fixed.
- TCP reset has been set for all new clusters using the SLB preview.
- An issue where AKS would trigger a scale operation requested on a previously deleted VMSS cluster has been fixed.
- Component Updates
- AKS-Engine has been updated to v0.38.3
Important behavioral change: All AKS clusters are being updated to pull all needed container images for cluster operations from Azure Container Registry. This means that if you have custom allow/deny lists, port filtering, etc., you will need to update your network configuration to allow ACR.
Please see the documentation for more information including all required AKS cluster ports and URLs
- New Features
- Support for the M, NC_promo and DS_v3 Azure Compute VM SKUs has been added.
- Bug Fixes
- Fixed an issue with clusters created in the Canada and Australia regions between 2019-07-09 and 2019-07-10, as well as US region clusters created on 2019-07-10, where customers would receive error: Changing property 'platformFaultDomainCount' is not allowed errors.
- Behavioral Changes
- The error message returned to users when attempting to create clusters with an unsupported Kubernetes version in that region has been fixed.
- As noted above, all container images required by AKS clusters for cluster CRUD operations have been moved to Azure Container Registry. This means that customers must update allow/deny rules and ports. See: Required ports and addresses for AKS clusters
- Preview Features
- Fixed a VMSS cluster upgrade failure that would return: Changing property 'type' is not allowed.
- An issue where az aks nodepool list would return the incorrect node count has been resolved.
- Component Updates
- The Azure Monitor for Container agent has been updated to the 2019-07-09 release
- Please see the release notes.
- New Features
- Kubernetes versions 1.11.10 and 1.13.7 have been added. Customers are encouraged to upgrade.
- For information on how AKS handles Kubernetes version support see: Supported Kubernetes versions in Azure
- The az aks update-credentials command now supports Azure tenant migration of your AKS cluster. Follow the instructions in Choose to update or create a service principal and then execute the Update AKS cluster with new credentials command, passing in the --tenant-id argument (see the example below).
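A hedged example of the flow described above; the service principal and tenant values are placeholders, not from this note:
az aks update-credentials --resource-group myResourceGroup --name myAKSCluster --reset-service-principal --service-principal <new-sp-app-id> --client-secret <new-sp-secret> --tenant-id <new-tenant-id>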
- Behavioral Changes
- All new clusters now have --protect-kernel-defaults enabled.
- Preview Features
- Kubernetes 1.14.3 is now available for preview users.
- Azure availability zone support is now in public preview (a CLI sketch follows this list).
- This feature enables customers to distribute their AKS clusters across availability zones providing a higher level of availability.
- Please see AKS previews for additional information.
- For all previews, please see the previews document for opt-in instructions and documentation links.
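A sketch of a zone-spanning preview cluster create, assuming the preview --zones parameter together with VMSS and Standard Load Balancer (resource names and zone values are placeholders):
az aks create --resource-group myResourceGroup --name myAKSCluster --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --zones 1 2 3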
- Component Updates
- aks-engine has been updated to version 0.37.5
- Azure CNI has been updated to version 1.0.22
- Moby has been updated to 3.0.5 from 3.0.4
- Note that this version number is Azure specific, the Moby project does not have official releases / release numbers.
- Bug Fixes
- Fixed an issue with az aks update-credentials where the command would not take special characters and nodes would get incorrect values. Note that double quote ", backslash \, ampersand &, and angle quotations <> are still NOT allowed to be used as password characters.
- Fixed an issue with update-credentials where the command would not work for VMSS clusters with more than 10 instances.
- AKS now has validation to check for Resource Locks when performing Scale and Upgrade operations.
- Fixed an issue where GPU nodes could fail to install the GPU driver due to ongoing background apt operations.
- Adjusted the timeout value for Service Principal update based on the number of nodes in the cluster, to accommodate larger clusters.
- New Features
- AKS now supports OS disk sizes of up to 2048GiB.
- Persistent Tags
- Custom tags can now be passed to AKS and will be persisted onto the MC_* infrastructure Resource Group. Note: They will NOT be applied to all child resources in that RG, such as VMs, VNets, disks, etc.
- Preview Features
- Windows Node Pools
- AKS updated the default Windows image to the latest Windows patch release.
- API server authorized IP ranges
- The max number of API server authorized IP ranges has now increased to 100.
- Component Updates
- AKS-Engine has been updated to v0.35.6
- This change includes a new AKS VHD with the Linux Kernel CVE fixes. See more: https://github.com/Azure/AKS/issues/
- This new VHD also fixes broken IPv6 support for the host.
- Bug Fixes
- Fixed an issue that could result in a failed service principal update and AKS cluster creation.
- Fixed an issue where deploying AKS clusters using ARM templates without a defined Service Principal would incorrectly pass validation.
- Preview Features
- Azure Standard load balancer support is now in public preview.
- This has been a long awaited feature which enables selection of the SKU type offered by Azure Load Balancer to be used with your AKS cluster. Please see AKS previews for additional information.
- For all previews, please see the previews document for opt-in instructions and documentation links.
- Component Updates
- The Azure Monitor for Container agent has been updated to the 2019-06-14 release
- Please see the release notes.
- Behavioral Changes
- Important: Change in UDR and subnet behavior
- When using Kubenet with a custom subnet, AKS now checks if there is an existing associated route table.
- If that is the case, AKS will NOT attach the kubenet RT/routes automatically; they should be added manually to the existing RT (see the sketch below).
- If no route table exists, AKS will automatically attach the kubenet RT/routes.
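For the manual case, a hedged sketch of copying one kubenet pod route into an existing route table (names, CIDR, and next-hop IP are placeholders):
az network route-table route create --resource-group myResourceGroup --route-table-name myExistingRouteTable --name aks-node-0-pods --address-prefix 10.244.0.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.240.0.4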
- Preview Features
- A bug where users could not scale VMSS based clusters after disabling the cluster autoscaler has been fixed.
- A missing CRD for calico-enabled clusters (#1042) has been fixed.
- Bug Fixes
- Kubernetes taints and tolerations are now supported in all AKS regions.
- Taints & Tolerations are preserved for current cluster nodes and through upgrades, however they are not preserved through scale (up, down) operations.
- Preview Features
- A bug that prevented cluster agent pool deletions due to VMSS creation failures has been fixed.
- A bug preventing the cluster autoscaler from working with nodepool enabled clusters (one or more nodepools) has been fixed.
- A bug where the NSG would not be reset as needed during a nodepool create request has been fixed.
- Behavioral Changes
- AKS removed all weak CBC suite ciphers for API server. More info: https://blog.qualys.com/technology/2019/04/22/zombie-poodle-and-goldendoodle-vulnerabilities
- Component Updates
- AKS-Engine has been updated to v0.35.4
- New Features
- AKS is now available in the China East 2 and China North 2 Azure regions.
- AKS is now available in South Africa North
- The L and M series Virtual Machines are now supported
- Component Updates
- AKS-Engine has been updated to version 0.35.3
- CoreDNS has been upgraded from 1.2.2 to version 1.2.6
- Preview Features
- A bug where users could not delete an agent pool containing VMSS nodes if the VMSS node creation failed has been fixed.
- Behavioral Changes
- The 192.0.2.0/24 IP block is now reserved for AKS use. Clusters created in a VNet that overlaps with this block will fail pre-flight validation.
- Bug Fixes
- An issue where users running old AKS clusters attempting to upgrade would get a failed upgrade with an Internal Server Error has been fixed.
- An issue where Kubernetes 1.14.0 would not show in the Azure Portal or AKS Preview CLI with the 'Preview' or 'isPreview' tag has been resolved.
- An issue where customers would get excessive log entries due to missing Heapster rbac permissions has been fixed.
- An issue where AKS clusters could end up with missing DNS entries resulting in DNS resolution errors or crashes within CoreDNS has been resolved.
- Preview Features
- A bug where the AKS node count could be out of sync with the VMSS node count has been resolved.
- There is a known issue with the cluster autoscaler preview and multiple agent pools. The current autoscaler in preview is not compatible with multiple agent pools, and could not be disabled. We have fixed the issue that blocked disabling the autoscaler. A fix for multiple agent pools and the cluster autoscaler is in development.
- Windows node support for AKS is now in Public Preview
- Blog post: https://aka.ms/aks/windows
- Support and documentation:
- Documentation: https://aka.ms/aks/windowsdocs
- Issues may be filed on this Github repository (https://github.com/Azure/AKS) or raised as a Sev C support request. Support requests and issues for preview features do not have an SLA / SLO and are best-effort only.
- Do not enable preview features on production subscriptions or clusters.
- For all previews, please see the previews document for opt-in instructions and documentation links.
- Bug fixes
- An issue impacting Java workloads, where pods running Java workloads would consume all available node resources instead of respecting the pod resource limits defined by the user, has been resolved.
- https://bugs.openjdk.java.net/browse/JDK-8217766
- AKS-Engine PR for fix: Azure/aks-engine#1095
- Component Updates
- AKS-Engine has been updated to v0.35.1
- New Features
- Shared Subnets are now supported with Azure CNI.
- Users may bring / provide their own subnets to AKS clusters
- Subnets are no longer restricted to a single subnet per AKS cluster, users may now have multiple AKS clusters on a subnet.
- If the subnet provided to AKS has NSGs, those NSGs will be preserved and used.
- Warning: NSGs must respect: https://aka.ms/aksegress or the cluster might not come up or work properly.
- Note: Shared subnet support is not supported with VMSS (in preview)
- Bug Fixes
- A bug that blocked Azure CNI users from setting maxPods above 110 (maximum of 250) and that blocked existing clusters from scaling up when the value was over 110 for CNI has been fixed (see the example below).
- A validation bug blocking long DNS names used by customers has been fixed. For restrictions on DNS/Cluster names, please see https://aka.ms/aks-naming-rules
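An illustrative create that sets a higher maxPods value with Azure CNI (resource names and the value are placeholders):
az aks create --resource-group myResourceGroup --name myAKSCluster --network-plugin azure --max-pods 250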
- New Features
- Kubernetes Network Policies are GA
- See https://docs.microsoft.com/en-us/azure/aks/use-network-policies for documentation.
- Bug Fixes
- An issue customers reported with CoreDNS entering CrashLoopBackoff has been fixed. This was related to the upstream move to klog.
- An issue where AKS managed pods (within kube-system) did not have the correct tolerations, preventing them from being scheduled when customers use taints/tolerations, has been fixed.
- An issue with kube-dns crashing on specific config map override scenarios as seen in Azure/acs-engine#3534 has been resolved by updating to the latest upstream kube-dns release.
- An issue where customers could experience longer than normal create times for clusters tied to a blocking wait on heapster pods has been resolved.
- Preview Features
- New features in public preview:
- Secure access to the API server using authorized IP address ranges (see the example below)
- Locked down egress traffic
- This feature allows users to limit / whitelist the hosts used by AKS clusters.
- Multiple Node Pools
- For all previews, please see the previews document for opt-in instructions and documentation links.
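A hedged sketch of the authorized IP ranges preview (the parameter name is an assumption for that preview; the ranges are placeholders):
az aks update --resource-group myResourceGroup --name myAKSCluster --api-server-authorized-ip-ranges 73.140.245.0/24,20.30.40.50/32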
- Kubernetes 1.14 is now in Preview
- Do not use this for production clusters. This version is for early adopters and advanced users to test and validate.
- Accessing the Kubernetes 1.14 release requires the aks-preview CLI extension to be installed.
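The extension can be installed or updated with the standard Azure CLI extension commands:
az extension add --name aks-preview
az extension update --name aks-preview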
- New Features
- Users are no longer forced to create / pre-provision subnets when using Advanced networking. Instead, if you choose advanced networking and do not supply a subnet, AKS will create one on your behalf.
- Bug fixes
- An issue where AKS / the Azure CLI would silently ignore the --network-plugin=azure option and create clusters with Kubenet has been resolved.
- Specifically, there was a bug in the cluster creation workflow where users would specify --network-plugin=azure with Azure CNI / Advanced Networking but miss passing in the additional options (e.g. --pod-cidr, --service-cidr, etc.). If this occurred, the service would fall back and create the cluster with Kubenet instead.
- Preview Features
- Kubernetes 1.14 is now in Preview
- An issue with Network Policy and Calico where cluster creation could fail/time out and pods would enter a crashloop has been fixed.
- Azure#905
- Note, in order to get the fix properly applied, you should create a new cluster based on this release, or upgrade your existing cluster and then run the following clean up command after the upgrade is complete:
kubectl delete -f https://github.com/Azure/aks-engine/raw/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
- Component Updates
- Azure Monitoring for Containers has been updated to the 2019-04-23 release
- For more information, please see: https://github.com/Microsoft/docker-provider/tree/ci_feature_prod#04232019--
- Kubernetes 1.13 is GA
- The Kubernetes 1.9.x releases are now deprecated. All clusters on version 1.9 must be upgraded to a later release (1.10, 1.11, 1.12, 1.13) within 30 days. Clusters still on 1.9.x after 30 days (2019-05-25) will no longer be supported.
- During the deprecation period, 1.9.x will continue to appear in the available versions list. Once deprecation is completed 1.9 will be removed.
- (Region) North Central US is now available
- (Region) Japan West is now available
- New Features
- Customers may now provide custom Resource Group names.
- This means that users are no longer locked into the MC_* resource group name. On cluster creation you may pass in a custom RG and AKS will use that RG, inherit its permissions, and attach AKS resources to the customer-provided resource group (see the example below).
- Currently, the RG (resource group) you pass in must be new and cannot be a pre-existing RG. We are working on support for pre-existing RGs.
- This change requires newly provisioned clusters; existing clusters cannot be migrated to support this new capability. Cluster migration across subscriptions and RGs is not currently supported.
- AKS now properly associates existing route tables created by AKS when passing in custom VNET for Kubenet/Basic Networking. This does not support User Defined / Custom routes (UDRs).
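A hedged sketch of a create using a custom node resource group name, assuming the --node-resource-group parameter (the parameter name is an assumption; the note above does not name it):
az aks create --resource-group myResourceGroup --name myAKSCluster --node-resource-group myCustomNodeRG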
- Bug fixes
- An issue where two delete operations could be issued against a cluster simultaneously resulting in an unknown and unrecoverable state has been resolved.
- An issue where users could create a new AKS cluster and set the maxPods value too low has been resolved.
- Users have reported cluster crashes, unavailability and other issues when changing this setting. As AKS is a managed service, we provide sidecars and pods we deploy and manage as part of the cluster. However, users could define a maxPods value lower than the value required for the managed pods to run (e.g. 30). AKS now calculates the minimum number of pods via: maxPods or maxPods * vm_count > managed add-on pods
- Behavioral Changes
- AKS cluster creation now properly pre-checks the assigned service CIDR range to block against possible conflicts with the dns-service CIDR.
- As an example, a user could use 10.2.0.1/24 instead of 10.2.0.0/24, which would lead to IP conflicts. This is now validated/checked and if there is a conflict, a clear error is returned.
- AKS now correctly blocks/validates users who accidentally attempt an upgrade to a previous release (e.g. a downgrade).
- AKS now validates all CRUD operations to confirm the requested action will not fail due to IP address/subnet exhaustion. If a call is made that would exceed available addresses, the service correctly returns an error.
- The amount of memory allocated to the Kubernetes Dashboard has been increased to 500Mi for customers with large numbers of nodes/jobs/objects.
- Small VM SKUs (such as Standard F1, and A2) that do not have enough RAM to support the Kubernetes control plane components have been removed from the list of available VMs users can use when creating AKS clusters.
- Preview Features
- A bug where Calico pods would not start after a 1.11 to 1.12 upgrade has been resolved.
- When using network policies and Calico, AKS now properly uses Azure CNI for all routing vs defaulting to using Calico as the routing plugin.
- Calico has been updated to v3.5.0
- Component Updates
- AKS-Engine has been updated to v0.33.4
- Bug Fixes
- Resolved an issue preventing some users from leveraging the Live Container Logs feature (due to a 401 unauthorized).
- Resolved an issue where users could get "Failed to get list of supported orchestrators" during upgrade calls.
- Resolved an issue where users using custom subnets/routes/networking with AKS, where IP ranges match the cluster/service or node IPs, could result in an inability to exec, get cluster logs (kubectl get logs) or otherwise pass required health checks.
- An issue where a user running az aks get-credentials while a cluster is in creation would receive an unclear error ('Could not find role name') has been resolved.
This release fixes one AKS product regression and an issue identified with the Azure Jenkins plugin.
- A regression when using ARM templates to issue AKS cluster update(s) (such as configuration changes) that also impacted the Azure Portal has been fixed.
- Users do not need to perform any actions / upgrades for this fix.
- An issue when using the Azure Container Jenkins plugin with AKS has been mitigated.
- This issue caused errors and failures when using the Jenkins plugin - the bug was triggered by a new AKS API version but was related to a latent issue in the plugin's API detection behavior.
- An updated Jenkins plugin has been published: jenkinsci/azure-acs-plugin#16
- https://github.com/jenkinsci/azure-acs-plugin/releases/tag/azure-acs-0.2.4
- Bug fixes
- New kubernetes versions released with multiple CVE mitigations
- Kubernetes 1.12.7
- Kubernetes 1.11.9
- Customers should upgrade to the latest 1.11 and 1.12 releases.
- Kubernetes versions prior to 1.11 must upgrade to 1.11/1.12 for the fix.
- Component updates
- Updated included AKS-Engine version to 0.33.2
- See: https://github.com/Azure/aks-engine/releases/tag/v0.33.4 for details
- The following regions are now GA: South Central US, Korea Central and Korea South
- Bug fixes
- Fixed an issue which prevented Kubernetes addons from being disabled.
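For example, addons can now be disabled with the standard command (the addon name is illustrative):
az aks disable-addons --resource-group myResourceGroup --name myAKSCluster --addons monitoring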
- Behavioral Changes
- AKS will now block subsequent PUT requests (with a status code 409 - Conflict) while an ongoing operation is being performed.
- The Central India region is now GA
- Bug fixes
- AKS will now begin preserving node labels & annotations users apply to clusters during upgrades.
- Note: labels & annotations will not be applied to new nodes added during a scale up operation.
- AKS now properly validates the Service Principal / Azure Active Directory (AAD) credentials
- This prevents invalid, expired or otherwise broken credentials being inserted and causing cluster issues.
- Clusters that enter a failed state due to upgrade issues will now allow users to re-attempt to upgrade or will throw an error message with instructions to the user.
- Fixed an issue with cloud-init and the walinuxagent resulting in failed state VMs/worker nodes
- The tenant-id is now correctly defaulted if not passed in for AAD enabled clusters.
- Behavioral Changes
- AKS is now pre-validating MC_* resource group locks before any CRUD operation, avoiding the cluster entering a Failed state.
- Scale up/down calls now return a correct error ('Bad Request') when users delete underlying virtual machines during the scale operation.
- Performance Improvement: caching is now set to read only for data disks
- The Nvidia driver has been updated to 410.79 for N series cluster configurations
- The default worker node disk size has been increased to 100GB
- This resolves customer reported issues with large numbers (and large sizes) of Docker images triggering out of disk issues and possible workload eviction.
- The Kubernetes controller manager terminated-pod-gc-threshold has been lowered to 6000 (previously 12500)
- This will help system performance for customers running large numbers of Jobs (finished pods)
- The Azure Monitor for Container agent has been updated to the 2019-03 release
- The "View Kubernetes Dashboard" option has been removed from the Azure Portal
- Note that this button did not expose/add functionality, it only linked to the existing instructions for using the Kubernetes dashboard found here: https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard
- The Azure Monitor for containers Agent has been updated to 3.0.0-4 for newly built or upgraded clusters
- The Azure CLI now properly defaults to N-1 for Kubernetes versions; for example, if N is the current latest release (1.12), the CLI will correctly pick 1.11.x. When 1.13 is released, the default will move to 1.12.
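To see which versions are available in a region, or to pin a version explicitly rather than rely on the CLI default (region and version are placeholders):
az aks get-versions --location eastus --output table
az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.12.6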
- Bug Fixes:
- If a user exceeds quota during a scale operation, the Azure CLI will now correctly display a "Quota exceeded" error instead of "deployment not found"
- All AKS CRUD (put) operations now validate and confirm user subscriptions have the needed quota to perform the operation. If a user does not, an error is correctly shown and the operation will not take effect.
- All AKS issued Kubernetes SSL certificates have had weak cipher support removed; all certificates should now pass security audits for BEAST and other vulnerabilities.
- If you are using older clients that do not support TLS 1.2 you will need to upgrade those clients and associated SSL libraries to securely connect.
- Note that only Kubernetes 1.10 and above support the new certificates, additionally existing certificates will not be updated as this would revoke all user access. To get the updated certificates you will need to create a new AKS cluster.
- Clusters that are in the process of upgrading or in failed upgrade state will attempt to re-execute the upgrade or throw an obvious error message.
- The preview feature for Calico/Network Security Policies has been updated to repair a bug where ip-forwarding was not enabled by default.
- The cachingmode: ReadOnly flag was not always being correctly applied to the managed premium storage class; this has been resolved.
- New kubernetes versions released for CVE-2019-1002100 mitigation
- Kubernetes 1.12.6
- Kubernetes 1.11.8
- Customers should upgrade to the latest 1.11 and 1.12 releases.
- Kubernetes versions prior to 1.11 must upgrade to 1.11/1.12 for the fix.
- A security bug with the Kubernetes dashboard and overly permissive service account access has been fixed
- The France Central region is now GA for all customers
- Bug fixes and performance improvements
- A bug in cluster location/region validation has been resolved.
- Previously, passing in a location/region with a trailing unicode non-breaking space (U+00A0) would cause failures on CRUD operations or cause other non-parseable characters to be displayed.
- Fixed a bug where, if the dnsService IP conflicted with the apiServer IP address(es), creates or updates would fail after the fact.
- Addresses are now checked to ensure no overlap or conflict at CRUD operation time.
- The Australia Southeast region is now GA
- Fixed a bug where using the new Service Principal rotation/update command on cluster nodes with the Azure CLI would fail
- Specifically, there was a missing dependency (e.g. jq is missing) on the nodes; all new nodes should now contain the jq utility.
At this time, all regions now have the CVE hotfix release. The simplest way to consume it is to perform a Kubernetes version upgrade, which will cordon, drain, and replace all nodes with a new base image that includes the patched version of Moby. In conjunction with this release, we have enabled new patch versions for Kubernetes 1.11 and 1.12. However, as there are no new patch versions available for Kubernetes versions 1.9 and 1.10, customers are recommended to move forward to a later minor release.
If that is not possible and you must remain on 1.9.x/1.10.x, you can perform the following steps to get the patched runtime:
- Scale up your existing 1.9/1.10 cluster - add an equal number of nodes to your existing worker count.
- After scale-up completes, pick a single node and, using the kubectl command, cordon the old node, drain all traffic from it, and then delete it (a kubectl sketch follows below).
- Repeat step 2 for each worker in your cluster, until only the new nodes remain.
Once this is complete, all nodes should reflect the new Moby runtime version.
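A hedged kubectl sketch of steps 2-3 for a single old node (the node name is a placeholder):
kubectl cordon aks-nodepool1-12345678-0
kubectl drain aks-nodepool1-12345678-0 --ignore-daemonsets --delete-local-data
kubectl delete node aks-nodepool1-12345678-0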
We apologize for the confusion; we recognize that this process is not ideal, and we plan to enable an upgrade strategy that decouples system components like the container runtime from the Kubernetes version.
Note: All newly created 1.9, 1.10, 1.11 and 1.12 clusters will have the new Moby runtime and will not need to be upgraded to get the patch.
Hotfix releases follow an accelerated rollout schedule - this release should be in all regions by 12am PST 2019-02-13
- Kubernetes 1.12.5, 1.11.7
- This release mitigates CVE-2019-5736 for Azure Kubernetes Service (see below).
- Please note that GPU-based nodes do not support the new container runtime yet. We will provide another service update once a fix is available for those nodes.
CVE-2019-5736 notes and mitigation Microsoft has built a new version of the Moby container runtime that includes the OCI update to address this vulnerability. In order to consume the updated container runtime release, you will need to upgrade your Kubernetes cluster.
Any upgrade will suffice as it will ensure that all existing nodes are removed and replaced with new nodes that include the patched runtime. You can see the upgrade paths/versions available to you by running the following command with the Azure CLI:
az aks get-upgrades -n myClusterName -g myResourceGroup
To upgrade to a given version, run the following command:
az aks upgrade -n myClusterName -g myResourceGroup -k <new Kubernetes version>
You can also upgrade from the Azure portal.
When the upgrade is complete, you can verify that you are patched by running the following command:
kubectl get nodes -o wide
If all of the nodes list docker://3.0.4 in the Container Runtime column, you have successfully upgraded to the new release.
This hotfix release fixes the root-cause of several bugs / regressions introduced in the 2019-01-31 release. This release does not add new features, functionality or other improvements.
Hotfix releases follow an accelerated rollout schedule - this release should be in all regions within 24-48 hours barring unforeseen issues
- Fix for the API regression introduced by removing the Get Access Profile API call.
- Note: This call is planned to be deprecated; however, we will issue advance communications and provide the required logging/warnings on the API call to reflect its deprecation status.
- Resolves Issue 809
- Fix for CoreDNS / kube-dns autoscaler conflict(s) leading to both running in the same cluster post-upgrade
- Resolves Issue 812
- Fix to enable the CoreDNS customization / compatibility with kube-dns config maps
- Resolves Issue 811
- Note: customization of Kube-dns via the config map method was technically unsupported, however the AKS team understands the need and has created a compatible work around (formatting of the customizations has changed however). Please see the example/notes below for usage.
With kube-dns, there was an undocumented feature where it supported two config maps allowing users to perform DNS overrides/stub domains, and other customizations. With the conversion to CoreDNS, this functionality was lost - CoreDNS only supports a single config map. With the hotfix above, AKS now has a work around to meet the same level of customization.
You can see the pre-CoreDNS conversion customization instructions here
Here is the equivalent ConfigMap for CoreDNS:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  azurestack.server: |
    azurestack.local:53 {
      errors
      cache 30
      proxy . 172.16.0.4
    }
After creating the config map, you will need to delete the CoreDNS deployment to force-load the new config.
kubectl -n kube-system delete po -l k8s-app=kube-dns
- Kubernetes 1.12.4 GA Release
- With the release of 1.12.4 Kubernetes 1.8 support has been removed, you will need to upgrade to at least 1.9.x
- CoreDNS support GA release
- Conversion from kube-dns to CoreDNS completed, CoreDNS is the default for all new 1.12.4+ AKS clusters.
- If you are using configmaps or other tools for kube-dns modifications, you will need to adjust them to be CoreDNS compatible.
- The CoreDNS add-on is set to reconcile, which means modifications to the deployments will be discarded.
- We have identified two issues with this release that will be resolved in a hotfix beginning rollout this week:
- Kube-dns (pre 1.12) / CoreDNS (1.12+) autoscaler(s) are enabled by default; this should resolve the DNS timeout and other issues related to DNS queries overloading kube-dns.
- In order to get the dns-autoscaler, you must perform an AKS cluster upgrade to a later supported release (clusters prior to 1.12 will continue to get kube-dns, with kube-dns autoscale)
- Users may now self-update/rotate Service Principal credentials using the Azure CLI
- Additional non-user facing stability and reliability service enhancements
- New Features in Preview
- Note: Features in preview are considered beta/non-production ready and unsupported. Please do not enable these features on production AKS clusters.
- Cluster Autoscaler / Virtual machine Scale Sets
- Kubernetes Audit Log
- Network Policies/Network Security Policies
- This means you can now use calico as a valid entry in addition to azure when creating clusters using Advanced Networking
- There is a known issue when using Network Policies/calico that prevents exec into the cluster containers which will be fixed in the next release
- For all product / feature previews including related projects, see this document.