Autoscaling AKS Nodepools cause de-sync and deletion attempts #186
Labels
bug
What happened?
It appears that the KubernetesClusterNodePool resource will attempt to delete itself when you enable autoscaling. I assume this is because nodeCount in the spec drifts from the actual count once the cluster autoscaler scales the pool.
Thankfully lifecycle.prevent_destroy blocks the destruction, but now I have a ton of node pools that are de-synced.
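For context, this is roughly what the provider's generated main.tf.json appears to encode, based on the error in the reproduction below. This is only a sketch rendered as YAML for readability; the resource label and values are hypothetical:

```yaml
# Sketch of the Terraform resource terrajet writes into main.tf.json,
# inferred from the error message below; values are hypothetical.
resource:
  azurerm_kubernetes_cluster_node_pool:
    nodepool:                  # terrajet appends a random suffix to the label (e.g. -2fj2m below)
      enable_auto_scaling: true
      min_count: 1
      max_count: 5
      node_count: 1            # drifts as soon as the autoscaler scales the pool
      lifecycle:
        prevent_destroy: true  # this is what turns the attempted destroy into a plan error
```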
How can we reproduce it?
Create a node pool with autoscaling enabled and let it get autoscaled. "SYNCED" then becomes false and the following event appears in the status:

```
Warning  CannotObserveExternalResource  93s  managed/containerservice.azure.jet.crossplane.io/v1alpha2, kind=kubernetesclusternodepool  cannot run plan: plan failed: Instance cannot be destroyed: Resource azurerm_kubernetes_cluster_node_pool.-2fj2m has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.: File name: main.tf.json
```
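A minimal sketch of the kind of manifest that triggers this. Field names are inferred from the azurerm_kubernetes_cluster_node_pool Terraform schema; the exact v1alpha2 spec in provider-jet-azure 0.8.0 may differ, and all names and values here are hypothetical:

```yaml
apiVersion: containerservice.azure.jet.crossplane.io/v1alpha2
kind: KubernetesClusterNodePool
metadata:
  name: autoscaled-pool            # hypothetical name
spec:
  forProvider:
    kubernetesClusterIdRef:        # assumed reference field pointing at the AKS cluster
      name: example-cluster
    vmSize: Standard_DS2_v2
    enableAutoScaling: true
    minCount: 1
    maxCount: 5
    nodeCount: 1                   # de-syncs once the autoscaler changes the real count
  providerConfigRef:
    name: default
```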
What environment did it happen in?
Crossplane version: 1.6.1
provider-jet-azure version: 0.8.0
Kubernetes distribution: AKS