
Use tofu binary instead of terraform one #2773

Open · wants to merge 17 commits into main

Conversation

@marcelovilla marcelovilla (Member) commented Oct 14, 2024

Reference Issues or PRs

Closes #2762

What does this implement/fix?

Put an x in the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds a feature)
  • Breaking change (fix or feature that would cause existing features not to work as expected)
  • Documentation Update
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no API changes)
  • Build related changes
  • Other (please describe):

Testing

  • Did you test the pull request locally?
  • Did you add new tests?

How to test this PR?

I think there are two important things to test with this PR: (1) deploying from scratch using the OpenTofu binary, and (2) upgrading an existing cluster using the OpenTofu binary. To test:

  1. Deploy Nebari off of this branch, either locally or to any of the supported cloud providers.
  2. Upgrade an existing Nebari deployment using this branch and redeploy.

All resources should be deployed correctly and Nebari should be running as usual afterwards.
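
For convenience, here is a minimal sketch of both scenarios (assuming a local checkout of this branch and an existing nebari-config.yaml; exact flags may vary by provider):

    # Scenario 1: fresh deployment from this branch
    pip install -e .                      # install Nebari from the branch checkout
    nebari deploy -c nebari-config.yaml   # all stages should now run through the tofu binary

    # Scenario 2: upgrade of an existing deployment
    nebari upgrade -c nebari-config.yaml  # rewrite the config for the new version
    nebari deploy -c nebari-config.yaml   # redeploy; resources should come up as before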

Any other comments?

This has been successfully tested in the following scenarios:

@marcelovilla marcelovilla marked this pull request as ready for review November 6, 2024 15:22
@Adam-D-Lewis Adam-D-Lewis (Member) left a comment

Have you tested an upgrade? I think we should test one on an existing deployment just to be sure no issues will arise.

@marcelovilla marcelovilla (Member, Author) commented

> Have you tested an upgrade? I think we should test one on an existing deployment just to be sure no issues will arise.

@Adam-D-Lewis I did test an upgrade on GCP and everything went smoothly. I still think it would be worthwhile to test both an AWS and an Azure upgrade.

@Adam-D-Lewis Adam-D-Lewis self-requested a review November 7, 2024 16:21
@marcelovilla marcelovilla (Member, Author) commented

Here's a passing local upgrade test: https://github.com/nebari-dev/nebari/actions/runs/11740475035/job/32707098771

@dcmcand dcmcand (Contributor) left a comment

Tested locally with clean install and upgrade from 2024.7.1. Does what it says on the tin. 🚀

@marcelovilla marcelovilla (Member, Author) commented

I'll be testing a couple more scenarios before merging.

@Adam-D-Lewis do you think you can test this on an Azure deployment when you have some time?

@Adam-D-Lewis Adam-D-Lewis (Member) commented

> I'll be testing a couple more scenarios before merging.
>
> @Adam-D-Lewis do you think you can test this on an Azure deployment when you have some time?

I did a fresh deployment and no problems came up.

@Adam-D-Lewis Adam-D-Lewis (Member) commented Nov 14, 2024

I deployed 2024.7.1, then ran nebari upgrade and redeployed, and got an error. Here are the logs.

(neb) [balast@nirvana 2024-11-15-azure-opentofu-upgrade]$ nebari deploy -c nebari-config.yaml
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ The following files will be created:                                                                                                                                                              ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ stages/07-kubernetes-services/modules/kubernetes/cephfs-mount/main.tf                                                                                                                             │
│ stages/07-kubernetes-services/modules/kubernetes/cephfs-mount/outputs.tf                                                                                                                          │
│ stages/07-kubernetes-services/modules/kubernetes/cephfs-mount/variables.tf                                                                                                                        │
│ stages/07-kubernetes-services/modules/kubernetes/services/argo-workflows/get_cert.py                                                                                                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/argo-workflows/ssl-issue.py                                                                                                             │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/modules/modules.json                                                                                             │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/helm/2.14.0/linux_amd64/LICENSE.txt                                    │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/helm/2.14.0/linux_amd64/terraform-provider-helm_v2.14.0_x5             │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.31.0/linux_amd64/LICENSE.txt                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.31.0/linux_amd64/terraform-provider-kubernetes_v2.31.0_x5 │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/null/3.2.2/linux_amd64/terraform-provider-null_v3.2.2_x5               │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/random/3.6.2/linux_amd64/LICENSE.txt                                   │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/hashicorp/random/3.6.2/linux_amd64/terraform-provider-random_v3.6.2_x5           │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/mrparkers/keycloak/3.7.0/linux_amd64/CHANGELOG.md                                │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/mrparkers/keycloak/3.7.0/linux_amd64/LICENSE                                     │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/mrparkers/keycloak/3.7.0/linux_amd64/README.md                                   │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/.terraform/providers/registry.terraform.io/mrparkers/keycloak/3.7.0/linux_amd64/terraform-provider-keycloak_v3.7.0          │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/shared-pvc.tf                                                                                                               │
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/controller.tf                                                                                                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/rook-ceph/cluster-values.yaml.tftpl                                                                                                     │
│ stages/07-kubernetes-services/modules/kubernetes/services/rook-ceph/main.tf                                                                                                                       │
│ stages/07-kubernetes-services/modules/kubernetes/services/rook-ceph/operator-values.yaml                                                                                                          │
│ stages/07-kubernetes-services/modules/kubernetes/services/rook-ceph/variables.tf                                                                                                                  │
│ stages/07-kubernetes-services/modules/kubernetes/services/rook-ceph/versions.tf                                                                                                                   │
│ stages/07-kubernetes-services/rook-ceph.tf                                                                                                                                                        │
│ stages/10-kubernetes-kuberhealthy/crds/comcast.github.io_khchecks.yaml                                                                                                                            │
│ stages/10-kubernetes-kuberhealthy/crds/comcast.github.io_khjobs.yaml                                                                                                                              │
│ stages/10-kubernetes-kuberhealthy/crds/comcast.github.io_khstates.yaml                                                                                                                            │
│ stages/10-kubernetes-kuberhealthy/manifests/apps_v1_deployment_kuberhealthy.yaml                                                                                                                  │
│ stages/10-kubernetes-kuberhealthy/manifests/comcast.github.io_v1_kuberhealthycheck_daemonset.yaml                                                                                                 │
│ stages/10-kubernetes-kuberhealthy/manifests/comcast.github.io_v1_kuberhealthycheck_deployment.yaml                                                                                                │
│ stages/10-kubernetes-kuberhealthy/manifests/comcast.github.io_v1_kuberhealthycheck_dns-status-internal.yaml                                                                                       │
│ stages/10-kubernetes-kuberhealthy/manifests/monitoring.coreos.com_v1_servicemonitor_kuberhealthy.yaml                                                                                             │
│ stages/10-kubernetes-kuberhealthy/manifests/policy_v1_poddisruptionbudget_kuberhealthy-pdb.yaml                                                                                                   │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrole_dns-internal-service-cr.yaml                                                                                 │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrole_kuberhealthy-daemonset-khcheck.yaml                                                                          │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrole_kuberhealthy.yaml                                                                                            │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrolebinding_dns-internal-service-crb.yaml                                                                         │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrolebinding_kuberhealthy-daemonset-khcheck.yaml                                                                   │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_clusterrolebinding_kuberhealthy.yaml                                                                                     │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_role_deployment-service-role.yaml                                                                                        │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_role_ds-admin.yaml                                                                                                       │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_rolebinding_daemonset-khcheck.yaml                                                                                       │
│ stages/10-kubernetes-kuberhealthy/manifests/rbac.authorization.k8s.io_v1_rolebinding_deployment-check-rb.yaml                                                                                     │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_configmap_kuberhealthy.yaml                                                                                                                        │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_service_kuberhealthy.yaml                                                                                                                          │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_serviceaccount_daemonset-khcheck.yaml                                                                                                              │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_serviceaccount_deployment-sa.yaml                                                                                                                  │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_serviceaccount_dns-internal-sa.yaml                                                                                                                │
│ stages/10-kubernetes-kuberhealthy/manifests/v1_serviceaccount_kuberhealthy.yaml                                                                                                                   │
│ stages/11-kubernetes-kuberhealthy-healthchecks/manifests/comcast.github.io_v1_kuberhealthycheck_conda-store-http-check.yaml                                                                       │
│ stages/11-kubernetes-kuberhealthy-healthchecks/manifests/comcast.github.io_v1_kuberhealthycheck_jupyterhub-http-check.yaml                                                                        │
│ stages/11-kubernetes-kuberhealthy-healthchecks/manifests/comcast.github.io_v1_kuberhealthycheck_keycloak-http-check.yaml                                                                          │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ The following files will be updated:                                                                           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ stages/01-terraform-state/azure/_nebari.tf.json                                                                │
│ stages/02-infrastructure/azure/modules/kubernetes/main.tf                                                      │
│ stages/03-kubernetes-initialize/_nebari.tf.json                                                                │
│ stages/04-kubernetes-ingress/_nebari.tf.json                                                                   │
│ stages/05-kubernetes-keycloak/_nebari.tf.json                                                                  │
│ stages/07-kubernetes-services/_nebari.tf.json                                                                  │
│ stages/07-kubernetes-services/conda-store.tf                                                                   │
│ stages/07-kubernetes-services/dask_gateway.tf                                                                  │
│ stages/07-kubernetes-services/jupyterhub.tf                                                                    │
│ stages/07-kubernetes-services/jupyterhub_ssh.tf                                                                │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/main.tf                                             │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/outputs.tf                                          │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/variables.tf                                        │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-server/main.tf                                            │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-server/variables.tf                                       │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/output.tf                                │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/variables.tf                             │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/worker.tf                                │
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/gateway.tf                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/variables.tf                            │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub-ssh/sftp.tf                               │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub-ssh/variables.tf                          │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py            │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/03-profiles.py           │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/04-auth.py               │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterlab/overrides.json           │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/main.tf                                   │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/variables.tf                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/dashboards/Main/jupyterhub_dashboard.json │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/dashboards/Main/usage_report.json         │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/main.tf                                   │
│ stages/07-kubernetes-services/variables.tf                                                                     │
│ stages/08-nebari-tf-extensions/_nebari.tf.json                                                                 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ The following files will be deleted:                                                ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/controler.tf │
└─────────────────────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ The following files are untracked (only exist in output directory):                                            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ stages/01-terraform-state/azure/_nebari.tf.json                                                                │
│ stages/02-infrastructure/azure/modules/kubernetes/main.tf                                                      │
│ stages/03-kubernetes-initialize/_nebari.tf.json                                                                │
│ stages/04-kubernetes-ingress/_nebari.tf.json                                                                   │
│ stages/05-kubernetes-keycloak/_nebari.tf.json                                                                  │
│ stages/07-kubernetes-services/_nebari.tf.json                                                                  │
│ stages/07-kubernetes-services/conda-store.tf                                                                   │
│ stages/07-kubernetes-services/dask_gateway.tf                                                                  │
│ stages/07-kubernetes-services/jupyterhub.tf                                                                    │
│ stages/07-kubernetes-services/jupyterhub_ssh.tf                                                                │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/main.tf                                             │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/outputs.tf                                          │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-mount/variables.tf                                        │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-server/main.tf                                            │
│ stages/07-kubernetes-services/modules/kubernetes/nfs-server/variables.tf                                       │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/output.tf                                │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/variables.tf                             │
│ stages/07-kubernetes-services/modules/kubernetes/services/conda-store/worker.tf                                │
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/gateway.tf                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/dask-gateway/variables.tf                            │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub-ssh/sftp.tf                               │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub-ssh/variables.tf                          │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py            │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/03-profiles.py           │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterhub/04-auth.py               │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/files/jupyterlab/overrides.json           │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/main.tf                                   │
│ stages/07-kubernetes-services/modules/kubernetes/services/jupyterhub/variables.tf                              │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/dashboards/Main/jupyterhub_dashboard.json │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/dashboards/Main/usage_report.json         │
│ stages/07-kubernetes-services/modules/kubernetes/services/monitoring/main.tf                                   │
│ stages/07-kubernetes-services/variables.tf                                                                     │
│ stages/08-nebari-tf-extensions/_nebari.tf.json                                                                 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
[tofu]: 
[tofu]: Initializing the backend...
[tofu]: Upgrading modules...
[tofu]: - terraform-state in modules/terraform-state
[tofu]: 
[tofu]: Initializing provider plugins...
[tofu]: - terraform.io/builtin/terraform is built in to OpenTofu
[tofu]: - Finding hashicorp/azurerm versions matching "3.97.1"...
[tofu]: - Installing hashicorp/azurerm v3.97.1...
[tofu]: - Installed hashicorp/azurerm v3.97.1 (signed, key ID 0C0AF313E5FD9F80)
[tofu]: 
[tofu]: Providers are signed by their developers.
[tofu]: If you'd like to know more about provider signing, you can read about it here:
[tofu]: https://opentofu.org/docs/cli/plugins/signing/
[tofu]: 
[tofu]: OpenTofu has made some changes to the provider dependency selections recorded
[tofu]: in the .terraform.lock.hcl file. Review those changes and commit them to your
[tofu]: version control system if they represent changes you intended to make.
[tofu]: 
[tofu]: OpenTofu has been successfully initialized!
[tofu]: 
[tofu]: You may now begin working with OpenTofu. Try running "tofu plan" to see
[tofu]: any changes that are required for your infrastructure. All OpenTofu commands
[tofu]: should now work.
[tofu]: 
[tofu]: If you ever set or change modules or backend configuration for OpenTofu,
[tofu]: rerun this command to reinitialize your working directory. If you forget, other
[tofu]: commands will detect it and remind you to do so if necessary.
['import', '-var-file=/tmp/tmp3hvkxp8f.tfvars.json', 'module.terraform-state.azurerm_resource_group.terraform-state-resource-group', '/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state']
[tofu]: module.terraform-state.azurerm_resource_group.terraform-state-resource-group: Importing from ID "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state"...
[tofu]: module.terraform-state.azurerm_resource_group.terraform-state-resource-group: Import prepared!
[tofu]:   Prepared azurerm_resource_group for import
[tofu]: ╷
[tofu]: │ Error: Resource already managed by OpenTofu
[tofu]: │ 
[tofu]: │ OpenTofu is already managing a remote object for
[tofu]: │ module.terraform-state.azurerm_resource_group.terraform-state-resource-group.
[tofu]: │ To import to this address you must first remove the existing object from
[tofu]: │ the state.
[tofu]: ╵
[tofu]: 
['import', '-var-file=/tmp/tmp3hvkxp8f.tfvars.json', 'module.terraform-state.azurerm_storage_account.terraform-state-storage-account', '/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state/providers/Microsoft.Storage/storageAccounts/nebari11prodxhz2']
[tofu]: module.terraform-state.azurerm_storage_account.terraform-state-storage-account: Importing from ID "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state/providers/Microsoft.Storage/storageAccounts/nebari11prodxhz2"...
[tofu]: module.terraform-state.azurerm_storage_account.terraform-state-storage-account: Import prepared!
[tofu]:   Prepared azurerm_storage_account for import
[tofu]: ╷
[tofu]: │ Error: Resource already managed by OpenTofu
[tofu]: │ 
[tofu]: │ OpenTofu is already managing a remote object for
[tofu]: │ module.terraform-state.azurerm_storage_account.terraform-state-storage-account.
[tofu]: │ To import to this address you must first remove the existing object from
[tofu]: │ the state.
[tofu]: ╵
[tofu]: 
['import', '-var-file=/tmp/tmp3hvkxp8f.tfvars.json', 'module.terraform-state.azurerm_storage_container.storage_container', 'https://nebari11prodxhz2.blob.core.windows.net/nebari11-prod-state']
[tofu]: module.terraform-state.azurerm_storage_container.storage_container: Importing from ID "https://nebari11prodxhz2.blob.core.windows.net/nebari11-prod-state"...
[tofu]: module.terraform-state.azurerm_storage_container.storage_container: Import prepared!
[tofu]:   Prepared azurerm_storage_container for import
[tofu]: ╷
[tofu]: │ Error: Resource already managed by OpenTofu
[tofu]: │ 
[tofu]: │ OpenTofu is already managing a remote object for
[tofu]: │ module.terraform-state.azurerm_storage_container.storage_container. To
[tofu]: │ import to this address you must first remove the existing object from the
[tofu]: │ state.
[tofu]: ╵
[tofu]: 
[tofu]: module.terraform-state.azurerm_resource_group.terraform-state-resource-group: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state]
[tofu]: module.terraform-state.azurerm_storage_account.terraform-state-storage-account: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod-state/providers/Microsoft.Storage/storageAccounts/nebari11prodxhz2]
[tofu]: module.terraform-state.azurerm_storage_container.storage_container: Refreshing state... [id=https://nebari11prodxhz2.blob.core.windows.net/nebari11-prod-state]
[tofu]: 
[tofu]: OpenTofu used the selected providers to generate the following execution
[tofu]: plan. Resource actions are indicated with the following symbols:
[tofu]:   + create
[tofu]: 
[tofu]: OpenTofu will perform the following actions:
[tofu]: 
[tofu]:   # terraform_data.nebari_config will be created
[tofu]:   + resource "terraform_data" "nebari_config" {
[tofu]:       + id     = (known after apply)
[tofu]:       + input  = {
[tofu]:           + amazon_web_services    = null
[tofu]:           + argo_workflows         = {
[tofu]:               + enabled                    = true
[tofu]:               + nebari_workflow_controller = {
[tofu]:                   + enabled   = true
[tofu]:                   + image_tag = "2024.9.1"
[tofu]:                 }
[tofu]:               + overrides                  = {}
[tofu]:             }
[tofu]:           + azure                  = {
[tofu]:               + kubernetes_version        = "1.29.2"
[tofu]:               + max_pods                  = null
[tofu]:               + network_profile           = null
[tofu]:               + node_groups               = {
[tofu]:                   + general = {
[tofu]:                       + instance  = "Standard_D8_v3"
[tofu]:                       + max_nodes = 1
[tofu]:                       + min_nodes = 1
[tofu]:                     }
[tofu]:                   + user    = {
[tofu]:                       + instance  = "Standard_D4_v3"
[tofu]:                       + max_nodes = 5
[tofu]:                       + min_nodes = 0
[tofu]:                     }
[tofu]:                   + worker  = {
[tofu]:                       + instance  = "Standard_D4_v3"
[tofu]:                       + max_nodes = 5
[tofu]:                       + min_nodes = 0
[tofu]:                     }
[tofu]:                 }
[tofu]:               + private_cluster_enabled   = false
[tofu]:               + region                    = "centralus"
[tofu]:               + resource_group_name       = null
[tofu]:               + storage_account_postfix   = "xhz2"
[tofu]:               + tags                      = {}
[tofu]:               + vnet_subnet_id            = null
[tofu]:               + workload_identity_enabled = true
[tofu]:             }
[tofu]:           + ceph                   = {
[tofu]:               + storage_class_name = null
[tofu]:             }
[tofu]:           + certificate            = {
[tofu]:               + acme_email  = "[email protected]"
[tofu]:               + acme_server = "https://acme-v02.api.letsencrypt.org/directory"
[tofu]:               + secret_name = null
[tofu]:               + type        = "lets-encrypt"
[tofu]:             }
[tofu]:           + ci_cd                  = {
[tofu]:               + after_script  = []
[tofu]:               + before_script = []
[tofu]:               + branch        = "main"
[tofu]:               + commit_render = true
[tofu]:               + type          = "none"
[tofu]:             }
[tofu]:           + conda_store            = {
[tofu]:               + default_namespace = "nebari-git"
[tofu]:               + extra_config      = ""
[tofu]:               + extra_settings    = {}
[tofu]:               + image             = "quansight/conda-store-server"
[tofu]:               + image_tag         = "2024.3.1"
[tofu]:               + object_storage    = "200Gi"
[tofu]:             }
[tofu]:           + default_images         = {
[tofu]:               + dask_worker = "quay.io/nebari/nebari-dask-worker:2024.9.1"
[tofu]:               + jupyterhub  = "quay.io/nebari/nebari-jupyterhub:2024.9.1"
[tofu]:               + jupyterlab  = "quay.io/nebari/nebari-jupyterlab:2024.9.1"
[tofu]:             }
[tofu]:           + digital_ocean          = null
[tofu]:           + dns                    = {
[tofu]:               + auto_provision = true
[tofu]:               + provider       = "cloudflare"
[tofu]:             }
[tofu]:           + domain                 = "adl-azure-tofu2.nebari.dev"
[tofu]:           + environments           = {
[tofu]:               + "environment-dashboard.yaml" = {
[tofu]:                   + channels     = [
[tofu]:                       + "conda-forge",
[tofu]:                     ]
[tofu]:                   + dependencies = [
[tofu]:                       + "python==3.11.6",
[tofu]:                       + "cufflinks-py==0.17.3",
[tofu]:                       + "dash==2.14.1",
[tofu]:                       + "geopandas==0.14.1",
[tofu]:                       + "geopy==2.4.0",
[tofu]:                       + "geoviews==1.11.0",
[tofu]:                       + "gunicorn==21.2.0",
[tofu]:                       + "holoviews==1.18.1",
[tofu]:                       + "ipykernel==6.26.0",
[tofu]:                       + "ipywidgets==8.1.1",
[tofu]:                       + "jupyter==1.0.0",
[tofu]:                       + "jupyter_bokeh==3.0.7",
[tofu]:                       + "matplotlib==3.8.1",
[tofu]:                       + "nebari-dask==2024.9.1",
[tofu]:                       + "nodejs=20.8.1",
[tofu]:                       + "numpy==1.26.0",
[tofu]:                       + "openpyxl==3.1.2",
[tofu]:                       + "pandas==2.1.3",
[tofu]:                       + "panel==1.3.1",
[tofu]:                       + "param==2.0.1",
[tofu]:                       + "plotly==5.18.0",
[tofu]:                       + "python-graphviz==0.20.1",
[tofu]:                       + "rich==13.6.0",
[tofu]:                       + "streamlit==1.28.1",
[tofu]:                       + "sympy==1.12",
[tofu]:                       + "voila==0.5.5",
[tofu]:                       + "xarray==2023.10.1",
[tofu]:                       + "pip==23.3.1",
[tofu]:                       + {
[tofu]:                           + pip = [
[tofu]:                               + "streamlit-image-comparison==0.0.4",
[tofu]:                               + "noaa-coops==0.1.9",
[tofu]:                               + "dash_core_components==2.0.0",
[tofu]:                               + "dash_html_components==2.0.0",
[tofu]:                             ]
[tofu]:                         },
[tofu]:                     ]
[tofu]:                   + name         = "dashboard"
[tofu]:                 }
[tofu]:               + "environment-dask.yaml"      = {
[tofu]:                   + channels     = [
[tofu]:                       + "conda-forge",
[tofu]:                     ]
[tofu]:                   + dependencies = [
[tofu]:                       + "python==3.11.6",
[tofu]:                       + "ipykernel==6.26.0",
[tofu]:                       + "ipywidgets==8.1.1",
[tofu]:                       + "nebari-dask==2024.9.1",
[tofu]:                       + "python-graphviz==0.20.1",
[tofu]:                       + "pyarrow==14.0.1",
[tofu]:                       + "s3fs==2023.10.0",
[tofu]:                       + "gcsfs==2023.10.0",
[tofu]:                       + "numpy=1.26.0",
[tofu]:                       + "numba=0.58.1",
[tofu]:                       + "pandas=2.1.3",
[tofu]:                       + "xarray==2023.10.1",
[tofu]:                     ]
[tofu]:                   + name         = "dask"
[tofu]:                 }
[tofu]:             }
[tofu]:           + existing               = null
[tofu]:           + external_container_reg = {
[tofu]:               + access_key_id     = null
[tofu]:               + enabled           = false
[tofu]:               + extcr_account     = null
[tofu]:               + extcr_region      = null
[tofu]:               + secret_access_key = null
[tofu]:             }
[tofu]:           + google_cloud_platform  = null
[tofu]:           + helm_extensions        = []
[tofu]:           + ingress                = {
[tofu]:               + terraform_overrides = {}
[tofu]:             }
[tofu]:           + jhub_apps              = {
[tofu]:               + enabled   = false
[tofu]:               + overrides = {}
[tofu]:             }
[tofu]:           + jupyterhub             = {
[tofu]:               + overrides = {
[tofu]:                   + singleuser = {
[tofu]:                       + extraEnv = {
[tofu]:                           + MLFLOW_TRACKING_URI = "http://nebari10-mlflow-tracking.prod.svc:5000"
[tofu]:                         }
[tofu]:                     }
[tofu]:                 }
[tofu]:             }
[tofu]:           + jupyterlab             = {
[tofu]:               + default_settings     = {}
[tofu]:               + gallery_settings     = {
[tofu]:                   + destination                   = "examples"
[tofu]:                   + exhibits                      = []
[tofu]:                   + hide_gallery_without_exhibits = true
[tofu]:                   + title                         = "Examples"
[tofu]:                 }
[tofu]:               + idle_culler          = {
[tofu]:                   + kernel_cull_busy                    = false
[tofu]:                   + kernel_cull_connected               = true
[tofu]:                   + kernel_cull_idle_timeout            = 15
[tofu]:                   + kernel_cull_interval                = 5
[tofu]:                   + server_shutdown_no_activity_timeout = 15
[tofu]:                   + terminal_cull_inactive_timeout      = 15
[tofu]:                   + terminal_cull_interval              = 5
[tofu]:                 }
[tofu]:               + initial_repositories = []
[tofu]:               + preferred_dir        = null
[tofu]:             }
[tofu]:           + local                  = null
[tofu]:           + monitoring             = {
[tofu]:               + enabled       = true
[tofu]:               + healthchecks  = {
[tofu]:                   + enabled                   = false
[tofu]:                   + kuberhealthy_helm_version = "100"
[tofu]:                 }
[tofu]:               + minio_enabled = true
[tofu]:               + overrides     = {
[tofu]:                   + loki     = {}
[tofu]:                   + minio    = {}
[tofu]:                   + promtail = {}
[tofu]:                 }
[tofu]:             }
[tofu]:           + namespace              = "prod"
[tofu]:           + nebari_version         = "2024.9.2"
[tofu]:           + prevent_deploy         = false
[tofu]:           + profiles               = {
[tofu]:               + dask_worker = {
[tofu]:                   + "Medium Worker" = {
[tofu]:                       + worker_cores        = 3
[tofu]:                       + worker_cores_limit  = 4
[tofu]:                       + worker_memory       = "10G"
[tofu]:                       + worker_memory_limit = "16G"
[tofu]:                       + worker_threads      = 4
[tofu]:                     }
[tofu]:                   + "Small Worker"  = {
[tofu]:                       + worker_cores        = 1.5
[tofu]:                       + worker_cores_limit  = 2
[tofu]:                       + worker_memory       = "5G"
[tofu]:                       + worker_memory_limit = "8G"
[tofu]:                       + worker_threads      = 2
[tofu]:                     }
[tofu]:                 }
[tofu]:               + jupyterlab  = [
[tofu]:                   + {
[tofu]:                       + access               = "all"
[tofu]:                       + default              = true
[tofu]:                       + description          = "Stable environment with 2 cpu / 8 GB ram"
[tofu]:                       + display_name         = "Small Instance"
[tofu]:                       + groups               = null
[tofu]:                       + kubespawner_override = {
[tofu]:                           + cpu_guarantee = 1.5
[tofu]:                           + cpu_limit     = 2
[tofu]:                           + mem_guarantee = "5G"
[tofu]:                           + mem_limit     = "8G"
[tofu]:                         }
[tofu]:                       + users                = null
[tofu]:                     },
[tofu]:                   + {
[tofu]:                       + access               = "all"
[tofu]:                       + default              = false
[tofu]:                       + description          = "Stable environment with 4 cpu / 16 GB ram"
[tofu]:                       + display_name         = "Medium Instance"
[tofu]:                       + groups               = null
[tofu]:                       + kubespawner_override = {
[tofu]:                           + cpu_guarantee = 3
[tofu]:                           + cpu_limit     = 4
[tofu]:                           + mem_guarantee = "10G"
[tofu]:                           + mem_limit     = "16G"
[tofu]:                         }
[tofu]:                       + users                = null
[tofu]:                     },
[tofu]:                 ]
[tofu]:             }
[tofu]:           + project_name           = "nebari11"
[tofu]:           + provider               = "azure"
[tofu]:           + security               = {
[tofu]:               + authentication     = {
[tofu]:                   + type = "password"
[tofu]:                 }
[tofu]:               + keycloak           = {
[tofu]:                   + initial_root_password = "broot"
[tofu]:                   + overrides             = {}
[tofu]:                   + realm_display_name    = "Nebari"
[tofu]:                 }
[tofu]:               + shared_users_group = true
[tofu]:             }
[tofu]:           + storage                = {
[tofu]:               + conda_store       = "200Gi"
[tofu]:               + shared_filesystem = "200Gi"
[tofu]:               + type              = "nfs"
[tofu]:             }
[tofu]:           + telemetry              = {
[tofu]:               + jupyterlab_pioneer = {
[tofu]:                   + enabled    = false
[tofu]:                   + log_format = null
[tofu]:                 }
[tofu]:             }
[tofu]:           + terraform_state        = {
[tofu]:               + backend = null
[tofu]:               + config  = {}
[tofu]:               + type    = "remote"
[tofu]:             }
[tofu]:           + tf_extensions          = []
[tofu]:           + theme                  = {
[tofu]:               + jupyterhub = {
[tofu]:                   + accent_color         = "#32C574"
[tofu]:                   + accent_color_dark    = "#32C574"
[tofu]:                   + display_version      = "True"
[tofu]:                   + favicon              = "https://raw.githubusercontent.com/nebari-dev/nebari-design/main/symbol/favicon.ico"
[tofu]:                   + h1_color             = "#652e8e"
[tofu]:                   + h2_color             = "#652e8e"
[tofu]:                   + hub_subtitle         = "Your open source data science platform, hosted on Azure"
[tofu]:                   + hub_title            = "Nebari - nebri7"
[tofu]:                   + logo                 = "https://raw.githubusercontent.com/nebari-dev/nebari-design/main/logo-mark/horizontal/Nebari-Logo-Horizontal-Lockup-White-text.svg"
[tofu]:                   + navbar_color         = "#1c1d26"
[tofu]:                   + navbar_hover_color   = "#db96f3"
[tofu]:                   + navbar_text_color    = "#f1f1f6"
[tofu]:                   + primary_color        = "#4f4173"
[tofu]:                   + primary_color_dark   = "#4f4173"
[tofu]:                   + secondary_color      = "#957da6"
[tofu]:                   + secondary_color_dark = "#957da6"
[tofu]:                   + text_color           = "#111111"
[tofu]:                   + version              = "v2024.9.2.dev132+g8e59c242"
[tofu]:                   + welcome              = "Welcome! Learn about Nebari's features and configurations in <a href=\"https://www.nebari.dev/docs/welcome\">the documentation</a>. If you have any questions or feedback, reach the team on <a href=\"https://www.nebari.dev/docs/community#getting-support\">Nebari's support forums</a>."
[tofu]:                 }
[tofu]:             }
[tofu]:         }
[tofu]:       + output = (known after apply)
[tofu]:     }
[tofu]: 
[tofu]: Plan: 1 to add, 0 to change, 0 to destroy.
[tofu]: terraform_data.nebari_config: Creating...
[tofu]: terraform_data.nebari_config: Creation complete after 0s [id=0ca8c156-2a6f-e4ee-1a39-901d621fc8d1]
[tofu]: 
[tofu]: Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
[tofu]: 
[tofu]: Initializing the backend...
[tofu]: Upgrading modules...
[tofu]: - kubernetes in modules/kubernetes
[tofu]: - registry in modules/registry
[tofu]: 
[tofu]: Initializing provider plugins...
[tofu]: - Finding hashicorp/azurerm versions matching "3.97.1"...
[tofu]: - Finding latest version of hashicorp/local...
[tofu]: - Installing hashicorp/azurerm v3.97.1...
[tofu]: - Installed hashicorp/azurerm v3.97.1 (signed, key ID 0C0AF313E5FD9F80)
[tofu]: - Installing hashicorp/local v2.5.2...
[tofu]: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
[tofu]: 
[tofu]: Providers are signed by their developers.
[tofu]: If you'd like to know more about provider signing, you can read about it here:
[tofu]: https://opentofu.org/docs/cli/plugins/signing/
[tofu]: 
[tofu]: OpenTofu has made some changes to the provider dependency selections recorded
[tofu]: in the .terraform.lock.hcl file. Review those changes and commit them to your
[tofu]: version control system if they represent changes you intended to make.
[tofu]: 
[tofu]: OpenTofu has been successfully initialized!
[tofu]: 
[tofu]: You may now begin working with OpenTofu. Try running "tofu plan" to see
[tofu]: any changes that are required for your infrastructure. All OpenTofu commands
[tofu]: should now work.
[tofu]: 
[tofu]: If you ever set or change modules or backend configuration for OpenTofu,
[tofu]: rerun this command to reinitialize your working directory. If you forget, other
[tofu]: commands will detect it and remind you to do so if necessary.
[tofu]: azurerm_resource_group.resource_group: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/worker]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/user]
[tofu]: module.registry.azurerm_container_registry.container_registry: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerRegistry/registries/nebari11prod]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster.main: Refreshing state... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod]
[tofu]: local_file.kubeconfig[0]: Refreshing state... [id=8c1442ae9abf718425017ba8c161d8aaf23bfe67]
[tofu]: 
[tofu]: OpenTofu used the selected providers to generate the following execution
[tofu]: plan. Resource actions are indicated with the following symbols:
[tofu]:   + create
[tofu]:   - destroy
[tofu]: 
[tofu]: OpenTofu will perform the following actions:
[tofu]: 
[tofu]:   # module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["1"] will be created
[tofu]:   + resource "azurerm_kubernetes_cluster_node_pool" "node_group" {
[tofu]:       + enable_auto_scaling   = true
[tofu]:       + id                    = (known after apply)
[tofu]:       + kubelet_disk_type     = (known after apply)
[tofu]:       + kubernetes_cluster_id = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod"
[tofu]:       + max_count             = 5
[tofu]:       + max_pods              = (known after apply)
[tofu]:       + min_count             = 0
[tofu]:       + mode                  = "User"
[tofu]:       + name                  = "user"
[tofu]:       + node_count            = (known after apply)
[tofu]:       + node_labels           = {
[tofu]:           + "azure-node-pool" = "user"
[tofu]:         }
[tofu]:       + orchestrator_version  = "1.29.2"
[tofu]:       + os_disk_size_gb       = (known after apply)
[tofu]:       + os_disk_type          = "Managed"
[tofu]:       + os_sku                = (known after apply)
[tofu]:       + os_type               = "Linux"
[tofu]:       + priority              = "Regular"
[tofu]:       + scale_down_mode       = "Delete"
[tofu]:       + spot_max_price        = -1
[tofu]:       + ultra_ssd_enabled     = false
[tofu]:       + vm_size               = "Standard_D4_v3"
[tofu]:     }
[tofu]: 
[tofu]:   # module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["2"] will be created
[tofu]:   + resource "azurerm_kubernetes_cluster_node_pool" "node_group" {
[tofu]:       + enable_auto_scaling   = true
[tofu]:       + id                    = (known after apply)
[tofu]:       + kubelet_disk_type     = (known after apply)
[tofu]:       + kubernetes_cluster_id = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod"
[tofu]:       + max_count             = 5
[tofu]:       + max_pods              = (known after apply)
[tofu]:       + min_count             = 0
[tofu]:       + mode                  = "User"
[tofu]:       + name                  = "worker"
[tofu]:       + node_count            = (known after apply)
[tofu]:       + node_labels           = {
[tofu]:           + "azure-node-pool" = "worker"
[tofu]:         }
[tofu]:       + orchestrator_version  = "1.29.2"
[tofu]:       + os_disk_size_gb       = (known after apply)
[tofu]:       + os_disk_type          = "Managed"
[tofu]:       + os_sku                = (known after apply)
[tofu]:       + os_type               = "Linux"
[tofu]:       + priority              = "Regular"
[tofu]:       + scale_down_mode       = "Delete"
[tofu]:       + spot_max_price        = -1
[tofu]:       + ultra_ssd_enabled     = false
[tofu]:       + vm_size               = "Standard_D4_v3"
[tofu]:     }
[tofu]: 
[tofu]:   # module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group will be destroyed
[tofu]:   # (because azurerm_kubernetes_cluster_node_pool.user_node_group is not in configuration)
[tofu]:   - resource "azurerm_kubernetes_cluster_node_pool" "user_node_group" {
[tofu]:       - custom_ca_trust_enabled = false -> null
[tofu]:       - enable_auto_scaling     = true -> null
[tofu]:       - enable_host_encryption  = false -> null
[tofu]:       - enable_node_public_ip   = false -> null
[tofu]:       - fips_enabled            = false -> null
[tofu]:       - id                      = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/user" -> null
[tofu]:       - kubelet_disk_type       = "OS" -> null
[tofu]:       - kubernetes_cluster_id   = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod" -> null
[tofu]:       - max_count               = 5 -> null
[tofu]:       - max_pods                = 110 -> null
[tofu]:       - min_count               = 0 -> null
[tofu]:       - mode                    = "User" -> null
[tofu]:       - name                    = "user" -> null
[tofu]:       - node_count              = 0 -> null
[tofu]:       - node_labels             = {
[tofu]:           - "azure-node-pool" = "user"
[tofu]:         } -> null
[tofu]:       - node_taints             = [] -> null
[tofu]:       - orchestrator_version    = "1.29.2" -> null
[tofu]:       - os_disk_size_gb         = 128 -> null
[tofu]:       - os_disk_type            = "Managed" -> null
[tofu]:       - os_sku                  = "Ubuntu" -> null
[tofu]:       - os_type                 = "Linux" -> null
[tofu]:       - priority                = "Regular" -> null
[tofu]:       - scale_down_mode         = "Delete" -> null
[tofu]:       - spot_max_price          = -1 -> null
[tofu]:       - tags                    = {} -> null
[tofu]:       - ultra_ssd_enabled       = false -> null
[tofu]:       - vm_size                 = "Standard_D4_v3" -> null
[tofu]:       - zones                   = [] -> null
[tofu]:     }
[tofu]: 
[tofu]:   # module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group will be destroyed
[tofu]:   # (because azurerm_kubernetes_cluster_node_pool.worker_node_group is not in configuration)
[tofu]:   - resource "azurerm_kubernetes_cluster_node_pool" "worker_node_group" {
[tofu]:       - custom_ca_trust_enabled = false -> null
[tofu]:       - enable_auto_scaling     = true -> null
[tofu]:       - enable_host_encryption  = false -> null
[tofu]:       - enable_node_public_ip   = false -> null
[tofu]:       - fips_enabled            = false -> null
[tofu]:       - id                      = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/worker" -> null
[tofu]:       - kubelet_disk_type       = "OS" -> null
[tofu]:       - kubernetes_cluster_id   = "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod" -> null
[tofu]:       - max_count               = 5 -> null
[tofu]:       - max_pods                = 110 -> null
[tofu]:       - min_count               = 0 -> null
[tofu]:       - mode                    = "User" -> null
[tofu]:       - name                    = "worker" -> null
[tofu]:       - node_count              = 0 -> null
[tofu]:       - node_labels             = {
[tofu]:           - "azure-node-pool" = "worker"
[tofu]:         } -> null
[tofu]:       - node_taints             = [] -> null
[tofu]:       - orchestrator_version    = "1.29.2" -> null
[tofu]:       - os_disk_size_gb         = 128 -> null
[tofu]:       - os_disk_type            = "Managed" -> null
[tofu]:       - os_sku                  = "Ubuntu" -> null
[tofu]:       - os_type                 = "Linux" -> null
[tofu]:       - priority                = "Regular" -> null
[tofu]:       - scale_down_mode         = "Delete" -> null
[tofu]:       - spot_max_price          = -1 -> null
[tofu]:       - tags                    = {} -> null
[tofu]:       - ultra_ssd_enabled       = false -> null
[tofu]:       - vm_size                 = "Standard_D4_v3" -> null
[tofu]:       - zones                   = [] -> null
[tofu]:     }
[tofu]: 
[tofu]: Plan: 2 to add, 0 to change, 2 to destroy.
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group: Destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/user]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group: Destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/worker]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["2"]: Creating...
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["1"]: Creating...
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group: Still destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-...usters/nebari11-prod/agentPools/worker, 10s elapsed]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group: Still destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-...Clusters/nebari11-prod/agentPools/user, 10s elapsed]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group: Still destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-...Clusters/nebari11-prod/agentPools/user, 20s elapsed]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group: Still destroying... [id=/subscriptions/901d2965-5cce-4799-aa15-...usters/nebari11-prod/agentPools/worker, 20s elapsed]
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.user_node_group: Destruction complete after 22s
[tofu]: module.kubernetes.azurerm_kubernetes_cluster_node_pool.worker_node_group: Destruction complete after 22s
[tofu]: ╷
[tofu]: │ Error: A resource with the ID "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/user" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster_node_pool" for more information.
[tofu]: │ 
[tofu]: │   with module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["1"],
[tofu]: │   on modules/kubernetes/main.tf line 67, in resource "azurerm_kubernetes_cluster_node_pool" "node_group":
[tofu]: │   67: resource "azurerm_kubernetes_cluster_node_pool" "node_group" {
[tofu]: │ 
[tofu]: ╵
[tofu]: ╷
[tofu]: │ Error: A resource with the ID "/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/worker" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster_node_pool" for more information.
[tofu]: │ 
[tofu]: │   with module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["2"],
[tofu]: │   on modules/kubernetes/main.tf line 67, in resource "azurerm_kubernetes_cluster_node_pool" "node_group":
[tofu]: │   67: resource "azurerm_kubernetes_cluster_node_pool" "node_group" {
[tofu]: │ 
[tofu]: ╵
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/balast/CodingProjects/nebari/src/_nebari/subcommands/deploy.py:92 in deploy                │
│                                                                                                  │
│   89 │   │   │   msg = "Digital Ocean support is currently being deprecated and will be remov    │
│   90 │   │   │   typer.confirm(msg)                                                              │
│   91 │   │                                                                                       │
│ ❱ 92 │   │   deploy_configuration(                                                               │
│   93 │   │   │   config,                                                                         │
│   94 │   │   │   stages,                                                                         │
│   95 │   │   │   disable_prompt=disable_prompt,                                                  │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/deploy.py:55 in deploy_configuration              │
│                                                                                                  │
│   52 │   │   │   │   s: hookspecs.NebariStage = stage(                                           │
│   53 │   │   │   │   │   output_directory=pathlib.Path.cwd(), config=config                      │
│   54 │   │   │   │   )                                                                           │
│ ❱ 55 │   │   │   │   stack.enter_context(s.deploy(stage_outputs, disable_prompt))                │
│   56 │   │   │   │                                                                               │
│   57 │   │   │   │   if not disable_checks:                                                      │
│   58 │   │   │   │   │   s.check(stage_outputs, disable_prompt)                                  │
│                                                                                                  │
│ /home/balast/miniconda3/envs/neb/lib/python3.11/contextlib.py:517 in enter_context               │
│                                                                                                  │
│   514 │   │   except AttributeError:                                                             │
│   515 │   │   │   raise TypeError(f"'{cls.__module__}.{cls.__qualname__}' object does "          │
│   516 │   │   │   │   │   │   │   f"not support the context manager protocol") from None         │
│ ❱ 517 │   │   result = _enter(cm)                                                                │
│   518 │   │   self._push_cm_exit(cm, _exit)                                                      │
│   519 │   │   return result                                                                      │
│   520                                                                                            │
│                                                                                                  │
│ /home/balast/miniconda3/envs/neb/lib/python3.11/contextlib.py:137 in __enter__                   │
│                                                                                                  │
│   134 │   │   # they are only needed for recreation, which is not possible anymore               │
│   135 │   │   del self.args, self.kwds, self.func                                                │
│   136 │   │   try:                                                                               │
│ ❱ 137 │   │   │   return next(self.gen)                                                          │
│   138 │   │   except StopIteration:                                                              │
│   139 │   │   │   raise RuntimeError("generator didn't yield") from None                         │
│   140                                                                                            │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/stages/infrastructure/__init__.py:966 in deploy   │
│                                                                                                  │
│   963 │   def deploy(                                                                            │
│   964 │   │   self, stage_outputs: Dict[str, Dict[str, Any]], disable_prompt: bool = False       │
│   965 │   ):                                                                                     │
│ ❱ 966 │   │   with super().deploy(stage_outputs, disable_prompt):                                │
│   967 │   │   │   with kubernetes_provider_context(                                              │
│   968 │   │   │   │   stage_outputs["stages/" + self.name]["kubernetes_credentials"]["value"]    │
│   969 │   │   │   ):                                                                             │
│                                                                                                  │
│ /home/balast/miniconda3/envs/neb/lib/python3.11/contextlib.py:137 in __enter__                   │
│                                                                                                  │
│   134 │   │   # they are only needed for recreation, which is not possible anymore               │
│   135 │   │   del self.args, self.kwds, self.func                                                │
│   136 │   │   try:                                                                               │
│ ❱ 137 │   │   │   return next(self.gen)                                                          │
│   138 │   │   except StopIteration:                                                              │
│   139 │   │   │   raise RuntimeError("generator didn't yield") from None                         │
│   140                                                                                            │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/stages/base.py:298 in deploy                      │
│                                                                                                  │
│   295 │   │   │   deploy_config["tofu_import"] = True                                            │
│   296 │   │   │   deploy_config["state_imports"] = state_imports                                 │
│   297 │   │                                                                                      │
│ ❱ 298 │   │   self.set_outputs(stage_outputs, opentofu.deploy(**deploy_config))                  │
│   299 │   │   self.post_deploy(stage_outputs, disable_prompt)                                    │
│   300 │   │   yield                                                                              │
│   301                                                                                            │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/provider/opentofu.py:71 in deploy                 │
│                                                                                                  │
│    68 │   │   │   │   )                                                                          │
│    69 │   │                                                                                      │
│    70 │   │   if tofu_apply:                                                                     │
│ ❱  71 │   │   │   apply(directory, var_files=[f.name])                                           │
│    72 │   │                                                                                      │
│    73 │   │   if tofu_destroy:                                                                   │
│    74 │   │   │   destroy(directory, var_files=[f.name])                                         │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/provider/opentofu.py:152 in apply                 │
│                                                                                                  │
│   149 │   │   + ["-var-file=" + _ for _ in var_files]                                            │
│   150 │   )                                                                                      │
│   151 │   with timer(logger, "tofu apply"):                                                      │
│ ❱ 152 │   │   run_tofu_subprocess(command, cwd=directory, prefix="tofu")                         │
│   153                                                                                            │
│   154                                                                                            │
│   155 def output(directory=None):                                                                │
│                                                                                                  │
│ /home/balast/CodingProjects/nebari/src/_nebari/provider/opentofu.py:120 in run_tofu_subprocess   │
│                                                                                                  │
│   117 │   logger.info(f" tofu at {tofu_path}")                                                   │
│   118 │   exit_code, output = run_subprocess_cmd([tofu_path] + processargs, **kwargs)            │
│   119 │   if exit_code != 0:                                                                     │
│ ❱ 120 │   │   raise OpenTofuException("OpenTofu returned an error")                              │
│   121 │   return output                                                                          │
│   122                                                                                            │
│   123                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OpenTofuException: OpenTofu returned an error

Update: It looks like the node group deletion and recreation was caused by something unrelated to this PR, and redeploying resolved it. In fact, I think it's expected behavior; I just didn't see the warning because it only shows up in the 2024.11.1 upgrade notes, and I only upgraded to 2024.9.2 on this branch.
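
For reference, if redeploying hadn't cleared the state conflict, the fix suggested by the error message itself would be to import the existing pools into the addresses the new plan expects. A minimal sketch, assuming the commands are run from the stage's working directory and that any required `-var-file` arguments are appended (the resource addresses and IDs are copied from the error output above):

```sh
# Hypothetical recovery sketch: adopt the existing Azure node pools into the
# OpenTofu state under the addresses the new plan expects, instead of letting
# it try to create pools whose names already exist in Azure.
tofu import \
  'module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["1"]' \
  '/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/user'
tofu import \
  'module.kubernetes.azurerm_kubernetes_cluster_node_pool.node_group["2"]' \
  '/subscriptions/901d2965-5cce-4799-aa15-44991169568b/resourceGroups/nebari11-prod/providers/Microsoft.ContainerService/managedClusters/nebari11-prod/agentPools/worker'
```

In this case a plain redeploy was enough, so the import wasn't needed.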
