
incus_storage_bucket produces inconsistent result #47

Closed
mattwillsher opened this issue Mar 16, 2024 · 5 comments · Fixed by #51
Labels
Bug Confirmed to be a bug

Comments

@mattwillsher

Given:

resource "incus_storage_bucket" "this" {
  name = "bucket"

  pool = "default"

  config = {
    "size" = "100MiB"
  }
}
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to incus_storage_bucket.this, provider "provider[\"registry.terraform.io/lxc/incus\"]"
│ produced an unexpected new value: .config: new element "block.filesystem" has appeared.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to incus_storage_bucket.this, provider "provider[\"registry.terraform.io/lxc/incus\"]"
│ produced an unexpected new value: .config: new element "block.mount_options" has appeared.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Workaround:

resource "incus_storage_bucket" "this" {
  name = "bucket"

  pool = "default"

  config = {
    "size"                = "100MiB"
    "block.filesystem"    = "ext4"
    "block.mount_options" = "discard"
  }
}
@maveonair
Member

I've attempted to reproduce the problem using the latest version 0.1.1, following the steps you've described, but everything seems to be functioning as expected on my end. Here's the output I received:

$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # incus_storage_bucket.this will be created
  + resource "incus_storage_bucket" "this" {
      + config   = {
          + "size" = "100MiB"
        }
      + location = (known after apply)
      + name     = "bucket"
      + pool     = "default"
      + target   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # incus_storage_bucket.this will be created
  + resource "incus_storage_bucket" "this" {
      + config   = {
          + "size" = "100MiB"
        }
      + location = (known after apply)
      + name     = "bucket"
      + pool     = "default"
      + target   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

incus_storage_bucket.this: Creating...
incus_storage_bucket.this: Creation complete after 1s [name=bucket]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

This leads me to believe that the issue might be related to a specific configuration or environment setup on your side. To better understand and assist you, could you please provide some additional details about your setup? Specifically, it would be helpful to know:

  • The exact version of Incus you are using.
  • The configuration settings for Incus, if they differ from the defaults.
  • Any other details about your environment that might be relevant (e.g., operating system, Terraform version).

@mattwillsher
Author

mattwillsher commented Mar 16, 2024

Curious.

Server: Incus installed from the Zabbly APT repo. The storage pool is LVM with the thin pool disabled.

config:
  lvm.use_thinpool: "false"
  lvm.vg_name: vg_incus
  source: vg_incus
  volatile.initial_source: vg_incus
description: ""
name: default
driver: lvm
> incus version
Client version: 0.6
Server version: 0.6

> cat /etc/debian_version
12.5

Client: running under WSL2 on Windows 11, with Incus installed from Homebrew.

❯ incus version
Client version: 0.6
Server version: 0.6

❯ terraform version
Terraform v1.7.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/local v2.5.1
+ provider registry.terraform.io/lxc/incus v0.1.1

❯ cat /etc/redhat-release
AlmaLinux release 9.0 (Emerald Puma)

@maveonair
Member

maveonair commented Mar 18, 2024

Thanks to your configuration details, I was able to replicate the issue. It occurs when a storage bucket is created on a storage pool that uses the lvm driver.

@stgraber @adamcstephens, to address this behavior, we might need to consider verifying if lvm is selected as the storage driver and then prompt the user to include block.filesystem and block.mount_options in their configuration block. Do you think this check should be incorporated directly into the provider's logic, or would it be more appropriate to outline this requirement in the provider's documentation?
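A minimal sketch of the driver check proposed above, assuming the provider knows the pool's driver when validating the plan. The helper and variable names here are hypothetical illustrations, not the provider's actual code:

```go
package main

import "fmt"

// lvmInjectedBucketKeys lists config keys the lvm driver fills in server-side
// when a storage bucket is created (assumed list, for illustration).
var lvmInjectedBucketKeys = []string{"block.filesystem", "block.mount_options"}

// validateBucketConfig asks the user to set the lvm-injected keys explicitly,
// so the applied config cannot drift from the planned one.
func validateBucketConfig(poolDriver string, config map[string]string) error {
	if poolDriver != "lvm" {
		return nil
	}
	for _, key := range lvmInjectedBucketKeys {
		if _, ok := config[key]; !ok {
			return fmt.Errorf("pool driver %q requires config key %q to be set explicitly", poolDriver, key)
		}
	}
	return nil
}

func main() {
	err := validateBucketConfig("lvm", map[string]string{"size": "100MiB"})
	fmt.Println(err)
	// prints: pool driver "lvm" requires config key "block.filesystem" to be set explicitly
}
```

The trade-off is that this pushes a server-side default into every user's configuration, which is why documenting it may be the lighter-weight alternative.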

@mattwillsher, as a temporary solution, please adjust your storage bucket configuration as follows:

resource "incus_storage_bucket" "this" {
  name = "bucket"
  pool = "default"
  config = {
    "block.filesystem"    = "ext4"
    "block.mount_options" = "discard"
    "size"                = "100MiB"
  }
}

This configuration should circumvent the issue for now. I'm eager to hear your thoughts and further suggestions from @stgraber and @adamcstephens on the proposed fix.

Looking forward to your input.

@maveonair added the Bug label on Mar 18, 2024
@adamcstephens
Contributor

Are we not able to merge this remote state into the stored config attribute? My preference would be that this just gets computed and stored instead of requiring users to have it in place. The error in the first post indicates to me this is something that needs to be fixed anyway since we shouldn't be generating an inconsistent state.
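A rough sketch of the merge described here, under the assumption that the config attribute (or a companion computed attribute) is marked as computed so Terraform accepts server-added values in the applied result. The helper name is hypothetical, not the provider's actual code:

```go
package main

import "fmt"

// mergeRemoteConfig overlays the user-declared config on top of whatever the
// server reports back, so server-injected defaults (e.g. "block.filesystem"
// added by the lvm driver) are computed and stored in state rather than
// surfacing as unexpected new elements after apply.
func mergeRemoteConfig(userConfig, serverConfig map[string]string) map[string]string {
	merged := make(map[string]string, len(serverConfig))
	for k, v := range userConfig {
		merged[k] = v
	}
	for k, v := range serverConfig {
		if _, ok := merged[k]; !ok {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	user := map[string]string{"size": "100MiB"}
	server := map[string]string{
		"size":                "100MiB",
		"block.filesystem":    "ext4",    // injected by the lvm driver
		"block.mount_options": "discard", // injected by the lvm driver
	}
	fmt.Println(mergeRemoteConfig(user, server))
}
```

With this approach the user's original `config = { "size" = "100MiB" }` stays valid, and the server-side keys simply appear as computed values in state.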

@maveonair
Member

maveonair commented Mar 18, 2024

> Are we not able to merge this remote state into the stored config attribute? My preference would be that this just gets computed and stored instead of requiring users to have it in place. The error in the first post indicates to me this is something that needs to be fixed anyway since we shouldn't be generating an inconsistent state.

Oh, definitely, that's a good point. I will try to make the necessary adjustments to implement this effectively.
