
Snapshots missing in used_by for custom volumes and storage pools on latest/edge LXD build #14291

Open
mas-who opened this issue Oct 17, 2024 · 27 comments · May be fixed by #14324

mas-who commented Oct 17, 2024

Required information

  • Distribution: snap
  • Distribution version: 2.63
  • The output of "snap list --all lxd core20 core22 core24 snapd":
Name    Version      Rev    Tracking       Publisher   Notes
core20  20240416     2318   latest/stable  canonical✓  base,disabled
core20  20240705     2379   latest/stable  canonical✓  base
core22  20240823     1612   latest/stable  canonical✓  base,disabled
core22  20240904     1621   latest/stable  canonical✓  base
core24  20240528     423    latest/stable  canonical✓  base,disabled
core24  20240710     490    latest/stable  canonical✓  base
lxd     git-dcd70b1  30709  latest/edge    canonical✓  disabled
lxd     git-35332a1  30717  latest/edge    canonical✓  -
snapd   2.65.3       22991  latest/stable  canonical✓  snapd,disabled
snapd   2.63         21759  latest/stable  canonical✓  snapd
  • The output of "lxc info" or if that fails:
config:
  acme.agree_tos: "true"
  core.https_address: '[::]:8443'
  oidc.audience: https://dev-xjrvvfikbsv4jxn7.us.auth0.com/api/v2/
  oidc.client.id: gxj297yfAjmklILK5WqPWDSbtVBAeSQm
  oidc.groups.claim: lxd-idp-groups
  oidc.issuer: https://dev-xjrvvfikbsv4jxn7.us.auth0.com/
  user.show_permissions: "true"
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- init_preseed_storage_volumes
- metrics_instances_count
- server_instance_type_info
- resources_disk_mounted
- server_version_lts
- oidc_groups_claim
- loki_config_instance
- storage_volatile_uuid
- import_instance_devices
- instances_uefi_vars
- instances_migration_stateful
- container_syscall_filtering_allow_deny_syntax
- access_management
- vm_disk_io_limits
- storage_volumes_all
- instances_files_modify_permissions
- image_restriction_nesting
- container_syscall_intercept_finit_module
- device_usb_serial
- network_allocate_external_ips
- explicit_trust_token
- shared_custom_block_volumes
- instance_import_conversion
- instance_create_start
- instance_protection_start
- devlxd_images_vm
- disk_io_bus_virtio_blk
- metrics_api_requests
- projects_limits_disk_pool
- ubuntu_pro_guest_attach
- metadata_configuration_entity_types
- access_management_tls
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
- oidc
auth_user_name: mason
auth_user_method: unix
environment:
  addresses:
  - 10.0.0.139:8443
  - 172.18.0.1:8443
  - '[fc00:f853:ccd:e793::1]:8443'
  - 172.17.0.1:8443
  - 10.173.68.1:8443
  - '[fd42:fd46:adbb:ef2f::1]:8443'
  - 10.28.203.1:8443
  - '[fd42:c1:430f:23df::1]:8443'
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB7zCCAXWgAwIBAgIQeabuL29Rx1Kq4GMmGPassjAKBggqhkjOPQQDAzAoMQww
    CgYDVQQKEwNMWEQxGDAWBgNVBAMMD3Jvb3RAQmxhY2tNdW1iYTAeFw0yNDA4MjYx
    MDIwMDVaFw0zNDA4MjQxMDIwMDVaMCgxDDAKBgNVBAoTA0xYRDEYMBYGA1UEAwwP
    cm9vdEBCbGFja011bWJhMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEXTXZ3NmIzzQ5
    lNl8ib1/W2R1f3CFO1CU0HeOaBFlHE+3mv3xmCX02qjFYNhpm43x0yBeQ547EvuV
    SzVoVL6pScLv8CrAiKt5JCqHxdAJZh0odUNSjDrc+9S7CSJ9bZEno2QwYjAOBgNV
    HQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAt
    BgNVHREEJjAkggpCbGFja011bWJhhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoG
    CCqGSM49BAMDA2gAMGUCMQDiS6oVLt8jmQKfVBJsp2jMnniLiCZVKXNaC5TNbHhL
    5DLFDhBSOdfwzPS2axTJ6+4CMEc5bSlgpLHIiulWry/fL1KdJPg3V6ChSVzXWHo6
    2CQhSP/O4JcRBZWYTQ4+BjRGeQ==
    -----END CERTIFICATE-----
  certificate_fingerprint: 1a2eaac2f9deb845ec4a25039be7ca47a020812c7b5430b81971392ebd201823
  driver: lxc | qemu
  driver_version: 6.0.0 | 8.2.2
  instance_types:
  - container
  - virtual-machine
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.8.0-45-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: BlackMumba
  server_pid: 232368
  server_version: "6.1"
  server_lts: false
  storage: zfs
  storage_version: 2.2.2-0ubuntu9
  storage_supported_drivers:
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.48.0
    remote: false
  - name: powerflex
    version: 2.8 (nvme-cli)
    remote: true
  - name: zfs
    version: 2.2.2-0ubuntu9
    remote: false
  - name: btrfs
    version: 6.6.3
    remote: false
  - name: ceph
    version: 19.2.0~git20240301.4c76c50
    remote: true
  - name: cephfs
    version: 19.2.0~git20240301.4c76c50
    remote: true
  - name: cephobject
    version: 19.2.0~git20240301.4c76c50
    remote: true
  - name: dir
    version: "1"
    remote: false

Issue description

On the latest/edge LXD build, when I create a custom volume snapshot, it no longer shows up in the used_by metadata field when making a request to GET /1.0/storage-pools/<pool_name>/volumes/custom/<volume_name>?recursion=1. However, this is fine on the latest/stable LXD build and the snapshot does show up in the response.

Reproducer

  1. Create a new custom volume: lxc storage volume create default test-vol
  2. Create a snapshot for that volume: lxc storage volume snapshot default test-vol
  3. Check detail for the new custom volume: lxc storage volume show default test-vol. See the results below:
name: test-vol
description: ""
type: custom
pool: default
content_type: filesystem
project: default
location: none
created_at: 2024-10-16T12:47:17.515348113Z
config:
  volatile.uuid: eb84f929-edba-45c0-8a33-c9ad2d4cdc46
used_by: []

It is expected that used_by should contain the snapshot resource URL.
4. Check detail for the storage pool: lxc storage show default. See the results below:

name: default
description: ""
driver: zfs
status: Created
config:
  size: 30GiB
  source: /var/snap/lxd/common/lxd/disks/default.img
  zfs.pool_name: default
used_by:
- /1.0/images/24722ef5183fef6b4e7e1659ff10156025d1b64c14da95a84feec5f7a985c60d?project=test-project
- /1.0/images/5a63bc87974e61c631567c0d171fefffb33ebb7525b8295672ef0c5bf2cbd898
- /1.0/images/a43f9f53990c38ac641acbd1cb4e13b75cbea2a8f2e80cf8218d2f4b86c3b004
- /1.0/instances/asdf?project=test-project
- /1.0/instances/c22
- /1.0/instances/c4
- /1.0/instances/micro1
- /1.0/instances/micro2
- /1.0/instances/micro3
- /1.0/instances/micro4
- /1.0/instances/node-1
- /1.0/instances/node-2
- /1.0/profiles/default
- /1.0/storage-pools/default/volumes/custom/test-vol
locations:
- none

It is expected that used_by should contain the snapshot resource URL associated with the new custom volume created above.
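
The raw API responses can also be checked directly with lxc query (a minimal sketch; the pool and volume names follow the reproducer above, and piping through jq is only for readability):

lxc query "/1.0/storage-pools/default/volumes/custom/test-vol?recursion=1" | jq .used_by
lxc query "/1.0/storage-pools/default?recursion=1" | jq .used_by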

tomponline (Member) commented:

@hamistao please can you look into this issue next.

tomponline added the Bug (Confirmed to be a bug) label Oct 17, 2024
tomponline added this to the lxd-6.2 milestone Oct 17, 2024
tomponline (Member) commented:

@mas-who please can you provide reproducer steps using the lxc tool to assist @hamistao with this case.

edlerd (Contributor) commented Oct 17, 2024

This also applies to instance snapshots. Instance snapshots are missing in the used_by section of the GET 1.0/storage-pools/:pool?project=default&recursion=1 response.

mas-who changed the title from "Snapshots missing in used_by for custom volumes on latest/edge LXD build" to "Snapshots missing in used_by for custom volumes and storage pools on latest/edge LXD build" Oct 17, 2024
mas-who (Author) commented Oct 17, 2024

This also applies to instance snapshots. Instance snapshots are missing in the used_by section of the GET 1.0/storage-pools/:pool?project=default&recursion=1 response.

Yes, I've updated the issue title.

@mas-who please can you provide reproducer steps using lxc tool to assist @hamistao with this case

Sure @tomponline, I will get a reproducer using lxc shortly.

tomponline (Member) commented:

However, this is fine on the latest/stable LXD build and the snapshot does show up in the response.

Are you sure about that? I don't see it on latest/stable either.

tomponline (Member) commented:

Nor does it show on 5.21/stable

tomponline (Member) commented:

Nor does it show on 5.0/stable

tomponline added the Incomplete (Waiting on more information from reporter) label and removed the Bug (Confirmed to be a bug) label Oct 18, 2024
tomponline (Member) commented:

Marking this as incomplete until we get more info on whether this is a regression or a feature improvement request (will prioritize accordingly).

What is the use case for including volume and instance snapshots in a pool's used_by list?

Thanks

mas-who (Author) commented Oct 18, 2024

However, this is fine on the latest/stable LXD build and the snapshot does show up in the response.

Are you sure about that? I don't see it on latest/stable either.

I actually didn't test this myself; David helped with the testing there. @edlerd, just to confirm, the channel you tested on is latest/stable, right? Do you think maybe the snap stopped updating somehow and you were looking at a stale version of latest/stable?

mas-who (Author) commented Oct 18, 2024

Marking this as incomplete until we get more info on whether this is a regression or a feature improvement request (will prioritize accordingly).

What is the use case for including volume and instance snapshots in a pool's used_by list?

Thanks

In the UI, users can inspect a pool's used_by list and link through to those resources, with instance and volume snapshots among them. I believe the feature was in the UI before I joined, so I may not have the full context here.

[Screenshot from 2024-10-18: the UI view of a pool's used_by list]

edlerd (Contributor) commented Oct 18, 2024

I do see the snapshots in the list for 5.21/stable and 5.21/candidate. Steps to reproduce:

  1. Create a new storage pool with driver "dir".
  2. Create a jammy instance on that pool.
  3. Create a snapshot for that instance.
  4. Check the storage pool detail page (a CLI-only sketch of these steps follows below).
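
A CLI-only equivalent might look like this (a sketch; the pool and instance names are illustrative, and lxc snapshot names the snapshot snap0 by default):

lxc storage create demo-pool dir
lxc launch ubuntu:22.04 demo-instance --storage demo-pool
lxc snapshot demo-instance
lxc storage show demo-pool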


tomponline (Member) commented:

@edlerd thanks, what about storage volumes?

edlerd (Contributor) commented Oct 18, 2024

@edlerd thanks, what about storage volumes?

Volume snapshots show up as well for 5.21/stable and 5.21/candidate.

Reproducer:

  1. Create a storage pool with driver "dir".
  2. Create a storage volume on that pool.
  3. Create a snapshot of the volume.
  4. Go to the pool detail page (a CLI-only sketch follows below).
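
The volume variant as CLI-only steps (a sketch; the pool and volume names are illustrative):

lxc storage create demo-pool dir
lxc storage volume create demo-pool demo-vol
lxc storage volume snapshot demo-pool demo-vol
lxc storage show demo-pool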

tomponline (Member) commented:

go to the pool detail page

Please can we have reproducers that don't need the UI, using lxc query perhaps to show the specific endpoint at fault?

edlerd (Contributor) commented Oct 18, 2024

Both instance and volume snapshots also show up in 5.0/stable. Volume snapshots cannot be created from the UI in 5.0/stable, but after creating them in the CLI they show up in both the UI and the API.

tomponline (Member) commented:

I've tried that reproducer, but using lxc storage volume info for step 4, and have never seen the snapshot in 5.0, 5.21 or latest.

edlerd (Contributor) commented Oct 18, 2024

That is the misunderstanding: the snapshots are missing in the response to the

GET 1.0/storage-pools/:pool?project=default&recursion=1

request. They should appear in the metadata.used_by array.

mas-who (Author) commented Oct 18, 2024

GET /1.0/storage-pools/<pool_name>/volumes/custom/<volume_name>?recursion=1

It's also missing from the response to GET /1.0/storage-pools/<pool_name>/volumes/custom/<volume_name>?recursion=1, as a single volume has a used_by field as well.

I've tried that reproducer, but using lxc storage volume info for step 4, and have never seen the snapshot in 5.0, 5.21 or latest.

@tomponline, for step 4 I was using lxc storage show <pool> to check data for a storage pool and lxc storage volume show <pool> <volume> to check a custom volume. I don't think the info commands return the used_by list.

edlerd (Contributor) commented Oct 18, 2024

The snapshots never showed up for lxc storage volume show :pool :volume, and I think that is correct: a volume snapshot has no used-by relation to the volume itself.

The bug is lxc storage show :pool not showing the snapshots for instances or volumes. This used to work in 5.0/stable, 5.21/stable and 5.21/candidate, but fails on latest/edge.
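
Concretely, with the reproducer from the issue description, the pool's used_by would be expected to contain an entry like the following (snap0 being LXD's default name for a first snapshot):

- /1.0/storage-pools/default/volumes/custom/test-vol/snapshots/snap0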

tomponline (Member) commented:

Right, got you, thanks for the clarification.

tomponline added the Bug (Confirmed to be a bug) label and removed the Incomplete (Waiting on more information from reporter) label Oct 18, 2024
tomponline (Member) commented:

@markylaing it's this line that is incorrectly filtering out snapshots: https://github.com/canonical/lxd/blob/main/lxd/storage_pools.go#L673

Please can you take a look

mas-who (Author) commented Oct 18, 2024

@markylaing it's this line that is incorrectly filtering out snapshots: https://github.com/canonical/lxd/blob/main/lxd/storage_pools.go#L673

Please can you take a look

Oh interesting, I am authenticated with TLS if that helps; I'd expect to see all used_by resources in this case.

tomponline (Member) commented:

Oh interesting, I am authenticated with TLS if that helps, I'd expect to see all used by resources in this case.

It's happening with the unix socket too.

mas-who (Author) commented Oct 18, 2024

Oh interesting, I am authenticated with TLS if that helps; I'd expect to see all used_by resources in this case.

It's happening with the unix socket too.

Of course, thanks for clarifying 👍

tomponline (Member) commented:

@markylaing the error is coming from:

err := authorizer.CheckPermission(r.Context(), urls[0], auth.EntitlementCanView)
if err != nil {
	continue
}

ERROR  [2024-10-18T13:43:21Z] tomp4                                         err="Cannot check permissions for entity type \"storage_volume_snapshot\" and entitlement \"can_view\": No entitlements can be granted against entities of type \"storage_volume_snapshot\""

Should we be ignoring all errors from authorizer.CheckPermission, or only skipping specific ones that mean "you don't have permission" and logging warnings about others, such as this error?
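
For illustration, the second option might look roughly like this (a sketch only, not the actual fix; it assumes denial is signalled as a StatusForbidden status error checkable via api.StatusErrorCheck, and the logging call is illustrative):

err := authorizer.CheckPermission(r.Context(), urls[0], auth.EntitlementCanView)
if err != nil {
	// Expected case: the caller simply lacks permission, so hide this entry.
	if api.StatusErrorCheck(err, http.StatusForbidden) {
		continue
	}

	// Anything else (like the storage_volume_snapshot error above) is worth
	// surfacing rather than silently swallowing.
	logger.Warn("Failed checking permission on used-by URL", logger.Ctx{"url": urls[0].String(), "err": err})
	continue
}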

tomponline (Member) commented:

@markylaing tracked the issue to

entity.TypeStorageVolume: {
	// Grants permission to edit the storage volume.
	EntitlementCanEdit,
	// Grants permission to delete the storage volume.
	EntitlementCanDelete,
	// Grants permission to view the storage volume.
	EntitlementCanView,
	// Grants permission to create and delete snapshots of the storage volume.
	EntitlementCanManageSnapshots,
	// Grants permission to create and delete backups of the storage volume.
	EntitlementCanManageBackups,
},

There seems to be a missing section for entity type storage_volume_snapshot, which the resource 1.0/storage-pools/default/volumes/custom/test-vol/snapshots/snap0 maps to.

tomponline (Member) commented:

@markylaing should we add a section for entity.TypeStorageVolumeSnapshot to that map, or should entity.ParseURL() instead not parse a snapshot URL into its own entity type, but rather into that of the parent entity type?
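
To illustrate the first option, the added section might look something like this (a hypothetical sketch mirroring the storage volume entry above; which entitlements a snapshot should actually carry is part of the open question):

entity.TypeStorageVolumeSnapshot: {
	// Grants permission to view the storage volume snapshot.
	EntitlementCanView,
	// Grants permission to edit the storage volume snapshot.
	EntitlementCanEdit,
	// Grants permission to delete the storage volume snapshot.
	EntitlementCanDelete,
},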
