
fix: RAID volume pre cleanup #169

Merged: 1 commit merged into linux-system-roles:main on Jul 19, 2023

Conversation

japokorn (Collaborator)

  • RAID volumes now remove existing data from member disks as needed before creation

@japokorn (Collaborator, Author)

Fixes #163

@pcahyna (Member) commented Sep 23, 2020

Is there a test?

@@ -611,6 +611,11 @@ def _create(self):
        if safe_mode:
            raise BlivetAnsibleError("cannot create new RAID in safe mode")
Review comment (Member):

I can't comment on the change itself, but this line above does not look right. Does it mean that every RAID creation is considered unsafe and users must set safe_mode off for it to be allowed, even if the RAID is to be created on an empty disk set?

Review comment (Collaborator, Author):

Fixed. Thanks for noticing.

Review comment (Member):

We must not fall into the opposite problem, though, of allowing existing data to be destroyed in safe mode. See #168.

@dwlehman (Collaborator) left a comment:

Looks good otherwise.

library/blivet.py (review thread resolved, outdated)
@japokorn force-pushed the master-raid_disks_cleanup branch 2 times, most recently from f88efba to 6c990b1 on December 18, 2020 12:36
@dwlehman (Collaborator) left a comment:

My suggestion would be to make the change where it applies to what you're doing here, and maybe open a separate pull request to address the issue in BlivetPool._create_members.

    raise BlivetAnsibleError("cannot create new RAID in safe mode")
for spec in self._volume["disks"]:
    disk = self._blivet.devicetree.resolve_device(spec)
    if not disk.isleaf or disk.format.type is not None:
Review comment (Collaborator):

The check for safe_mode has to be a bit more careful, like the ones in BlivetVolume._reformat and BlivetBase._manage_one_encryption. I see that BlivetPool._create_members also needs to check for device.original_format.name != get_format(None).name (to catch formatting reported by blkid but not recognized/handled by blivet).
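
A minimal sketch of the kind of check being described here, assuming a hypothetical helper name _disk_carries_data and hypothetical placement; the blivet pieces it uses (get_format, original_format, isleaf) are the ones named in the comment above, and the quoted error text is an assumption:

from blivet.formats import get_format

def _disk_carries_data(disk):
    """Return True if wiping this disk would destroy existing data."""
    empty_fmt_name = get_format(None).name  # blivet's name for "no formatting"
    return (not disk.isleaf                                  # disk has child devices (partitions, md members, ...)
            or disk.format.type is not None                  # formatting that blivet recognizes
            or disk.original_format.name != empty_fmt_name)  # formatting reported by blkid but not handled by blivet

# Used roughly where the hunk above sits, the guard would become conditional:
#
#     if safe_mode and any(_disk_carries_data(self._blivet.devicetree.resolve_device(spec))
#                          for spec in self._volume["disks"]):
#         raise BlivetAnsibleError("cannot remove existing formatting on member disks in safe mode")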

Review comment (Collaborator, Author):

Fixed.

@richm (Contributor) commented May 4, 2021

[citest pending]

@richm (Contributor) commented May 4, 2021

[citest bad]

@richm (Contributor) commented May 4, 2021

[citest pending]

@japokorn (Collaborator, Author)

Rebased. This should fix the issue with the failing test.

@richm (Contributor) commented Jul 6, 2021

[citest bad]

@richm (Contributor) commented Jul 13, 2021

[citest pending]

@richm (Contributor) commented Oct 4, 2021

Do we still need this?

@richm (Contributor) commented Oct 11, 2021

[citest pending]

@richm (Contributor) commented Oct 11, 2021

[citest pending]

@richm (Contributor) commented Oct 20, 2021

[citest pending]

@richm (Contributor) commented Dec 1, 2021

[citest bad]

@richm (Contributor) commented Dec 13, 2021

[citest pending]

@japokorn (Collaborator, Author) commented Jan 5, 2022

[citest]

@japokorn (Collaborator, Author) commented Feb 1, 2022

[citest bad]

@codecov (bot) commented Jul 14, 2023

Codecov Report

Patch coverage has no change and project coverage change: -0.04 ⚠️

Comparison is base (d95e590) 13.70% compared to head (5540499) 13.67%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #169      +/-   ##
==========================================
- Coverage   13.70%   13.67%   -0.04%     
==========================================
  Files           8        8              
  Lines        1729     1733       +4     
  Branches       71       79       +8     
==========================================
  Hits          237      237              
- Misses       1492     1496       +4     
Flag      Coverage Δ
sanity    16.54% <ø> (ø)

Flags with carried forward coverage won't be shown.

Impacted Files      Coverage Δ
library/blivet.py   0.00% <0.00%> (ø)


@japokorn changed the title from "raid volume pre cleanup fix" to "fix: RAID volume pre cleanup" on Jul 14, 2023
@japokorn marked this pull request as draft on July 14, 2023 11:32
@@ -0,0 +1,105 @@
---
- hosts: all
Review comment (Contributor):

Suggested change:
-- hosts: all
+- name: Test RAID cleanup
+  hosts: all

    volume2_size: '4g'

  tasks:
    - include_role:
Review comment (Contributor):

Suggested change:
-    - include_role:
+    - name: Call the storage role
+      include_role:

          - packages_installed
          - service_facts

    - include_tasks: get_unused_disk.yml
Review comment (Contributor):

Suggested change:
-    - include_tasks: get_unused_disk.yml
+    - name: Get unused disks
+      include_tasks: get_unused_disk.yml

      set_fact:
        storage_safe_mode: true

    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
Review comment (Contributor):

Suggested change:
-    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
+    - name: >-
+        Try to overwrite existing device with raid volume and safe mode on
+        (expect failure)

                mount_point: "{{ mount_location1 }}"
                state: present

        - name: unreachable task
Review comment (Contributor):

Suggested change:
-    - name: unreachable task
+    - name: Unreachable task


    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
      block:
        - name: Create a RAID0 device mounted on "{{ mount_location1 }}"
Review comment (Contributor):

To check that the role raises an error and that the error message is the one you expect, please consider using verify-role-failed.yml as in https://github.com/linux-system-roles/storage/blob/main/tests/tests_misc.yml#L66
One issue we have had in the past when testing for storage role failures is that the role would fail in the wrong way (e.g. because the inputs were not correct, or there was some other blivet error), but the test would report that the role failed correctly because it was not checking the error message. Using verify-role-failed.yml makes it easy to verify both that the role failed and that the error message is the correct one for the failure condition.
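
A hedged sketch of that pattern, modeled on the linked tests_misc.yml; the variable names (__storage_failed_regex, __storage_failed_msg, __storage_failed_params) and the exact error text are assumptions taken from that test at the time and may not match the current helpers:

    - name: Try to overwrite existing device with RAID volume and safe mode on (expect failure)
      include_tasks: verify-role-failed.yml
      vars:
        __storage_failed_regex: cannot remove existing formatting.*in safe mode
        __storage_failed_msg: >-
          Unexpected behavior when trying to overwrite existing data in safe mode
        __storage_failed_params:
          storage_safe_mode: true
          storage_volumes:
            - name: test1
              type: raid
              raid_level: raid0
              disks: "{{ unused_disks }}"
              mount_point: "{{ mount_location1 }}"
              state: present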

            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: absent

Review comment (Contributor):

Suggested change (empty suggestion: remove the blank line above)

@japokorn force-pushed the master-raid_disks_cleanup branch 2 times, most recently from e944a6b to 19af7d8 on July 19, 2023 11:05
Cause: Existing data were not removed from member disks before RAID
volume creation.

Fix: RAID volumes now remove existing data from member disks as needed before creation.

Signed-off-by: Jan Pokorny <[email protected]>
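
A speculative sketch of the pre-cleanup this commit message describes, based on the hunk quoted earlier in the review; the exact error text and placement are assumptions, and recursive_remove is blivet's device-tree call for tearing a device down together with everything on it:

# inside the RAID volume's _create(), before the new device is requested
for spec in self._volume["disks"]:
    disk = self._blivet.devicetree.resolve_device(spec)
    if not disk.isleaf or disk.format.type is not None:
        if safe_mode:
            raise BlivetAnsibleError(
                "cannot remove existing formatting or partitions on disk "
                "'%s' in safe mode" % disk.name)
        # wipe the member disk so it can be used as a RAID member
        self._blivet.devicetree.recursive_remove(disk)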
@japokorn marked this pull request as ready for review on July 19, 2023 14:07
@richm (Contributor) commented Jul 19, 2023

Testing now.

@richm merged commit 01d5e93 into linux-system-roles:main on Jul 19, 2023 (15 of 17 checks passed)