fix: RAID volume pre cleanup #169
Conversation
japokorn commented on Sep 22, 2020
- RAID volumes now remove existing data from member disks as needed before creation

Fixes #163
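In outline, the fix walks the requested member disks before the RAID device is created and clears anything already on them (unless safe mode forbids it). A rough sketch of the idea, not the exact patch: the helper name is hypothetical, BlivetAnsibleError is the exception class defined in library/blivet.py, and recursive_remove is one plausible blivet call for scheduling the removal.

```python
def cleanup_raid_member_disks(blivet_obj, disk_specs, safe_mode):
    """Hypothetical helper: clear prospective RAID member disks before creation."""
    for spec in disk_specs:
        disk = blivet_obj.devicetree.resolve_device(spec)
        # The disk is "in use" if it has children (partitions, other devices)
        # or carries formatting that blivet recognizes.
        if not disk.isleaf or disk.format.type is not None:
            if safe_mode:
                raise BlivetAnsibleError(
                    "cannot remove existing formatting on disk '%s' in safe mode"
                    % disk.name)
            # Schedule removal of partitions/formatting so the disk can
            # become a clean RAID member.
            blivet_obj.devicetree.recursive_remove(disk)
```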
Is there a test?
library/blivet.py (outdated diff)
@@ -611,6 +611,11 @@ def _create(self):
        if safe_mode:
            raise BlivetAnsibleError("cannot create new RAID in safe mode")
I can't comment on the change itself, but this line above does not look right. Does it mean that every RAID creation is considered unsafe and users must set safe_mode off for it to be allowed, even if the RAID is to be created on an empty disk set?
Fixed. Thanks for noticing.
We must not fall into the opposite problem of allowing existing data to be destroyed in safe mode, though. See #168.
Force-pushed from 571f773 to 84e8553
Looks good otherwise.
Force-pushed from f88efba to 6c990b1
My suggestion would be to make the change where it applies to what you're doing here, and maybe open a separate pull request to address the issue in BlivetPool._create_members.
            raise BlivetAnsibleError("cannot create new RAID in safe mode")
        for spec in self._volume["disks"]:
            disk = self._blivet.devicetree.resolve_device(spec)
            if not disk.isleaf or disk.format.type is not None:
The check for safe_mode has to be a bit more careful, like the ones in BlivetVolume._reformat and BlivetBase._manage_one_encryption. I see that BlivetPool._create_members also needs to check for device.original_format.name != get_format(None).name (to catch formatting reported by blkid but not recognized/handled by blivet).
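A sketch of what that stricter condition might look like: the helper name is illustrative, get_format comes from blivet.formats, and original_format is the format blivet originally detected on the device.

```python
from blivet.formats import get_format


def member_disk_holds_data(disk):
    """Illustrative predicate: should safe mode refuse to touch this disk?"""
    return (
        not disk.isleaf                      # has partitions or other children
        or disk.format.type is not None      # formatting blivet recognizes
        # formatting blkid reported but blivet does not recognize/handle
        or disk.original_format.name != get_format(None).name
    )
```

With a predicate like this, safe mode would only refuse RAID creation when a member disk actually holds something, rather than unconditionally.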
Fixed.
Force-pushed from 6c990b1 to 9a701b7
[citest pending]
[citest bad]
[citest pending]
Force-pushed from 9a701b7 to 5c2ee87
Rebased. This should fix the issue with the failing test.
[citest bad]
[citest pending]
Do we still need this?
[citest pending]
[citest pending]
[citest pending]
[citest bad]
Force-pushed from 5c2ee87 to 39e5e8d
[citest pending]
[citest]
Force-pushed from 39e5e8d to fa9e7f1
[citest bad]
Force-pushed from fa9e7f1 to 5ca1593
Codecov Report
Patch coverage has no change and project coverage change: -0.04%

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #169      +/-   ##
==========================================
- Coverage   13.70%   13.67%   -0.04%
==========================================
  Files           8        8
  Lines        1729     1733       +4
  Branches       71       79       +8
==========================================
  Hits          237      237
- Misses       1492     1496       +4

Flags with carried forward coverage won't be shown.
View full report in Codecov by Sentry.
Force-pushed from 5ca1593 to f89f484
tests/tests_raid_volume_cleanup.yml (outdated diff)
@@ -0,0 +1,105 @@
---
- hosts: all
Suggested change:
- - hosts: all
+ - name: Test RAID cleanup
+   hosts: all
tests/tests_raid_volume_cleanup.yml (outdated diff)
    volume2_size: '4g'

  tasks:
    - include_role:
Suggested change:
- - include_role:
+ - name: Call the storage role
+   include_role:
tests/tests_raid_volume_cleanup.yml (outdated diff)
        - packages_installed
        - service_facts

    - include_tasks: get_unused_disk.yml
Suggested change:
- - include_tasks: get_unused_disk.yml
+ - name: Get unused disks
+   include_tasks: get_unused_disk.yml
tests/tests_raid_volume_cleanup.yml (outdated diff)
      set_fact:
        storage_safe_mode: true

    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
Suggested change:
- - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
+ - name: >-
+     Try to overwrite existing device with raid volume and safe mode on
+     (expect failure)
tests/tests_raid_volume_cleanup.yml (outdated diff)
            mount_point: "{{ mount_location1 }}"
            state: present

    - name: unreachable task
Suggested change:
- - name: unreachable task
+ - name: Unreachable task
tests/tests_raid_volume_cleanup.yml (outdated diff)

    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
      block:
        - name: Create a RAID0 device mounted on "{{ mount_location1 }}"
For checking that the role raises an error, and that the error message is the one you expect, please consider using verify-role-failed.yml as in https://github.com/linux-system-roles/storage/blob/main/tests/tests_misc.yml#L66
One of the issues we have had in the past when testing for storage role failures was that the role would fail in the wrong way (e.g. because the inputs were incorrect, or there was some other blivet error), but the test would report that the role failed correctly because it was not checking the error message. Using verify-role-failed.yml makes it easy to verify both that the role failed and that the error message is the correct one for the failure condition.
tests/tests_raid_volume_cleanup.yml (outdated diff)
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: absent
Force-pushed from e944a6b to 19af7d8
Cause: Existing data were not removed from member disks before RAID volume creation.
Fix: RAID volumes now remove existing data from member disks as needed before creation.
Signed-off-by: Jan Pokorny <[email protected]>
Force-pushed from 19af7d8 to 5540499
testing now