Test Scenarios for Backups & Restore
- create vol `bak` and attach to node
- connect to node via ssh and issue `dd if=/dev/urandom of=/dev/longhorn/bak status=progress`
- keep the dd running while doing all the tests below, that way you constantly have new data when backing up (see the sketch after this list)
- setup recurring backups every minute with a retain count of 3
- do all the tests for each currently supported backup store (nfs, s3)
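A minimal sketch of the setup steps, assuming the attached volume shows up at `/dev/longhorn/bak`. The `recurringJobs` patch reflects the assumed v1.0-era volume spec; recurring backups are normally configured from the Longhorn UI, so treat the field names as an assumption.

```bash
# Keep fresh data flowing into the volume for the whole test run.
# dd exits once the device is full, so loop it; each pass rewrites
# the volume from the beginning with new random data.
while true; do
  dd if=/dev/urandom of=/dev/longhorn/bak bs=1M status=progress
done

# Recurring backup every minute, retaining 3 backups. Field names follow
# the (assumed) v1.0 spec.recurringJobs schema; the UI is the usual path.
kubectl -n longhorn-system patch volumes.longhorn.io bak --type merge \
  -p '{"spec":{"recurringJobs":[{"name":"min","task":"backup","cron":"* * * * *","retain":3}]}}'
```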
#1341 concurrent backup test
- Take a manual backup of the volume `bak` while a recurring backup is running
- verify that the backup got created
- verify that the backup sticks around even when recurring backups are cleaned up (see the listing sketch below)
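For the NFS backup store, one hedged way to check that the manual backup survives the recurring cleanup is to list the backup metadata files directly; the mount point and the `backupstore/volumes/.../backups` layout are assumptions about your environment.

```bash
NFS=/mnt/nfs   # hypothetical mount point of the NFS backup store
# One backup_*.cfg file per backup; the manual backup should still be
# listed after the recurring job has pruned down to its retain count.
find "$NFS/backupstore/volumes" -path '*/bak/backups/*.cfg' -print
```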
#1326 concurrent backup creation & deletion
This one is a special case, where the volume only contains 1 backup, which the user requests to delete while another backup is in progress. Previously the in-progress backup would only be written to disk after it completed, while the delete request would trigger the GC, which would then detect that there are no backups left for the volume and trigger the deletion of the volume from the backup store.
- create vol `dak` and attach it to the same node vol `bak` is attached to
- connect to node via ssh and issue `dd if=/dev/urandom of=/dev/longhorn/dak status=progress`
- wait for a bunch of data to be written (1GB)
- take backup(1)
- wait for a bunch of data to be written (1GB)
- take backup(2)
- immediately request deletion of backup(1)
- verify that backup(2) completes successfully
- verify that backup(1) has been deleted
- verify that all blocks mentioned in the backup(2).cfg file are present in the blocks directory (see the sketch after this list)
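A sketch of that last verification, assuming the NFS store from above, `jq` on the box, and that the backup `.cfg` is JSON whose `Blocks` entries carry a `BlockChecksum` mapping to `blocks/<c1c2>/<c3c4>/<checksum>.blk`; the layout is an assumption about the backup store format, and the concrete paths are placeholders.

```bash
# Placeholders: point VOLDIR at dak's directory in the backup store and
# CFG at the backup(2) metadata file.
VOLDIR="$NFS/backupstore/volumes/xx/yy/dak"
CFG="$VOLDIR/backups/backup_backup-2.cfg"
# For every block referenced by backup(2), check the .blk file exists.
jq -r '.Blocks[].BlockChecksum' "$CFG" | while read -r sum; do
  blk="$VOLDIR/blocks/${sum:0:2}/${sum:2:2}/$sum.blk"
  [ -f "$blk" ] || echo "missing block: $sum"
done
```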
#1431 backup block deletion test
- create vol `blk` and mount it to a node on `/mnt/blk`
- take backup(1)
- `dd if=/dev/urandom of=/mnt/blk/data2 bs=2097152 count=10 status=progress`
- take backup(2)
- `dd if=/dev/urandom of=/mnt/blk/data3 bs=2097152 count=10 status=progress`
- take backup(3)
- diff backup(2) against backup(3) (run both through a json beautifier for easier comparison; see the sketch after this list)
- delete backup(2)
- verify that the blocks solely used by backup(2) are deleted
- verify that the shared blocks between backup(2) and backup(3) are retained
- delete backup(3)
- wait
- delete backup(1)
- wait
- verify no more blocks remain
- verify volume.cfg BlockCount == 0
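A sketch of the diff and the final zero-block checks, under the same layout assumptions as above (JSON metadata files, `.blk` block files, a `BlockCount` field in `volume.cfg`); paths are placeholders.

```bash
VOLDIR="$NFS/backupstore/volumes/xx/yy/blk"   # placeholder path
# Pretty-print both metadata files so the block lists diff cleanly.
diff <(jq -S . "$VOLDIR/backups/backup_backup-2.cfg") \
     <(jq -S . "$VOLDIR/backups/backup_backup-3.cfg")

# After deleting all three backups:
find "$VOLDIR/blocks" -name '*.blk' | wc -l   # expect 0
jq '.BlockCount' "$VOLDIR/volume.cfg"         # expect 0
```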
#1404 test backup functionality on google cloud and other s3 interop providers
- create vol `s3-test` and mount it to a node on `/mnt/s3-test` via pvc
- write some data on vol `s3-test`
- take backup(1)
- write new data on vol `s3-test`
- take backup(2)
- restore backup(1)
- verify data is consistent with backup(1) (checksum sketch after this list)
- restore backup(2)
- verify data is consistent with backup(2)
- delete backup(1)
- delete backup(2)
- delete backup volume `s3-test`
- verify the volume path is removed
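To make "data is consistent with backup(N)" concrete, checksum the test files before each backup and re-check after the restore; a sketch, assuming the restored volume is mounted back at the same path.

```bash
# Before backup(1): write data and record checksums.
dd if=/dev/urandom of=/mnt/s3-test/data1 bs=1M count=100 status=progress
md5sum /mnt/s3-test/data1 | tee /tmp/sums.backup1
# ...take backup(1), write new data, take backup(2)...
# After restoring backup(1) and mounting it at /mnt/s3-test again:
md5sum -c /tmp/sums.backup1
```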
#1355 The node the restore volume attached to is down
- Create a backup.
- Create a restore volume from the backup.
- Power off the node the volume is attached to during the restore.
- Wait for the Longhorn node to go down.
- Wait for the restore volume to be reattached and to resume restoring with state `Degraded` (see the sketch after this list).
- Wait for the restore to complete.
- Attach the volume and verify the restored data.
- Verify the volume works fine.
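One hedged way to watch the restore volume from the CLI, assuming the Longhorn `Volume` CRD exposes `status.state` and `status.robustness` (the `Degraded` state above) and a hypothetical volume name `restore-vol`; the UI shows the same information.

```bash
# Print the volume's state/robustness on every update until Ctrl-C.
kubectl -n longhorn-system get volumes.longhorn.io restore-vol --watch \
  -o jsonpath='{.status.state} {.status.robustness}{"\n"}'
```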
- Create a pod with a Longhorn volume.
- Write data to the volume and get the md5sum (see the sketch after this list).
- Create the 1st backup for the volume.
- Create a DR volume from the backup.
- Wait for the DR volume to start the initial restore, then immediately power off/reboot the node the DR volume is attached to.
- Wait for the DR volume to be detached and then reattached.
- Wait for the DR volume restore to complete after the reattachment.
- Activate the DR volume and check the data md5sum.
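A sketch of the write-and-checksum steps, assuming a hypothetical pod `app` whose image ships `dd` and `md5sum`, with the Longhorn volume mounted at `/data`.

```bash
# Write test data through the pod and record its checksum for later.
kubectl exec app -- dd if=/dev/urandom of=/data/test bs=1M count=50
kubectl exec app -- md5sum /data/test   # note the hash down
```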
- Create a pod with a Longhorn volume.
- Write data to the volume and get the md5sum.
- Create the 1st backup for the volume.
- Create a DR volume from the backup.
- Wait for the DR volume to complete the initial restore.
- Write more data to the original volume and get the md5sum.
- Create the 2nd backup for the volume.
- Wait for the DR volume's incremental restore to get triggered, then immediately power off/reboot the node the DR volume is attached to.
- Wait for the DR volume to be detached and then reattached.
- Wait for the DR volume restore to complete after the reattachment.
- Activate the DR volume and check the data md5sum (see the sketch below).
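After activating the DR volume, the checksum comparison can be done the same way as the write above; `verify-pod` is a hypothetical pod with the activated volume mounted at `/data`.

```bash
# The hash must match the one recorded from the original volume.
kubectl exec verify-pod -- md5sum /data/test
```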