Test Scenarios for Backups & Restore

Shuo Wu edited this page Jun 5, 2020 · 6 revisions

Test Setup

  • create vol bak and attach it to a node
  • connect to the node via ssh and run dd if=/dev/urandom of=/dev/longhorn/bak status=progress
  • keep the dd running throughout all the tests below, so that there is always new data when backing up
  • set up recurring backups every minute with a retain count of 3
  • run all of the tests against each currently supported backup store (NFS, S3)

#1341 concurrent backup test

  • Take a manual backup of the volume bak while a recurring backup is running
  • verify that the manual backup got created
  • verify that the manual backup sticks around even when recurring backups are cleaned up

#1326 concurrent backup creation & deletion

This one is a special case where the volume contains only one backup, which the user requests to delete while another backup is in progress. Previously, the in-progress backup was only written to disk after it completed, while the delete request triggered the GC, which then detected that no backups were left on the volume and triggered deletion of the backup volume.

  • create vol dak and attach it to the same node that vol bak is attached to
  • connect to the node via ssh and run dd if=/dev/urandom of=/dev/longhorn/dak status=progress
  • wait for a bunch of data to be written (1GB)
  • take a backup(1)
  • wait for a bunch of data to be written (1GB)
  • take a backup(2)
  • immediately request deletion of backup(1)
  • verify that backup(2) completes successfully
  • verify that backup(1) has been deleted
  • verify that all blocks mentioned in the backup(2).cfg file are present in the blocks directory.
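The last verification step can be scripted. This is a hypothetical sketch: it assumes the backupstore's two-level block layout blocks/&lt;aa&gt;/&lt;bb&gt;/&lt;checksum&gt;.blk and compact JSON in the .cfg file, and it builds a small demo fixture so it is runnable anywhere; point CFG and STORE at the real backupstore instead:

```shell
# --- demo fixture (replace with the real backup .cfg and blocks dir):
STORE=$(mktemp -d)
CFG="$STORE/backup_backup-2.cfg"
printf '{"Blocks":[{"Offset":0,"BlockChecksum":"deadbeef1234"}]}' > "$CFG"
mkdir -p "$STORE/blocks/de/ad"
touch "$STORE/blocks/de/ad/deadbeef1234.blk"
# --- the check: extract every BlockChecksum from the cfg and verify the
# --- corresponding .blk file exists (two-level layout assumed).
grep -o '"BlockChecksum":"[0-9a-f]*"' "$CFG" | cut -d'"' -f4 |
while read -r c; do
  d1=$(printf '%s' "$c" | cut -c1-2)
  d2=$(printf '%s' "$c" | cut -c3-4)
  [ -f "$STORE/blocks/$d1/$d2/$c.blk" ] || echo "missing block: $c"
done
echo "check done"
```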

#1431 backup block deletion test

  • create vol blk and mount it on a node at /mnt/blk
  • take backup(1)
  • dd if=/dev/urandom of=/mnt/blk/data2 bs=2097152 count=10 status=progress
  • take backup(2)
  • dd if=/dev/urandom of=/mnt/blk/data3 bs=2097152 count=10 status=progress
  • take backup(3)
  • diff the backup(2) and backup(3) .cfg files (run them through a JSON beautifier for easier comparison)
  • delete backup(2)
  • verify that the blocks solely used by backup(2) are deleted
  • verify that the shared blocks between backup(2) and backup(3) are retained
  • delete backup(3)
  • wait for the deletion to complete
  • delete backup(1)
  • wait for the deletion to complete
  • verify that no blocks remain in the blocks directory
  • verify volume.cfg BlockCount == 0

#1404 test backup functionality on Google Cloud and other S3-compatible providers

  • create vol s3-test and mount it on a node at /mnt/s3-test via PVC
  • write some data on vol s3-test
  • take backup(1)
  • write new data on vol s3-test
  • take backup(2)
  • restore backup(1)
  • verify data is consistent with backup(1)
  • restore backup(2)
  • verify data is consistent with backup(2)
  • delete backup(1)
  • delete backup(2)
  • delete backup volume s3-test
  • verify volume path is removed

#1355 The node the restore volume is attached to is down

  1. Create a backup.
  2. Create a restore volume from the backup.
  3. Power off the volume attached node during the restoring.
  4. Wait for Longhorn to mark the node as down.
  5. Wait for the restore volume to be reattached and to resume restoring with state Degraded.
  6. Wait for the restore to complete.
  7. Attach the volume and verify the restored data.
  8. Verify the volume works fine.

#1366 && #1328 The node the DR volume is attached to is down/rebooted

Scenario 1

  1. Create a pod with Longhorn volume.
  2. Write data to the volume and get the md5sum.
  3. Create the 1st backup for the volume.
  4. Create a DR volume from the backup.
  5. Wait for the DR volume to start the initial restore, then power off/reboot the DR volume's attached node immediately.
  6. Wait for the DR volume to be detached and then reattached.
  7. Wait for the DR volume restore to complete after the reattachment.
  8. Activate the DR volume and check the data md5sum.

Scenario 2

  1. Create a pod with Longhorn volume.
  2. Write data to the volume and get the md5sum.
  3. Create the 1st backup for the volume.
  4. Create a DR volume from the backup.
  5. Wait for the DR volume to complete the initial restore.
  6. Write more data to the original volume and get the md5sum.
  7. Create the 2nd backup for the volume.
  8. Wait for the DR volume's incremental restore to be triggered, then power off/reboot the DR volume's attached node immediately.
  9. Wait for the DR volume to be detached and then reattached.
  10. Wait for the DR volume restore to complete after the reattachment.
  11. Activate the DR volume and check the data md5sum.