config: move harbor backup schedule and make it configurable #2310
base: main
@@ -1693,6 +1693,13 @@ properties:
             `RetentionDays` defines how old a backup should be before deleting it.
           default: 7
           type: number
+        schedule:
+          title: Schedule for backup job
+          description: |-
+            `schedule` defines when the backup job for harbor will run.
+            This should be set to run shortly after velero backups in wc, in order to ensure that images needed for velero backups are backed up in harbor.
+          default: "30 0 * * *"
+          type: string
         type: object
     core:
       title: Core Config
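As a usage sketch (not part of the diff itself): overriding the new default in an environment's sc config might look like the snippet below. The `harbor.backup.schedule` key path and the `retentionDays` spelling are assumptions based on the diff context, not something this hunk confirms.

```yaml
# Hypothetical override in the sc apps config.
# The harbor.backup.* key path is assumed from context, not confirmed by this diff.
harbor:
  backup:
    retentionDays: 7        # keep backups for a week (existing option)
    schedule: "0 1 * * *"   # run at 01:00 instead of the 00:30 default
```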
Review thread on the `default: "30 0 * * *"` line:

It could be nice to have a pattern for this in the schema.

I did manage to find this pattern that seems to work in all general cases, but there might be some edge cases where it fails. But it is very complicated. Is this something we want, or is it just confusing to add this?

The Kubernetes spec just has this:

If we really want something, this seems less complicated:

Hmm, it does indeed look a bit confusing; maybe just adding a link on formatting in the description is good enough.

That link is also what I looked at. The particular one that you showed does at least seem to miss the possibility of adding
I think we should either add the complicated one or just link to the wikipedia page like anders suggested.
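For illustration only: the pattern the reviewer found is not quoted in this thread, but a deliberately loose schema pattern (my assumption, not the one discussed) could look like the sketch below. It only checks that the value has five whitespace-separated fields and does not validate ranges, steps, or macros such as `@daily`.

```yaml
schedule:
  title: Schedule for backup job
  default: "30 0 * * *"
  type: string
  # Loose illustrative pattern: exactly five whitespace-separated fields.
  # It accepts invalid values like "99 99 * * *" and is NOT the stricter
  # pattern referenced in the review comments above.
  pattern: '^\S+\s+\S+\s+\S+\s+\S+\s+\S+$'
```

Linking to a cron format reference in the description, as suggested above, may well be the simpler option.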
Does this not also depend on how long the Velero backup takes to complete?
It would be nice if they were triggered in sequence rather than on two different schedules. Issue worthy?

Yes, it definitely depends on that. The default mostly reflects that most velero backups should be done within 30 minutes; it seemed like a sane default.
It would be very nice if they were triggered in sequence, but I'm not sure it's worth building a controller to fix that, especially since this is spread out across two different Kubernetes clusters: harbor backups in sc and velero backups in wc.
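To make the staggering concrete (illustration only; the actual Velero schedule in wc is configured elsewhere and is not shown in this PR), the intended timing could look like this, assuming a Velero `Schedule` resource that fires at midnight:

```yaml
# Illustrative only: a wc Velero Schedule firing at 00:00, with the sc harbor
# backup job defaulting to 00:30 so that images referenced by the velero backup
# are also covered. The name and namespace here are assumptions.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 0 * * *"   # wc velero backup at midnight
  template:
    ttl: 168h             # keep backups for 7 days
# sc harbor backup (this PR's default): schedule: "30 0 * * *"
```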
Since we are so hesitant to create new issues to avoid flooding the backlog (which I totally understand), could we find some way to still keep "nice to haves" around? Even if this isn't something we want to add to the current backlog, I think it is something we want to solve in the future, i.e. that when you create a backup of your cluster, all images that are currently in use should also be backed up.

I can bring that up to see what we can do 👍