https://issues.redhat.com/browse/ACM-14988 #7210
Conversation
/lgtm
E1017 12:55:24.532493 1 dashboard_controller.go:147] dashboard: sample-dashboard could not be created after retrying 40 times
----
. To fix the dashboard failure, redeploy Grafana by scaling the number of replicas to `0`. The `multicluster-observability-operator` pod automatically scales the number of replicas after you change it to `0`. Run the following command:
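For reference, a minimal sketch of what that command might look like, assuming the Grafana deployment is named `grafana` and runs in the `open-cluster-management-observability` namespace (both names are assumptions; verify them in your environment):

```shell
# Scale the Grafana deployment down to 0 replicas.
# Deployment name and namespace are assumptions; adjust them to match your cluster.
oc scale deployment grafana \
  -n open-cluster-management-observability \
  --replicas=0
```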
The latest edit makes this a bit more unclear. Just explain in a little more detail what happens when you execute that command:
- Kubernetes scales down the number of replicas to 0, as requested in the command.
- The `multicluster-observability-operator` pod sees that the number of replicas is 0, which doesn't match the configured number of replicas, and therefore it scales the number of replicas back up to whatever number is configured in the multicluster-observability config (2 by default).
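A minimal sketch of how you could watch that reconciliation happen, using the same assumed `grafana` deployment name and `open-cluster-management-observability` namespace as above:

```shell
# Watch the deployment after scaling it to 0; the multicluster-observability-operator
# should scale it back up to the configured replica count (2 by default).
# Deployment name and namespace are assumptions for illustration.
oc get deployment grafana \
  -n open-cluster-management-observability \
  -w
```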
Suggested change:
Original: . To fix the dashboard failure, redeploy Grafana by scaling the number of replicas to `0`. The `multicluster-observability-operator` pod automatically scales the number of replicas after you change it to `0`. Run the following command:
Suggested: . To fix the dashboard failure, redeploy Grafana by scaling the number of replicas to `0`. The `multicluster-observability-operator` pod will automatically scale the deployment back up to the correct number of replicas. Run the following command:
@jacobbaungard thanks! I just made another update based on your comment
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: dockerymick, jacobbaungard, jc-berger

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Thanks to you both for approving! @jacobbaungard @jc-berger
No description provided.