[Test Refactor] Enhance output way of e2e case --- using CAPI streams the logs of all pods #208
Conversation
@huchen2021 - Can I ask how extracting a CaseContext struct helps? Is it to help with passing data around?
Yes. Before this change, we needed to pass several arguments; after it, we only need to pass one argument, the CaseContext. The code is cleaner. The same reasoning applies to CollectInfoContext, WriteDeploymentLogContext, and ByoHostPoolContext.
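As a rough sketch of what such a context struct could look like (the field names below are assumptions for illustration, not the PR's actual definition):

```go
package e2e

import (
	"context"

	"sigs.k8s.io/cluster-api/test/framework"
)

// CaseContext groups the values most e2e helpers need.
// Field names here are illustrative; the PR's struct may differ.
type CaseContext struct {
	Ctx          context.Context
	ClusterProxy framework.ClusterProxy // access to the management cluster
	Namespace    string                 // namespace the case runs in
	CaseName     string                 // used to label created resources
}

// Before: setUp(ctx, proxy, namespace, caseName, ...)
// After:  setUp(caseCtx) takes one argument and is easier to extend.
func setUp(c CaseContext) {
	// ... use c.Ctx, c.ClusterProxy, c.Namespace, c.CaseName
}
```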
@huchen2021 - I'm worried that the test is turning into a bunch of functions that do stuff, and we are passing data around using structs. I'm worried this is only going to become harder to maintain. In my ideal world, we would tie the functions closer to the data they operate on. So have structs, and methods on those structs that can use the data. For example, there could be one struct to create a BYOHost and get its logs. There could be another struct that represents our management cluster. I'm happy to spend some more time thinking and giving a more concrete suggestion if you agree with the approach I am suggesting. What do you think @huchen2021 ?
@jamiemonserrate Actually, I am doing exactly what you suggested. Let me talk more about my PR. There are 4 structs: CaseContext, CollectInfoContext, WriteDeploymentLogContext, and ByoHostPoolContext.
byoHostPoolData = new(ByoHostPoolContext)
@huchen2021 - I see what you are saying, and you are right, it is similar to what I was suggesting. I guess I am personally not a fan of the structs themselves. The structs feel like data holders, and the functions operate on this data. I agree with the functions you have identified, but I think the way the structs are written, it's making it hard to read the test. So more concretely, what I would have loved to see is only a struct like …
@jamiemonserrate Only the struct ByoHostPoolContext is not enough; it only covers the ByoHostPool stuff. If we pass too many arguments to a function, golangci-lint reports an error, and at that point we would still need to package those arguments into a struct. If you look at the test-framework project, it also defines structs as function inputs, for example ApplyClusterTemplateAndWait, ScaleAndWaitMachineDeployment, and DeleteAllClustersAndWait. If you don't pass a struct, you have to pass lots of individual variables instead, which makes the function signature really long, and golangci-lint will give an error suggesting you replace them with a struct. I don't think it makes the code hard to read.
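For context, the pattern being referenced here (the CAPI test framework's helpers such as clusterctl.ApplyClusterTemplateAndWait take a single input struct) looks roughly like the sketch below. The ScaleByoHostPool name and its fields are hypothetical, for illustration only:

```go
package e2e

import "context"

// ScaleByoHostPoolInput is a hypothetical input struct in the style of
// clusterctl.ApplyClusterTemplateAndWaitInput.
type ScaleByoHostPoolInput struct {
	Namespace string
	HostCount int
}

// Named fields document each parameter at the call site and keep the
// signature short enough to satisfy golangci-lint's argument-count check.
func ScaleByoHostPool(ctx context.Context, input ScaleByoHostPoolInput) {
	// ... use input.Namespace and input.HostCount
}

// The call site reads like labeled arguments:
// ScaleByoHostPool(ctx, ScaleByoHostPoolInput{Namespace: "default", HostCount: 3})
```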
Hi @huchen2021, is it possible to share some snippets of the new output logs for the e2e tests? How do they differ from the existing output?
@dharmjit There is no difference from the existing output. The PR doesn't change the output logs, it just changes the implementation. In the current code, I show the pod logs via the CLI command "kubectl logs -n ${podNamespace} ${podName} --kubeconfig /tmp/mgmt.conf -c manager". In this PR, I use CAPI streams to get the pod logs, which is the more official approach. If you want to see the log output, please look at a failed e2e case in the PR. For example, click this one. At the end of the e2e logs, it shows something like this:

######################Start: Content of /tmp/read-byoh-controller-manager-log.sh##################
######################End: Content of /tmp/read-byoh-controller-manager-log.sh##################
#######################Start: execute result of /tmp/read-byoh-controller-manager-log.sh##################
......

Yes, in the current code it generates a series of commands and runs them to get the logs. In this PR, it just uses a different way to get the logs and saves them to a specified directory. When a case ends in failure, a function reads all the log files from this directory and shows them as follows:

#######################Start: Content of -<container.Name>.log##################
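For reference, streaming a pod's log through the Kubernetes API (rather than shelling out to kubectl) looks roughly like the sketch below. This is a minimal illustration using client-go; the PR itself may route through the CAPI test framework's helpers, and the function name, parameters, and file-naming convention here are assumptions, not the PR's actual code:

```go
package e2e

import (
	"context"
	"io"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// writePodLog streams one container's log into outputDir, replacing the
// old "kubectl logs ..." shell command. The <pod>-<container>.log file
// name is an assumed convention for this sketch.
func writePodLog(ctx context.Context, cs kubernetes.Interface, namespace, pod, container, outputDir string) error {
	req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{Container: container})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()

	f, err := os.Create(filepath.Join(outputDir, pod+"-"+container+".log"))
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, stream)
	return err
}
```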
@huchen2021 - Let me try to find some time this week to sketch out what I mean. I will get back to you on Wednesday.
Background:
When we first introduced the GitHub runners, any failing e2e case was hard to debug because we couldn't ssh into those runners, and things would work fine in the local env. So, as a way to display logs when there's a failure, we have a debugging mechanism in place. In the current implementation, we show the pod logs via the CLI command "kubectl logs -n ${podNamespace} ${podName} --kubeconfig /tmp/mgmt.conf -c manager". But this wasn't the ideal way. In this PR, I changed to using CAPI streams to get the pod logs, which is the more official approach.
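As a rough illustration of the on-failure dump described above, a helper could walk the log directory and print each collected file between Start/End markers so the runner output contains everything needed to debug without ssh access. The function name and marker format below are illustrative, not the PR's actual code:

```go
package e2e

import (
	"fmt"
	"os"
	"path/filepath"
)

// dumpLogs prints every collected log file between Start/End markers
// when a case fails.
func dumpLogs(logDir string) error {
	return filepath.WalkDir(logDir, func(path string, d os.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		content, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		fmt.Printf("###Start: Content of %s###\n%s\n###End: Content of %s###\n", path, content, path)
		return nil
	})
}
```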
What I did in this PR: