Added test for copy-ns-labels policy #967

Merged: 17 commits into kyverno:main on May 28, 2024

Conversation

siddhikhapare
Contributor

@siddhikhapare siddhikhapare commented Apr 11, 2024

Related Issue(s)

Partially addresses #950

Description

Added a Chainsaw test for the copy-namespace-labels policy to verify that the policy works for both the within-ns and outside-ns scenarios, with an owner label and multiple additional labels on the Namespace.
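The test layout is roughly as follows; this is a sketch of the Chainsaw test structure, and the step and file names shown here are illustrative rather than the exact files in this PR:

```yaml
# Sketch of a Chainsaw test for the policy (step and file names are illustrative).
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: copy-namespace-labels
spec:
  steps:
  - name: install-policy
    try:
    # Install the policy and wait for it to report Ready.
    - apply:
        file: ../copy-namespace-labels.yaml
    - assert:
        file: policy-ready.yaml
  - name: create-and-verify
    try:
    # Create the labeled Namespace and a Deployment, then assert on the mutated result.
    - apply:
        file: resource.yaml
    - assert:
        file: mutated-deployment.yaml
```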

Checklist

  • I have read the policy contribution guidelines.
  • I have added test manifests and resources covering both positive and negative tests that prove this policy works as intended.
  • [ ] I have added the artifacthub-pkg.yml file and have verified it is complete and correct.

@siddhikhapare
Contributor Author

@chipzoller Could you help me resolve these test failures?

@chipzoller
Contributor

Sure. Forget the failures for right now. I see you are missing some fundamental points of what this policy does.
Hint: Look at what the policy matches on versus what you're testing. Do they all make sense?

@siddhikhapare
Contributor Author

siddhikhapare commented Apr 12, 2024

> Sure. Forget the failures for right now. I see you are missing some fundamental points of what this policy does. Hint: Look at what the policy matches on versus what you're testing. Do they all make sense?

Thank you for your helpful response. As per my understanding, in order to validate both conditions of the policy simultaneously, I have to cover both rules with a single resource, but I created separate resources for each rule of the policy. I realize now that I was mistaken here. Could you please share your thoughts on this?

@chipzoller
Contributor

Not sure what you're saying. Look at my hint, then look at the policy match block, then look at all the resources you're trying to create as part of the test. Something is off there. What is it?

@siddhikhapare
Contributor Author

> Not sure what you're saying. Look at my hint, then look at the policy match block, then look at all the resources you're trying to create as part of the test. Something is off there. What is it?

The Pod resource kind is not included in the policy's match block, so the rules will only apply to Deployment resources, which create and manage the associated Pods.
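For reference, the match block has roughly this shape (a fragment from within a rule; only Deployment is listed, not Pod):

```yaml
# Approximate shape of the rule's match block; Pod is not among the matched kinds.
match:
  any:
  - resources:
      kinds:
      - Deployment
```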

@siddhikhapare
Contributor Author

@chipzoller I will update it.

@chipzoller
Contributor

Yes, correct. Fix that first then let's look.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

Should I use a ClusterRole to grant permissions for this policy here?

@chipzoller
Contributor

There should be no need to grant permissions here.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

siddhikhapare commented Apr 18, 2024

@chipzoller I've noticed that the policy takes more time to initialize. I tried increasing the sleep duration in the chainsaw-test.yaml file, but I still get the same error about the policy status being null, which I mentioned here. I suspect my system's resources are responsible for the time it takes to run this policy. I also found this error on the Kyverno Playground while running the policy: `ServerError: Variable owner is not defined in the context`, even though I provided the Namespace resource definition along with the Deployment resource. Could you please help me understand what's causing the issue here?
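For reference, instead of increasing the sleep, I understand the usual pattern is to assert that the ClusterPolicy reports Ready before creating the test resources; a rough sketch, where the condition fields reflect the common Kyverno pattern rather than this exact test:

```yaml
# Rough readiness assertion for the ClusterPolicy; condition fields follow the usual
# Kyverno pattern and may need adjusting for this exact test.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: copy-namespace-labels
status:
  conditions:
  - reason: Succeeded
    status: "True"
    type: Ready
```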

@siddhikhapare
Contributor Author

I believe the owner label is not a static value, so we do not have to specify it in values.yaml.
The owner label value is fetched dynamically from the Namespace's metadata labels using the apiCall and jmesPath in the policy's context.
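Roughly, the context entry looks like this; the variable name and JMESPath here are my approximation, not a verbatim copy of the policy:

```yaml
# Approximate context entry; the exact variable name and JMESPath may differ in the policy.
context:
- name: owner
  apiCall:
    urlPath: "/api/v1/namespaces/{{request.namespace}}"
    jmesPath: "metadata.labels.owner || ''"
```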

Contributor

@chipzoller chipzoller left a comment


Please look at the failed test log for your error. You can and should also run this locally on your machine and you will see the same thing. You should always be running tests locally before pushing changes in a PR.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

siddhikhapare commented Apr 21, 2024

> Please look at the failed test log for your error. You can and should also run this locally on your machine and you will see the same thing. You should always be running tests locally before pushing changes in a PR.

I have attached the passing test output here, generated with Chainsaw.

@siddhikhapare
Contributor Author

@chipzoller I tested the files locally and all tests passed as expected, but here the tests fail.

@chipzoller
Contributor

Look at the logged output in CI tests here and see why they fail.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

siddhikhapare commented Apr 22, 2024

The bad-deployment-with-ouside-ns resource has the kubernetes.io/metadata.name label. According to the policy, this label will not be copied to the Deployment. However, other labels from the Namespace should be copied to the Deployments. Therefore, this resource should fail because I didn't copy the owner label from the Namespace to the Deployment. It failed locally once because the API call takes time to run, but it also passed once, so I removed that resource.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

I think the tests were failing because of the creationTimestamp field, so I removed it. But when I ran the tests locally, I found that they all passed whether I included that field or not. You can see the results in the attached output.

Signed-off-by: siddhikhapare <[email protected]>
@chipzoller
Contributor

You still don't seem to understand what this policy is about and how to test it. This is a mutation policy with two rules. There is no "bad" resource and no failure should occur. The test is about asserting on what the resulting mutation should look like, for both rules, in different scenarios with applicable resources.

@siddhikhapare
Contributor Author

> You still don't seem to understand what this policy is about and how to test it. This is a mutation policy with two rules. There is no "bad" resource and no failure should occur. The test is about asserting on what the resulting mutation should look like, for both rules, in different scenarios with applicable resources.

I found an internal error today: a request failed when calling the webhook "mutate-policy.kyverno.svc". However, according to the note in Kyverno's documentation (https://kyverno.io/docs/writing-policies/mutate/), Kubernetes disallows changes to certain fields in resources, including name, namespace, uid, kind, and apiVersion, so you cannot use a Kyverno policy to affect any of these fields either during admission or once a resource has been persisted. I thought I needed to change the creationTimestamp field, which indicates when an object was created. The Kubernetes documentation (https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement) also mentions that most of the metadata about a Pod is immutable: you cannot change the namespace, name, uid, or creationTimestamp fields, and the generation field only accepts updates that increment its current value. This means these fields can't be modified once they're set.

I did not understand that error correctly, so thanks for the clarification.

@chipzoller
Contributor

None of what you're saying matters in this case. A mutation during admission is not subject to these limitations since there is no "existing" resource. New resources being created are mutated before being persisted. All that's needed is to check the resource after it is created.
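In other words, an assert step against the post-mutation Deployment is enough; something like the following, where the resource name and label value are illustrative and an assert file only needs the fields being checked:

```yaml
# Illustrative assert file: only the fields being verified need to be present.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: within-ns
  labels:
    owner: test-owner
```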

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

All tests passed, as you explained, @chipzoller. Thanks for your guidance.

Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

siddhikhapare commented Apr 27, 2024

I think this test might be showing an issue with a resource timeout. Could you please share your insights on this? Should I use replicas: 0 for a few resources so they don't create Pods, in case the cluster has a limit on how many Pods it can create?
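For example, a Deployment scaled to zero would still be admitted (and mutated) without scheduling any Pods; a sketch with illustrative names:

```yaml
# Illustrative Deployment with replicas: 0 so no Pods are scheduled,
# while the Deployment itself is still admitted and mutated by the policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: within-ns
spec:
  replicas: 0
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: busybox
        image: busybox:1.36
        command: ["sleep", "3600"]
```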

Contributor

@chipzoller chipzoller left a comment


Please also use an error step to fail if the kubernetes.io/metadata.name label is found on at least one Deployment.

other/copy-namespace-labels/.chainsaw-test/resource.yaml (outdated review thread, resolved)
@siddhikhapare
Contributor Author

siddhikhapare commented May 8, 2024

@chipzoller Sorry for the delay in resolving the issue; I was away. In the test here I had checked the kubernetes.io/metadata.name label for only one resource. I have now also checked all the resource definitions for that label, and they should produce an error. For this I got the following output:

```yaml
- error:
    resource:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          kubernetes.io/metadata.name: within-ns
        namespace: within-ns
```

@chipzoller
Contributor

So are you saying the previous comment is resolved? Ready for review again?

Also, in the future, there's no need to take a screenshot of terminal output. Please just paste the text into a code block here so it's easy to follow the path of communication.

@siddhikhapare
Contributor Author

> kubernetes.io/metadata.name

Yes, I believe all tests passed with the change you suggested adding.

@siddhikhapare
Contributor Author

siddhikhapare commented May 8, 2024

I have added the whole resource definition to the test file. If the kubernetes.io/metadata.name label is found with its value set to the Namespace name, then that resource should produce an error before the policy is applied, because after the policy is applied that label should be excluded from the resource.

@JimBugwadia
Member

@siddhikhapare - has this requested change been addressed?

#967 (review)

@siddhikhapare
Contributor Author

@chipzoller I got a policy lint error, but all my tests passed locally.

Signed-off-by: siddhikhapare <[email protected]>
Signed-off-by: siddhikhapare <[email protected]>
@siddhikhapare
Contributor Author

@chipzoller Should I update the digest hash string in the artifacthub-pkg.yml file? I believe it should be updated when the policy definition changes. I'm not sure what the exact lint error is. Could you help me understand it better?

@siddhikhapare
Contributor Author

siddhikhapare commented May 26, 2024

While testing, I found this error related to the description `Use with "kasten-validate-ns-by-preset-label" policy to require "dataprotection" labeling on new namespaces.`, which appears in the diff for kasten/kasten-generate-policy-by-preset-label/artifacthub-pkg.yml (diff --git a/kasten/kasten-generate-policy-by-preset-label/artifacthub-pkg.yml b/kasten/kasten-generate-policy-by-preset-label/artifacthub-pkg.yml).

@chipzoller
Contributor

Sent #1036 to fix.

Contributor

@chipzoller chipzoller left a comment


Nice work, thanks.

@chipzoller chipzoller merged commit a664743 into kyverno:main May 28, 2024
163 of 164 checks passed