diff --git a/README.md b/README.md index d56deb06..1d6f1b97 100644 --- a/README.md +++ b/README.md @@ -178,30 +178,95 @@ For User-Defined Network (UDN) L3 segmentation testing. It creates two deploymen ## Network Policy workloads -With the help of [networkpolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) object we can control traffic flow at the IP address or port level in Kubernetes. A networkpolicy can come in various shapes and sizes. Allow traffic from a specific namespace, Deny traffic from a specific pod IP, Deny all traffic, etc. Hence we have come up with a few test cases which try to cover most of them. They are as follows. +Network policy scale testing tooling involves three components: +1. A template that covers all network policy configuration options +2. Latency measurement through connection testing +3. Flow tracking through the convergence tracker -### networkpolicy-multitenant +A network policy defines the rules for ingress and egress traffic between pods in local and remote namespaces. These remote namespace addresses can be configured using a combination of namespace and pod selectors, CIDRs, ports, and port ranges. Given that network policies offer a wide variety of configuration options, we developed a unified template that incorporates all these configuration parameters. Users can specify the desired count for each option. -- 500 namespaces -- 20 pods in each namespace. 
Each pod acts as a server and a client -- Default deny networkpolicy is applied first that blocks traffic to any test namespace -- 3 network policies in each namespace that allows traffic from the same namespace and two other namespaces using namespace selectors +```yaml +spec: + podSelector: + matchExpressions: + - key: num + operator: In + values: + - "1" + - "2" + ingress: + - from: + - namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: In + values: + - network-policy-perf-13 + - network-policy-perf-14 + podSelector: + matchExpressions: + - key: num + operator: In + values: + - "1" + - "2" + ports: + - port: 8080 + protocol: TCP + +``` -### networkpolicy-matchlabels +### Scale Testing and Unique ACL Flows +In our scale tests, we aim to create between 10 and 100 network policies within a single namespace. The primary focus is on preventing duplicate configuration options, which ensures that each network policy generates unique Access Control List (ACL) flows. To achieve this, we carefully designed our templating approach based on the following considerations: + +**Round-Robin Assignment:** We use a round-robin strategy to distribute +1. remote namespaces among ingress and egress rules across kube-burner job iterations +2. remote namespaces among ingress and egress rules within the same kube-burner job iteration + +This ensures that we don’t overuse the same remote namespaces in a single iteration or across multiple iterations. For instance, if namespace-1 uses namespace-2 and namespace-3 as its remote namespaces, then namespace-2 will start using namespace-4 and namespace-5 as remote namespaces in the next iteration. + +**Unique Namespace and Pod Combinations:** To avoid redundant flows, the templating system generates unique combinations of remote namespaces and pods for each network policy. 
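The round-robin distribution described above can be sketched in Go. This is an illustrative model only, not the actual kube-burner template arithmetic: the function name `remotePeerNamespaces` and the exact index formula are assumptions for demonstration, while the real templates derive indices from `Replica`, `pod_selectors`, and `peer_namespaces`.

```go
package main

import "fmt"

// remotePeerNamespaces models the round-robin assignment of remote
// namespaces: each job iteration advances the starting peer index so that
// consecutive iterations use disjoint blocks of remote namespaces,
// wrapping around once the namespace count is exhausted.
// NOTE: illustrative sketch only, not the real template logic.
func remotePeerNamespaces(iteration, peersPerPolicy, totalNamespaces int) []string {
	peers := make([]string, 0, peersPerPolicy)
	start := iteration*peersPerPolicy + 1 // offset by one to skip the local namespace
	for i := 0; i < peersPerPolicy; i++ {
		peers = append(peers, fmt.Sprintf("network-policy-perf-%d", (start+i)%totalNamespaces))
	}
	return peers
}

func main() {
	// iteration 0 peers with namespaces 1 and 2, iteration 1 with 3 and 4, and so on.
	for it := 0; it < 3; it++ {
		fmt.Printf("iteration %d -> %v\n", it, remotePeerNamespaces(it, 2, 10))
	}
}
```

With 2 peers per policy and 10 namespaces, iteration 0 is assigned `network-policy-perf-1` and `network-policy-perf-2`, iteration 1 gets `network-policy-perf-3` and `network-policy-perf-4`, matching the rotation pattern in the example above.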
Initially, we iterate through the list of remote namespaces, and once all remote namespaces are exhausted, we move on to iterate through the remote pods. This method ensures that every network policy within a namespace is assigned a distinct combination of remote namespaces and remote pods, avoiding duplicate pairs. + +**Templating Logic** +Our templating logic is implemented as follows: +```go +// Iterate over the list of namespaces to configure network policies. +for _, namespace := range namespaces { + + // Each network policy uses a combination of a remote namespace and a remote pod to allow traffic. + for _, networkPolicy := range networkPolicies { + + /* + Iterate through the list of remote pods. Once all remote namespaces are exhausted, + continue iterating through the remote pods to ensure unique namespace/pod combinations. + */ + for i, remotePod := range remotePods { + // Stop when we reach the maximum number of remote pods allowed. + if i == num_remote_pods { + break + } + + // Iterate through the list of remote namespaces to pair with the remote pod. + for idx, remoteNamespace := range remoteNamespaces { + // Stop iterating once we’ve exhausted the allowed number of remote namespaces. + if idx == num_remote_namespace { + break + } + + // Combine the remote namespace and pod into a unique pair for ACL configuration. + combine := fmt.Sprintf("%s:%s", remoteNamespace, remotePod) + } + } + } +} -- 5 namespaces -- 100 pods in each namespace. Each pod acts as a server and a client -- Each pod with 2 labels and each label shared is by 5 pods -- Default deny networkpolicy is applied first -- Then for each unique label in a namespace we have a networkpolicy with that label as a podSelector which allows traffic from pods with some other randomly selected label. 
This translates to 40 networkpolicies/namespace +``` -### networkpolicy-matchexpressions +**CIDRs and Port Ranges** +We apply the same round-robin and unique combination logic to CIDRs and port ranges, ensuring that these options are not reused in network policies within the same namespace. -- 5 namespaces -- 25 pods in each namespace. Each pod acts as a server and a client -- Each pod with 2 labels and each label shared is by 5 pods -- Default deny networkpolicy is applied first -- Then for each unique label in a namespace we have a networkpolicy with that label as a podSelector which allows traffic from pods which *don't* have some other randomly-selected label. This translates to 10 networkpolicies/namespace +**Connection Testing Support** +kube-burner measures network policy latency through connection testing. Currently, all pods are configured to listen on port 8080. As a result, client pods will send requests to port 8080 during testing. ## EgressIP workloads diff --git a/cmd/config/network-policy/egress-np.yml b/cmd/config/network-policy/egress-np.yml new file mode 100644 index 00000000..f322393f --- /dev/null +++ b/cmd/config/network-policy/egress-np.yml @@ -0,0 +1,95 @@ +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: egress-{{.Iteration}}-{{.Replica}} +spec: + {{- $lpList := list }} + {{- range $lp, $e := until $.local_pods }} + {{- $nextPod := add 1 $lp }} + {{- $lps := (toString $nextPod)}} + {{- $lpList = append $lpList $lps }} + {{- end }} + {{- $lpNames := toJson $lpList }} + podSelector: + matchExpressions: + - key: num + operator: In + values: {{$lpNames}} + egress: + {{- $peerPodIdx := mul $.Replica .pod_selectors .peer_namespaces}} + {{- $peerPodIdx = div (sub $peerPodIdx 1) $.namespaces}} + {{- $peerPodIdx = mul $peerPodIdx .peer_pods}} + {{- $peerPodList := list }} + {{- range $lp, $e := until $.peer_pods }} + {{- $nextPod := add 1 $lp $peerPodIdx }} + {{- if gt $nextPod $.pods_per_namespace }} + {{- $nextPod = add (mod 
$nextPod $.pods_per_namespace) 1 }} + {{- end }} + {{- $lps := (toString $nextPod)}} + {{- $peerPodList = append $peerPodList $lps }} + {{- end }} + {{- $peerPodNames := toJson $peerPodList }} + {{- $peerNsIdx := add (mul $.Iteration .pod_selectors .netpols_per_namespace .peer_namespaces) (mul (sub $.Replica 1) .pod_selectors .peer_namespaces) 1}} + {{- range $ps, $e := until $.pod_selectors }} + {{- $nsStart := add $peerNsIdx (mul $ps $.peer_namespaces) }} + {{- $nsList := list }} + {{- range $i, $v := until $.peer_namespaces }} + {{- $nextNs := add $nsStart $i }} + {{- if ge $nextNs $.namespaces }} + {{- $nextNs = mod $nextNs $.namespaces }} + {{- end }} + {{- $next_namespace := print "network-policy-perf-" $nextNs }} + {{- $nsList = append $nsList $next_namespace }} + {{- end }} + {{- $nsNames := toJson $nsList }} + - to: + - namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: In + values: {{$nsNames}} + podSelector: + matchExpressions: + - key: num + operator: In + values: {{$peerPodNames}} + ports: + {{- $single_port := 8079 }} + {{- range $i, $e := until $.single_ports }} + {{- $single_port = add $single_port 1 }} + - protocol: TCP + port: {{$single_port}} + {{- end }} + {{- $rangeStart := 5000 }} + {{- range $i, $e := until $.port_ranges }} + {{- $rangeEnd := add $rangeStart 5 }} + - protocol: TCP + port: {{$rangeStart}} + endPort: {{$rangeEnd}} + {{ $rangeStart = add $rangeStart 10}} + {{- end }} + {{- end }} + {{- $subnetStartIdx := add (mul $.Replica $.cidr_rules) 1 }} + {{- range $i, $e := until .cidr_rules }} + {{- $subnetIdx := add $subnetStartIdx $i }} + - to: + - ipBlock: + cidr: {{GetSubnet24 (int $subnetIdx) }} + ports: + {{- $single_port := 1000 }} + {{- range $i, $e := until $.single_ports }} + {{- $single_port = add $single_port 1 }} + - protocol: TCP + port: {{$single_port}} + {{- end }} + {{- $rangeStart := 5000 }} + {{- range $i, $e := until $.port_ranges }} + {{- $rangeEnd := add $rangeStart 5 }} + - 
protocol: TCP + port: {{$rangeStart}} + endPort: {{$rangeEnd}} + {{ $rangeStart = add $rangeStart 10}} + {{- end }} + {{- end }} + policyTypes: + - Egress diff --git a/cmd/config/network-policy/ingress-np.yml b/cmd/config/network-policy/ingress-np.yml new file mode 100644 index 00000000..19f050c1 --- /dev/null +++ b/cmd/config/network-policy/ingress-np.yml @@ -0,0 +1,95 @@ +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: ingress-{{.Iteration}}-{{.Replica}} +spec: + {{- $lpList := list }} + {{- range $lp, $e := until $.local_pods }} + {{- $nextPod := add 1 $lp }} + {{- $lps := (toString $nextPod)}} + {{- $lpList = append $lpList $lps }} + {{- end }} + {{- $lpNames := toJson $lpList }} + podSelector: + matchExpressions: + - key: num + operator: In + values: {{$lpNames}} + ingress: + {{- $peerPodIdx := mul $.Replica .pod_selectors .peer_namespaces}} + {{- $peerPodIdx = div (sub $peerPodIdx 1) $.namespaces}} + {{- $peerPodIdx = mul $peerPodIdx .peer_pods}} + {{- $peerPodList := list }} + {{- range $lp, $e := until $.peer_pods }} + {{- $nextPod := add 1 $lp $peerPodIdx }} + {{- if gt $nextPod $.pods_per_namespace }} + {{- $nextPod = add (mod $nextPod $.pods_per_namespace) 1 }} + {{- end }} + {{- $lps := (toString $nextPod)}} + {{- $peerPodList = append $peerPodList $lps }} + {{- end }} + {{- $peerPodNames := toJson $peerPodList }} + {{- $peerNsIdx := add (mul $.Iteration .pod_selectors .netpols_per_namespace .peer_namespaces) (mul (sub $.Replica 1) .pod_selectors .peer_namespaces) 1}} + {{- range $ps, $e := until $.pod_selectors }} + {{- $nsStart := add $peerNsIdx (mul $ps $.peer_namespaces) }} + {{- $nsList := list }} + {{- range $i, $v := until $.peer_namespaces }} + {{- $nextNs := add $nsStart $i }} + {{- if ge $nextNs $.namespaces }} + {{- $nextNs = mod $nextNs $.namespaces }} + {{- end }} + {{- $next_namespace := print "network-policy-perf-" $nextNs }} + {{- $nsList = append $nsList $next_namespace }} + {{- end }} + {{- $nsNames := toJson 
$nsList }} + - from: + - namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: In + values: {{$nsNames}} + podSelector: + matchExpressions: + - key: num + operator: In + values: {{$peerPodNames}} + ports: + {{- $single_port := 8079 }} + {{- range $i, $e := until $.single_ports }} + {{- $single_port = add $single_port 1 }} + - protocol: TCP + port: {{$single_port}} + {{- end }} + {{- $rangeStart := 5000 }} + {{- range $i, $e := until $.port_ranges }} + {{- $rangeEnd := add $rangeStart 5 }} + - protocol: TCP + port: {{$rangeStart}} + endPort: {{$rangeEnd}} + {{ $rangeStart = add $rangeStart 10}} + {{- end }} + {{- end }} + {{- $subnetStartIdx := add (mul $.Replica $.cidr_rules) 1 }} + {{- range $i, $e := until .cidr_rules }} + {{- $subnetIdx := add $subnetStartIdx $i }} + - from: + - ipBlock: + cidr: {{GetSubnet24 (int $subnetIdx) }} + ports: + {{- $single_port := 1000 }} + {{- range $i, $e := until $.single_ports }} + {{- $single_port = add $single_port 1 }} + - protocol: TCP + port: {{$single_port}} + {{- end }} + {{- $rangeStart := 5000 }} + {{- range $i, $e := until $.port_ranges }} + {{- $rangeEnd := add $rangeStart 5 }} + - protocol: TCP + port: {{$rangeStart}} + endPort: {{$rangeEnd}} + {{ $rangeStart = add $rangeStart 10}} + {{- end }} + {{- end }} + policyTypes: + - Ingress diff --git a/cmd/config/network-policy/network-policy.yml b/cmd/config/network-policy/network-policy.yml new file mode 100644 index 00000000..d42818d9 --- /dev/null +++ b/cmd/config/network-policy/network-policy.yml @@ -0,0 +1,92 @@ +--- +global: + gc: true + gcMetrics: false + measurements: +{{ if eq .NETPOL_LATENCY "true" }} + - name: netpolLatency + config: + netpolTimeout: 10s + skipPodWait: true + proxyRoute: {{.NETWORK_POLICY_PROXY_ROUTE}} +{{ end }} +metricsEndpoints: + - metrics: [{{.METRICS}}] + alerts: [{{.ALERTS}}] + indexer: + type: local + metricsDirectory: collected-metrics-{{.UUID}} + +jobs: + - name: network-policy-perf-pods + namespace: 
network-policy-perf + jobIterations: {{.JOB_ITERATIONS}} + qps: {{.QPS}} + burst: {{.BURST}} + namespacedIterations: true + podWait: false + waitWhenFinished: true + preLoadImages: false + preLoadPeriod: 1s + jobPause: 15s + skipIndexing: true + namespaceLabels: + kube-burner.io/skip-networkpolicy-latency: true + security.openshift.io/scc.podSecurityLabelSync: false + pod-security.kubernetes.io/enforce: privileged + pod-security.kubernetes.io/audit: privileged + pod-security.kubernetes.io/warn: privileged + objects: + - objectTemplate: pod.yml + replicas: {{.PODS_PER_NAMESPACE}} + + - objectTemplate: np-deny-all.yml + replicas: 1 + + - objectTemplate: np-allow-from-proxy.yml + replicas: 1 + + - name: network-policy-perf + namespace: network-policy-perf + jobIterations: {{.JOB_ITERATIONS}} + qps: {{.QPS}} + burst: {{.BURST}} + namespacedIterations: true + podWait: false + waitWhenFinished: true + preLoadImages: false + preLoadPeriod: 15s + jobPause: 1m + cleanup: false + namespaceLabels: + security.openshift.io/scc.podSecurityLabelSync: false + pod-security.kubernetes.io/enforce: privileged + pod-security.kubernetes.io/audit: privileged + pod-security.kubernetes.io/warn: privileged + objects: + - objectTemplate: ingress-np.yml + replicas: {{.NETPOLS_PER_NAMESPACE}} + inputVars: + namespaces: {{.JOB_ITERATIONS}} + pods_per_namespace: {{.PODS_PER_NAMESPACE}} + netpols_per_namespace: {{.NETPOLS_PER_NAMESPACE}} + local_pods: {{.LOCAL_PODS}} + pod_selectors: {{.POD_SELECTORS}} + single_ports: {{.SINGLE_PORTS}} + port_ranges: {{.PORT_RANGES}} + peer_namespaces: {{.REMOTE_NAMESPACES}} + peer_pods: {{.REMOTE_PODS}} + cidr_rules: {{.CIDRS}} + - objectTemplate: egress-np.yml + replicas: {{.NETPOLS_PER_NAMESPACE}} + inputVars: + namespaces: {{.JOB_ITERATIONS}} + pods_per_namespace: {{.PODS_PER_NAMESPACE}} + netpols_per_namespace: {{.NETPOLS_PER_NAMESPACE}} + local_pods: {{.LOCAL_PODS}} + pod_selectors: {{.POD_SELECTORS}} + single_ports: {{.SINGLE_PORTS}} + port_ranges: 
{{.PORT_RANGES}} + peer_namespaces: {{.REMOTE_NAMESPACES}} + peer_pods: {{.REMOTE_PODS}} + cidr_rules: {{.CIDRS}} diff --git a/cmd/config/network-policy/np-allow-from-proxy.yml b/cmd/config/network-policy/np-allow-from-proxy.yml new file mode 100644 index 00000000..dd4c96bb --- /dev/null +++ b/cmd/config/network-policy/np-allow-from-proxy.yml @@ -0,0 +1,16 @@ +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-from-proxy +spec: + ingress: + - from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: network-policy-proxy + podSelector: + matchLabels: + app: network-policy-proxy + ports: + - protocol: TCP + port: 9001 diff --git a/cmd/config/network-policy/np-deny-all.yml b/cmd/config/network-policy/np-deny-all.yml new file mode 100644 index 00000000..e5a9a99d --- /dev/null +++ b/cmd/config/network-policy/np-deny-all.yml @@ -0,0 +1,7 @@ +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: deny-all +spec: + podSelector: {} + ingress: [] diff --git a/cmd/config/network-policy/pod.yml b/cmd/config/network-policy/pod.yml new file mode 100644 index 00000000..9f3196a2 --- /dev/null +++ b/cmd/config/network-policy/pod.yml @@ -0,0 +1,32 @@ +apiVersion: v1 +kind: Pod +metadata: + name: test-pod-{{.Replica}} + labels: + num: "{{.Replica}}" +spec: + containers: + - name: webserver + image: quay.io/cloud-bulldozer/sampleapp:latest + resources: + requests: + memory: "10Mi" + cpu: "10m" + imagePullPolicy: Always + ports: + - containerPort: 8080 + protocol: TCP + securityContext: + privileged: false + - name: curlapp + image: quay.io/cloud-bulldozer/netpolvalidator:latest + resources: + requests: + memory: "10Mi" + cpu: "10m" + imagePullPolicy: Always + ports: + - containerPort: 9001 + protocol: TCP + securityContext: + privileged: false diff --git a/cmd/ocp.go b/cmd/ocp.go index 8bcc9247..554b6ede 100644 --- a/cmd/ocp.go +++ b/cmd/ocp.go @@ -107,9 +107,10 @@ func openShiftCmd() *cobra.Command { 
ocp.NewClusterDensity(&wh, "cluster-density-v2"), ocp.NewClusterDensity(&wh, "cluster-density-ms"), ocp.NewCrdScale(&wh), - ocp.NewNetworkPolicy(&wh, "networkpolicy-multitenant"), - ocp.NewNetworkPolicy(&wh, "networkpolicy-matchlabels"), - ocp.NewNetworkPolicy(&wh, "networkpolicy-matchexpressions"), + ocp.NewNetworkPolicy(&wh, "network-policy"), + ocp.NewNetworkPolicyLegacy(&wh, "networkpolicy-multitenant"), + ocp.NewNetworkPolicyLegacy(&wh, "networkpolicy-matchlabels"), + ocp.NewNetworkPolicyLegacy(&wh, "networkpolicy-matchexpressions"), ocp.NewNodeDensity(&wh), ocp.NewNodeDensityHeavy(&wh), ocp.NewNodeDensityCNI(&wh), diff --git a/network-policy.go b/network-policy.go new file mode 100644 index 00000000..b8d557da --- /dev/null +++ b/network-policy.go @@ -0,0 +1,201 @@ +// Copyright 2022 The Kube-burner Authors. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package ocp + +import ( + "context" + "fmt" + "os" + "strconv" + "time" + + "github.com/kube-burner/kube-burner/pkg/config" + kutil "github.com/kube-burner/kube-burner/pkg/util" + "github.com/kube-burner/kube-burner/pkg/workloads" + routev1 "github.com/openshift/api/route/v1" + log "github.com/sirupsen/logrus" + "github.com/spf13/cobra" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + "k8s.io/utils/ptr" + + openshiftrouteclientset "github.com/openshift/client-go/route/clientset/versioned" +) + +var networkPolicyProxyPort int32 = 9002 +var networkPolicyProxy = "network-policy-proxy" +var networkPolicyProxyLabel = map[string]string{"app": networkPolicyProxy} +var networkPolicyProxyRouteName string + +var networkPolicyProxyPod = &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: networkPolicyProxy, + Namespace: networkPolicyProxy, + Labels: networkPolicyProxyLabel, + }, + Spec: corev1.PodSpec{ + TerminationGracePeriodSeconds: ptr.To[int64](0), + Containers: []corev1.Container{ + { + Image: "quay.io/cloud-bulldozer/netpolproxy:latest", + Name: networkPolicyProxy, + ImagePullPolicy: corev1.PullAlways, + SecurityContext: &corev1.SecurityContext{ + AllowPrivilegeEscalation: ptr.To[bool](false), + Capabilities: &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}}, + RunAsNonRoot: ptr.To[bool](true), + SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault}, + RunAsUser: ptr.To[int64](1000), + }, + }, + }, + }, +} + +var networkPolicyProxySvc = &corev1.Service{ + ObjectMeta: metav1.ObjectMeta{ + Name: networkPolicyProxy, + Namespace: networkPolicyProxy, + Labels: networkPolicyProxyLabel, + }, + Spec: corev1.ServiceSpec{ + Ports: []corev1.ServicePort{ + { + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.Parse(fmt.Sprintf("%d", 
networkPolicyProxyPort)), + Port: 80, + Name: "http", + }, + }, + Type: corev1.ServiceType("ClusterIP"), + Selector: networkPolicyProxyLabel, + }, +} + +var networkPolicyProxyRoute = routev1.Route{ + ObjectMeta: metav1.ObjectMeta{ + Name: networkPolicyProxy, + Labels: networkPolicyProxyLabel, + }, + Spec: routev1.RouteSpec{ + Port: &routev1.RoutePort{TargetPort: intstr.FromString("http")}, + To: routev1.RouteTargetReference{ + Name: networkPolicyProxy, + }, + }, +} + +// deployAssets creates the proxy namespace, pod, service, and route +func deployAssets(uuid string, clientSet kubernetes.Interface, restConfig *rest.Config) error { + var err error + orClientSet := openshiftrouteclientset.NewForConfigOrDie(restConfig) + nsLabels := map[string]string{"kube-burner-uuid": uuid} + if err = kutil.CreateNamespace(clientSet, networkPolicyProxy, nsLabels, nil); err != nil { + return err + } + if _, err = clientSet.CoreV1().Pods(networkPolicyProxy).Create(context.TODO(), networkPolicyProxyPod, metav1.CreateOptions{}); err != nil { + if errors.IsAlreadyExists(err) { + log.Warn(err) + } else { + return err + } + } + err = wait.PollUntilContextCancel(context.TODO(), 100*time.Millisecond, true, func(ctx context.Context) (done bool, err error) { + pod, err := clientSet.CoreV1().Pods(networkPolicyProxy).Get(context.TODO(), networkPolicyProxy, metav1.GetOptions{}) + if err != nil { + return true, err + } + if pod.Status.Phase != corev1.PodRunning { + return false, nil + } + return true, nil + }) + if err != nil { + return err + } + if _, err = clientSet.CoreV1().Services(networkPolicyProxy).Create(context.TODO(), networkPolicyProxySvc, metav1.CreateOptions{}); err != nil && !errors.IsAlreadyExists(err) { + return err + } + r, err := orClientSet.RouteV1().Routes(networkPolicyProxy).Create(context.TODO(), &networkPolicyProxyRoute, metav1.CreateOptions{}) + if errors.IsAlreadyExists(err) { + // The route already exists, so fetch it to record its host. + r, err = orClientSet.RouteV1().Routes(networkPolicyProxy).Get(context.TODO(), networkPolicyProxy, metav1.GetOptions{}) + } + if err != nil { + return err + } + networkPolicyProxyRouteName = r.Spec.Host + return nil +} + +// NewNetworkPolicy holds network-policy workload +func 
NewNetworkPolicy(wh *workloads.WorkloadHelper, variant string) *cobra.Command { + var iterations, podsPerNamespace, netpolPerNamespace, localPods, podSelectors, singlePorts, portRanges, remoteNamespaces, remotePods, cidrs int + var netpolLatency bool + var rc int + + kubeClientProvider := config.NewKubeClientProvider("", "") + clientSet, restConfig := kubeClientProvider.ClientSet(0, 0) + // log.Fatal already exits with a non-zero status, so no explicit os.Exit is needed. + if err := deployAssets(wh.Config.UUID, clientSet, restConfig); err != nil { + log.Fatal("Error: ", err) + } + cmd := &cobra.Command{ + Use: variant, + Short: fmt.Sprintf("Runs %v workload", variant), + PreRun: func(cmd *cobra.Command, args []string) { + os.Setenv("JOB_ITERATIONS", fmt.Sprint(iterations)) + os.Setenv("PODS_PER_NAMESPACE", fmt.Sprint(podsPerNamespace)) + os.Setenv("NETPOLS_PER_NAMESPACE", fmt.Sprint(netpolPerNamespace)) + os.Setenv("LOCAL_PODS", fmt.Sprint(localPods)) + os.Setenv("POD_SELECTORS", fmt.Sprint(podSelectors)) + os.Setenv("SINGLE_PORTS", fmt.Sprint(singlePorts)) + os.Setenv("PORT_RANGES", fmt.Sprint(portRanges)) + os.Setenv("REMOTE_NAMESPACES", fmt.Sprint(remoteNamespaces)) + os.Setenv("REMOTE_PODS", fmt.Sprint(remotePods)) + os.Setenv("CIDRS", fmt.Sprint(cidrs)) + os.Setenv("NETPOL_LATENCY", strconv.FormatBool(netpolLatency)) + os.Setenv("NETWORK_POLICY_PROXY_ROUTE", networkPolicyProxyRouteName) + }, + Run: func(cmd *cobra.Command, args []string) { + setMetrics(cmd, "metrics-aggregated.yml") + rc = wh.Run(cmd.Name()) + }, + PostRun: func(cmd *cobra.Command, args []string) { + log.Info("Deleting namespace ", networkPolicyProxy) + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) + defer cancel() + labelSelector := fmt.Sprintf("kubernetes.io/metadata.name=%s", networkPolicyProxy) + kutil.CleanupNamespaces(ctx, clientSet, labelSelector) + log.Info("👋 Exiting kube-burner ", wh.Config.UUID) + os.Exit(rc) + }, + } + cmd.Flags().IntVar(&iterations, "iterations", 10, fmt.Sprintf("%v iterations", variant)) + 
cmd.Flags().IntVar(&podsPerNamespace, "pods-per-namespace", 10, "Number of pods created in a namespace") + cmd.Flags().IntVar(&netpolPerNamespace, "netpol-per-namespace", 10, "Number of network policies created in a namespace") + cmd.Flags().IntVar(&localPods, "local-pods", 2, "Number of pods on the local namespace to receive traffic from remote namespace pods") + cmd.Flags().IntVar(&podSelectors, "pod-selectors", 1, "Number of pod and namespace selectors to be used in ingress and egress rules") + cmd.Flags().IntVar(&singlePorts, "single-ports", 2, "Number of TCP ports to be used in ingress and egress rules") + cmd.Flags().IntVar(&portRanges, "port-ranges", 2, "Number of TCP port ranges to be used in ingress and egress rules") + cmd.Flags().IntVar(&remoteNamespaces, "remotes-namespaces", 2, "Number of remote namespaces to accept traffic from or send traffic to in ingress and egress rules") + cmd.Flags().IntVar(&remotePods, "remotes-pods", 2, "Number of pods in remote namespaces to accept traffic from or send traffic to in ingress and egress rules") + cmd.Flags().IntVar(&cidrs, "cidrs", 2, "Number of CIDRs to accept traffic from or send traffic to in ingress and egress rules") + cmd.Flags().BoolVar(&netpolLatency, "networkpolicy-latency", true, "Enable network policy latency measurement") + cmd.MarkFlagRequired("iterations") + return cmd +} diff --git a/networkpolicy.go b/networkpolicy.go index 8c72d164..199af840 100644 --- a/networkpolicy.go +++ b/networkpolicy.go @@ -24,7 +24,7 @@ import ( ) // NewNetworkPolicy holds network-policy workload -func NewNetworkPolicy(wh *workloads.WorkloadHelper, variant string) *cobra.Command { +func NewNetworkPolicyLegacy(wh *workloads.WorkloadHelper, variant string) *cobra.Command { var iterations, churnPercent, churnCycles int var churn bool var churnDelay, churnDuration time.Duration