network policy workload #117

Open · wants to merge 2 commits into base: main
104 changes: 86 additions & 18 deletions README.md
@@ -178,30 +178,98 @@ For User-Defined Network (UDN) L3 segmentation testing. It creates two deploymen

## Network Policy workloads

Network policy scale testing tooling involves three components:
1. A template covering all network policy configuration options
2. Latency measurement through connection testing
3. Flow tracking through the convergence tracker

A network policy defines the rules for ingress and egress traffic between pods in local and remote namespaces. These remote namespace addresses can be configured using a combination of namespace and pod selectors, CIDRs, ports, and port ranges. Given that network policies offer a wide variety of configuration options, we developed a unified template that incorporates all these configuration parameters. Users can specify the desired count for each option.
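As a sketch, the count knobs consumed by the egress template in this PR look like the following (variable names are taken from `egress-np.yml`; the values are illustrative):

```yaml
namespaces: 100            # test namespaces (network-policy-perf-*)
netpols_per_namespace: 10  # network policies created per namespace
pods_per_namespace: 20     # pods running in each namespace
local_pods: 2              # pods matched by spec.podSelector
pod_selectors: 1           # peer selector blocks per policy
peer_namespaces: 2         # remote namespaces per selector block
peer_pods: 2               # remote pods per selector block
single_ports: 2            # individual ports allowed per rule
port_ranges: 2             # port ranges allowed per rule
cidr_rules: 1              # ipBlock rules per policy
```

A policy rendered from this template carries a spec like the one below.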

```yaml
spec:
  podSelector:
    matchExpressions:
    - key: num
      operator: In
      values:
      - "1"
      - "2"
  ingress:
  - from:
    - namespaceSelector:
        matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: In
          values:
          - network-policy-perf-13
          - network-policy-perf-14
      podSelector:
        matchExpressions:
        - key: num
          operator: In
          values:
          - "1"
          - "2"
    ports:
    - port: 8080
      protocol: TCP
```

### Scale Testing and Unique ACL Flows
In our scale tests, we aim to create between 10 and 100 network policies within a single namespace. The primary focus is on avoiding duplicate configuration options, which ensures that each network policy generates unique Access Control List (ACL) flows. To achieve this, we designed our templating approach around the following considerations:

**Round-Robin Assignment:** We use a round-robin strategy to distribute
1. remote namespaces among ingress and egress rules across kube-burner job iterations
2. remote namespaces among ingress and egress rules within the same kube-burner job iteration

This ensures that we don't overuse the same remote namespaces in a single iteration or across multiple iterations. For instance, if namespace-1 uses namespace-2 and namespace-3 as its remote namespaces, then namespace-2 will use namespace-4 and namespace-5 as its remote namespaces in the next iteration. The index arithmetic behind this is shown below.
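For reference, this is the peer-namespace index arithmetic from the egress template added by this PR (`cmd/config/network-policy/egress-np.yml`); it derives the starting remote namespace from the kube-burner iteration and replica counters:

```yaml
{{- $peerNsIdx := add (mul $.Iteration .pod_selectors .netpols_per_namespace .peer_namespaces) (mul (sub $.Replica 1) .pod_selectors .peer_namespaces) 1}}
```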

**Unique Namespace and Pod Combinations:** To avoid redundant flows, the templating system generates unique combinations of remote namespaces and pods for each network policy. Initially, we iterate through the list of remote namespaces, and once all remote namespaces are exhausted, we move on to iterate through the remote pods. This method ensures that every network policy within a namespace is assigned a distinct combination of remote namespaces and remote pods, avoiding duplicate pairs.

**Templating Logic**
Our templating logic is sketched below in Go-style pseudocode:
```go
// Iterate over the list of namespaces to configure network policies.
for _, namespace := range namespaces {

	// Each network policy uses a combination of a remote namespace and a
	// remote pod to allow traffic.
	for _, networkPolicy := range networkPolicies {

		/*
			Iterate through the list of remote pods. Once all remote namespaces
			are exhausted, continue advancing through the remote pods to ensure
			unique namespace/pod combinations.
		*/
		for i, remotePod := range remotePods {
			// Stop when we reach the maximum number of remote pods allowed.
			if i == numRemotePods {
				break
			}

			// Iterate through the list of remote namespaces to pair with the remote pod.
			for idx, remoteNamespace := range remoteNamespaces {
				// Stop once the allowed number of remote namespaces is exhausted.
				if idx == numRemoteNamespaces {
					break
				}

				// Combine the remote namespace and pod into a unique pair for the
				// ACL configuration of this policy in this namespace.
				pair := fmt.Sprintf("%s/%s: allow %s:%s", namespace, networkPolicy, remoteNamespace, remotePod)
				_ = pair
			}
		}
	}
}
```

**CIDRs and Port Ranges**
We apply the same round-robin and unique-combination logic to CIDRs and port ranges, ensuring that these options are not reused by network policies within the same namespace; a rendered example follows.
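For illustration, one rendered CIDR rule from the egress template might look like this (the subnet is a made-up stand-in for what `GetSubnet24` produces):

```yaml
- to:
  - ipBlock:
      cidr: 10.0.1.0/24   # illustrative; generated by GetSubnet24 in the real template
  ports:
  - protocol: TCP
    port: 1001
  - protocol: TCP
    port: 5000
    endPort: 5005
```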

**Connection Testing Support**
kube-burner measures network policy latency through connection testing. Currently, all pods are configured to listen on port 8080, so client pods send their requests to port 8080 during testing; a sketch of such a probe is shown below.
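A minimal sketch of what one probe could look like (the namespace, pod name, and address are hypothetical; the real client logic is built into kube-burner):

```console
$ kubectl exec -n network-policy-perf-13 curl-client -- \
    curl -s -o /dev/null -w '%{http_code}\n' http://10.128.2.15:8080
200
```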

### Convergence tracker
The [convergence tracker](https://github.com/npinaeva/k8s-netpol-scale/tree/main/kube-burner-workload/openshift/openflow-tracker) is integrated into the network policy workload. It creates a pod on each worker node that monitors OVS flows, measures when the flows stabilize, and reports the stabilization time as a metric.
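Conceptually, the tracker polls each node's OVS flow count and declares convergence once the count holds steady for `CONVERGENCE_PERIOD` seconds; an illustrative poll (the counts are made up):

```console
$ ovs-ofctl dump-aggregate br-int
NXST_AGGREGATE reply (xid=0x4): packet_count=104 byte_count=8832 flow_count=52413
```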

## EgressIP workloads

92 changes: 92 additions & 0 deletions cmd/config/network-policy/convergence_tracker.yml
@@ -0,0 +1,92 @@
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: convergence-tracker-{{.Replica}}
spec:
  selector:
    matchLabels:
      app: convergence-tracker
  template:
    metadata:
      labels:
        name: convergence-tracker
        app: convergence-tracker
    spec:
      serviceAccountName: convergence-tracker
      containers:
      - image: quay.io/cloud-bulldozer/convergencetracker:latest
        name: tracker
        securityContext:
          privileged: true
        command: [ "/bin/bash", "-c", "python openflow-tracker.py"]
        resources:
          requests:
            memory: "25Mi"
            cpu: "25m"
        volumeMounts:
        - name: openvswitch
          mountPath: /var/run/openvswitch
        - name: host-var-log-ovs
          mountPath: /var/log/openvswitch
        - name: ovn
          mountPath: /var/run/ovn
        - name: ovn-ic
          mountPath: /var/run/ovn-ic
        - name: pod-logs
          mountPath: /var/log/pods
        env:
        - name: CONVERGENCE_PERIOD
          value: "{{.convergence_period}}"
        - name: CONVERGENCE_TIMEOUT
          value: "{{.convergence_timeout}}"
        - name: POLL_TIMEOUT
          value: "5"
        - name: ES_SERVER
          value: {{.es_server}}
        - name: ES_INDEX_NETPOL
          value: {{.es_index}}
        - name: UUID
          value: {{.UUID}}
        - name: METADATA
          value: "{{.metadata}}"
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        imagePullPolicy: IfNotPresent
      volumes:
      - name: openvswitch
        hostPath:
          path: /var/run/openvswitch
      - name: ovn
        hostPath:
          path: /var/run/ovn/
      - name: ovn-ic
        hostPath:
          path: /var/run/ovn-ic/
      - name: ovn-kubernetes
        hostPath:
          path: /var/run/ovn-kubernetes
      - name: host-var-log-ovs
        hostPath:
          path: /var/log/openvswitch
      - name: pod-logs
        hostPath:
          path: /var/log/pods
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/worker
                operator: Exists
              - key: node-role.kubernetes.io/infra
                operator: DoesNotExist
              - key: node-role.kubernetes.io/workload
                operator: DoesNotExist
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
      restartPolicy: Always

95 changes: 95 additions & 0 deletions cmd/config/network-policy/egress-np.yml
@@ -0,0 +1,95 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-{{.Iteration}}-{{.Replica}}
spec:
{{- $lpList := list }}
{{- range $lp, $e := until $.local_pods }}
  {{- $nextPod := add 1 $lp }}
  {{- $lps := (toString $nextPod)}}
  {{- $lpList = append $lpList $lps }}
{{- end }}
{{- $lpNames := toJson $lpList }}
  podSelector:
    matchExpressions:
    - key: num
      operator: In
      values: {{$lpNames}}
  egress:
{{- $peerPodIdx := mul $.Replica .pod_selectors .peer_namespaces}}
{{- $peerPodIdx = div (sub $peerPodIdx 1) $.namespaces}}
{{- $peerPodIdx = mul $peerPodIdx .peer_pods}}
{{- $peerPodList := list }}
{{- range $lp, $e := until $.peer_pods }}
  {{- $nextPod := add 1 $lp $peerPodIdx }}
  {{- if gt $nextPod $.pods_per_namespace }}
    {{- $nextPod = add (mod $nextPod $.pods_per_namespace) 1 }}
  {{- end }}
  {{- $lps := (toString $nextPod)}}
  {{- $peerPodList = append $peerPodList $lps }}
{{- end }}
{{- $peerPodNames := toJson $peerPodList }}
{{- $peerNsIdx := add (mul $.Iteration .pod_selectors .netpols_per_namespace .peer_namespaces) (mul (sub $.Replica 1) .pod_selectors .peer_namespaces) 1}}
{{- range $ps, $e := until $.pod_selectors }}
  {{- $nsStart := add $peerNsIdx (mul $ps $.peer_namespaces) }}
  {{- $nsList := list }}
  {{- range $i, $v := until $.peer_namespaces }}
    {{- $nextNs := add $nsStart $i }}
    {{- if ge $nextNs $.namespaces }}
      {{- $nextNs = mod $nextNs $.namespaces }}
    {{- end }}
    {{- $next_namespace := print "network-policy-perf-" $nextNs }}
    {{- $nsList = append $nsList $next_namespace }}
  {{- end }}
  {{- $nsNames := toJson $nsList }}
  - to:
    - namespaceSelector:
        matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: In
          values: {{$nsNames}}
      podSelector:
        matchExpressions:
        - key: num
          operator: In
          values: {{$peerPodNames}}
    ports:
    {{- $single_port := 8079 }}
    {{- range $i, $e := until $.single_ports }}
      {{- $single_port = add $single_port 1 }}
    - protocol: TCP
      port: {{$single_port}}
    {{- end }}
    {{- $rangeStart := 5000 }}
    {{- range $i, $e := until $.port_ranges }}
      {{- $rangeEnd := add $rangeStart 5 }}
    - protocol: TCP
      port: {{$rangeStart}}
      endPort: {{$rangeEnd}}
    {{ $rangeStart = add $rangeStart 10}}
    {{- end }}
{{- end }}
{{- $subnetStartIdx := add (mul $.Replica $.cidr_rules) 1 }}
{{- range $i, $e := until .cidr_rules }}
  {{- $subnetIdx := add $subnetStartIdx $i }}
  - to:
    - ipBlock:
        cidr: {{GetSubnet24 (int $subnetIdx) }}
    ports:
    {{- $single_port := 1000 }}
    {{- range $i, $e := until $.single_ports }}
      {{- $single_port = add $single_port 1 }}
    - protocol: TCP
      port: {{$single_port}}
    {{- end }}
    {{- $rangeStart := 5000 }}
    {{- range $i, $e := until $.port_ranges }}
      {{- $rangeEnd := add $rangeStart 5 }}
    - protocol: TCP
      port: {{$rangeStart}}
      endPort: {{$rangeEnd}}
    {{ $rangeStart = add $rangeStart 10}}
    {{- end }}
{{- end }}
  policyTypes:
  - Egress