From 43d49b7bda8225e2bce7c6b24b529dc20a3aff0e Mon Sep 17 00:00:00 2001 From: Chris Breu Date: Thu, 31 Oct 2024 08:40:02 -0500 Subject: [PATCH 1/7] docs: update contrib to allow one review (#518) --- CONTRIBUTING.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 33d9617a..22c4839a 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -35,7 +35,7 @@ you see a problem, feel free to fix it. build. 2. Make sure you haven't added any extraneous files to the repository (secrets, .DS_Store, etc.) and double-check .gitignore if you have a new type of change. -4. Update the README.md / Wiki with details of changes to the interface, this includes new environment +3. Update the README.md / Wiki with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters. -5. You may merge the Pull Request in once you have the sign-off of two other developers, or if you - do not have permission to do that, you may request the second reviewer to merge it for you. +4. You may merge the Pull Request in once you have the sign-off of one other developer, or if you + do not have permission to do that, you may request the reviewer to merge it for you. From ccf96dcab49f43854d940d4c46b2d4f6b6c4de48 Mon Sep 17 00:00:00 2001 From: "phillip.toohill" Date: Thu, 31 Oct 2024 13:57:28 -0500 Subject: [PATCH 2/7] Fix: Updating openstack metrics polling interval and timeout (#524) The openstack metrics exporter is a heavy exporter by default. Any additional strain on the system causes timeouts to occur. This fix should eliminate the timeout failures. 
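For reviewers unfamiliar with Prometheus scrape settings: `scrapeTimeout` must not exceed `interval`, so the new pair is valid (90s fits well inside 5m), and the longer interval reduces how often the heavy exporter is polled. A minimal sketch of the resulting ServiceMonitor fragment, assuming a typical prometheus-operator shape (the real object is rendered by the exporter's Helm chart, and these names are illustrative):

``` yaml
# Hypothetical rendered fragment; metadata names are not taken from the chart.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openstack-metrics-exporter
  namespace: openstack
spec:
  endpoints:
    - port: metrics
      interval: 5m        # poll the heavy exporter less frequently
      scrapeTimeout: 90s  # generous timeout, still well under the interval
```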
--- .../openstack-metrics-exporter-helm-overrides.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base-helm-configs/monitoring/openstack-metrics-exporter/openstack-metrics-exporter-helm-overrides.yaml b/base-helm-configs/monitoring/openstack-metrics-exporter/openstack-metrics-exporter-helm-overrides.yaml index dad5da0c..f55a44ba 100644 --- a/base-helm-configs/monitoring/openstack-metrics-exporter/openstack-metrics-exporter-helm-overrides.yaml +++ b/base-helm-configs/monitoring/openstack-metrics-exporter/openstack-metrics-exporter-helm-overrides.yaml @@ -12,8 +12,8 @@ image: pullPolicy: Always serviceMonitor: - interval: 3m - scrapeTimeout: 30s + interval: 5m + scrapeTimeout: 90s nodeSelector: openstack-control-plane: enabled From 00ab5194823d00d90dca285f52000156551ebf18 Mon Sep 17 00:00:00 2001 From: Sowmya Nethi Date: Fri, 1 Nov 2024 02:00:16 +0530 Subject: [PATCH 3/7] Adjust layout to correctly position Application Credentials tab in User Center page (#523) * Add username to RabbitMQ secrets for Magnum and Barbican * Adjust layout to correctly position Application Credentials tab --- base-kustomize/skyline/base/deployment-apiserver.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/base-kustomize/skyline/base/deployment-apiserver.yaml b/base-kustomize/skyline/base/deployment-apiserver.yaml index 76d6116c..a94e0312 100644 --- a/base-kustomize/skyline/base/deployment-apiserver.yaml +++ b/base-kustomize/skyline/base/deployment-apiserver.yaml @@ -317,7 +317,7 @@ spec: key: prometheus_endpoint optional: true - name: skyline-apiserver-db-migrate - image: "ghcr.io/rackerlabs/skyline-rxt:master-ubuntu_jammy-1729251422" + image: "ghcr.io/rackerlabs/skyline-rxt:master-ubuntu_jammy-1730100728" imagePullPolicy: IfNotPresent resources: requests: @@ -340,7 +340,7 @@ spec: readOnly: true containers: - name: skyline-apiserver - image: "ghcr.io/rackerlabs/skyline-rxt:master-ubuntu_jammy-1729251422" + image: 
"ghcr.io/rackerlabs/skyline-rxt:master-ubuntu_jammy-1730100728" imagePullPolicy: IfNotPresent resources: limits: From 68aadc090826d03792d98b919767eb2494a54c70 Mon Sep 17 00:00:00 2001 From: Brian Abshier <68286817+brianabshier@users.noreply.github.com> Date: Thu, 31 Oct 2024 17:08:56 -0500 Subject: [PATCH 4/7] docs: updated logo and fixed bad img refs (#526) * Update infrastructure-design.md Removed duplicate lines referencing wrong image filenames. Corrected image filenames where appropriate to avoid 404s * Add files via upload Uploaded 2 new Logos * Update index.md Altered Logo Code * Update cloud-onboarding-welcome.md Updated Logo * Update deployment-guide-welcome.md Updated Logo * fix: apply suggestions from code review Confirmed these look fine in my browser after rendering with mkdocs. Nice! Co-authored-by: Kevin Carter * fix: remove trailing whitespace from page --------- Co-authored-by: Luke Repko Co-authored-by: Kevin Carter --- docs/assets/images/ospc_flex_logo_red.svg | 43 +++++++++++++++++++++ docs/assets/images/ospc_flex_logo_white.svg | 43 +++++++++++++++++++++ docs/cloud-onboarding-welcome.md | 2 +- docs/deployment-guide-welcome.md | 2 +- docs/index.md | 2 +- docs/infrastructure-design.md | 25 ++++++------ 6 files changed, 100 insertions(+), 17 deletions(-) create mode 100644 docs/assets/images/ospc_flex_logo_red.svg create mode 100644 docs/assets/images/ospc_flex_logo_white.svg diff --git a/docs/assets/images/ospc_flex_logo_red.svg b/docs/assets/images/ospc_flex_logo_red.svg new file mode 100644 index 00000000..d2770bc4 --- /dev/null +++ b/docs/assets/images/ospc_flex_logo_red.svg @@ -0,0 +1,43 @@ + + + + + + diff --git a/docs/assets/images/ospc_flex_logo_white.svg b/docs/assets/images/ospc_flex_logo_white.svg new file mode 100644 index 00000000..314ae2c1 --- /dev/null +++ b/docs/assets/images/ospc_flex_logo_white.svg @@ -0,0 +1,43 @@ + + + + + + diff --git a/docs/cloud-onboarding-welcome.md b/docs/cloud-onboarding-welcome.md index 
3ed1e320..2d01a289 100644 --- a/docs/cloud-onboarding-welcome.md +++ b/docs/cloud-onboarding-welcome.md @@ -1,4 +1,4 @@ -![Genestack Logo](assets/images/genestack-cropped-small.png){ align=left : style="filter:drop-shadow(#3c3c3c 0.5rem 0.5rem 10px);" } +![Rackspace Cloud Software](assets/images/ospc_flex_logo_red.svg){ align=left : style="max-width:175px" } # Welcome to Cloud On Boarding diff --git a/docs/deployment-guide-welcome.md b/docs/deployment-guide-welcome.md index 7bf59326..f2da6b12 100644 --- a/docs/deployment-guide-welcome.md +++ b/docs/deployment-guide-welcome.md @@ -1,4 +1,4 @@ -![Genestack Logo](assets/images/genestack-cropped-small.png){ align=left : style="filter:drop-shadow(#3c3c3c 0.5rem 0.5rem 10px);" } +![Rackspace Cloud Software](assets/images/ospc_flex_logo_red.svg){ align=left : style="max-width:175px" } # What is Genestack? diff --git a/docs/index.md b/docs/index.md index 063333b3..3ed10a49 100644 --- a/docs/index.md +++ b/docs/index.md @@ -9,7 +9,7 @@ hide:
- :material-heart:{ .lg } __A Welcoming Community__ - ![Rackspace R](assets/images/r-Icon-RGB-Red.svg){ align=left : style="filter:drop-shadow(#3c3c3c 0.5rem 0.5rem 10px);max-width:125px" } + ![Rackspace Cloud Software](assets/images/ospc_flex_logo_red.svg){ align=left : style="max-width:125px" } Rackspace would like to once again welcome you to the cloud. If you're developing applications, wanting to contribute to OpenStack, or just looking for a better platform; you're in the right place. diff --git a/docs/infrastructure-design.md b/docs/infrastructure-design.md index 71bec9e3..20261ba2 100644 --- a/docs/infrastructure-design.md +++ b/docs/infrastructure-design.md @@ -1,8 +1,8 @@ -## Genestack Infrastructure Design Notes +## Genestack Infrastructure Design Notes -### Ironic for bare-metal provisioning +### Ironic for bare-metal provisioning -Our internal deployment team uses OpenStack bare metal provisioning, a.k.a **Ironic**, which provides bare metal machines instead of virtual machines, forked from the Nova baremetal driver. It is best thought of as a bare metal hypervisor API and a set of plugins which interact with the bare metal hypervisors. By default, it will use PXE and IPMI in order to provision and turn on/off machines, but Ironic also supports vendor-specific plugins which may implement additional functionality. +Our internal deployment team uses OpenStack bare metal provisioning, a.k.a **Ironic**, which provides bare metal machines instead of virtual machines, forked from the Nova baremetal driver. It is best thought of as a bare metal hypervisor API and a set of plugins which interact with the bare metal hypervisors. By default, it will use PXE and IPMI in order to provision and turn on/off machines, but Ironic also supports vendor-specific plugins which may implement additional functionality. 
After switch and firewall configuration, deployment nodes are created within the environment to host the required Ironic services: @@ -14,11 +14,11 @@ After switch and firewall configuration, deployment nodes are created with in th ### Ironic Diagram -![conceptual_architecture](./assets/images/conceptual_architecture.png) +![conceptual_architecture](./assets/images/ironic-design.png) #### Benefits of Ironic -​ With a standard API and lightweight footprint, Ironic can serve as a driver for a variety of bare metal infrastructure. Ironic allows users to manage bare metal infrastructure like they would virtual machines and provides ideal infrastructure to run container orchestration frameworks like Kubernetes to optimize performance. +​ With a standard API and lightweight footprint, Ironic can serve as a driver for a variety of bare metal infrastructure. Ironic allows users to manage bare metal infrastructure like they would virtual machines and provides ideal infrastructure to run container orchestration frameworks like Kubernetes to optimize performance. @@ -38,7 +38,7 @@ After switch and firewall configuration, deployment nodes are created with in th - **Leaf switches.** Servers and storage connect to leaf switches and consist of access switches that aggregate traffic from servers. They connect directly to the spine. ![leaf-spline](assets/images/leaf-spline.png) -[conceptual_architecture](./assets/images/conceptual_architecture.png) + #### Advantages of Leaf-Spline Architecture - **Redundancy.** Each leaf switch connects to every spine switch, which increases the amount of redundancy while also reducing potential bottlenecks. @@ -47,26 +47,24 @@ After switch and firewall configuration, deployment nodes are created with in th - **Scalability.** Additional spine switches can be added to help avoid oversubscription and increase scalability. 
- **Reduces spending.** A spine-leaf architecture increases the number of connections each switch can handle, so data centers require fewer devices to deploy. -![image-20241018150739704](./assets/images/spine-leaf.png.png) - -#### Network Design Details +#### Network Design Details ​ Rackspace utilizes Spline-leaf network architecture where server to server traffic (east-west) has higher importance than external connectivity of the deployed application. This is ideal for single or multi tenant deployments that process large workloads of internal data kept in **block** or **object** storage. ### Commodity Storage Solutions -​ Commodity storage hardware, sometimes known as off-the-shelf storage, is relatively inexpensive storage systems utilizing standard hard rives that are widely available and basically interchangeable with other drives of its type. These drives are housed in simple JBOD (just a bunch of disks) chassis or in smarter storage solutions such as Dell EMC or NetApp enclosures. +​ Commodity storage hardware, sometimes known as off-the-shelf storage, is relatively inexpensive storage systems utilizing standard hard drives that are widely available and basically interchangeable with other drives of its type. These drives are housed in simple JBOD (just a bunch of disks) chassis or in smarter storage solutions such as Dell EMC or NetApp enclosures. #### Cost effectiveness -​ A major advantage of using commodity storage is for data resilience and reduced storage costs. 
Because of their ability to spread data across spans of inexpensive disks, data loss risk is greatly reduced when a drive inevitably fails. Data is automatically rebalanced to healthy drives before a degraded drive is removed from the cluster to be replaced as time permits by support staff. #### Genestack Storage Integrations -​ Genestack easily integrates commodity storage into its cloud solutions by leveraging it for Ceph (block/object storage) and Swift (object storage) storage targets. +​ Genestack easily integrates commodity storage into its cloud solutions by leveraging it for Ceph (block/object storage) and Swift (object storage) storage targets. -​ **Ceph** is an open source software-defined storage solution designed to address the block, file and object storage needs of modern enterprises. Its highly scalable architecture sees it being adopted as the new norm for high-growth block storage, object stores, and data lakes. +​ **Ceph** is an open source software-defined storage solution designed to address the block, file and object storage needs of modern enterprises. Its highly scalable architecture sees it being adopted as the new norm for high-growth block storage, object stores, and data lakes. - **Scalability**: Ceph can scale to support hundreds of petabytes of data and tens of billions of objects. - **Self-managed**: Ceph is designed to be self-healing and self-managing, so it can handle failures without interruption. @@ -79,4 +77,3 @@ After switch and firewall configuration, deployment nodes are created with in th - **Cost-effectiveness:** Swift can use inexpensive **commodity hard drives** and servers instead of more expensive equipment. - **Scalability:** Swift uses a distributed architecture with no central point of control, which allows for greater scalability, redundancy, and permanence. - **API-accessible:** Swift provides an API-accessible storage platform that can be integrated directly into applications. 
- From 40a146b851d09134900059ebd1564266e3356ee1 Mon Sep 17 00:00:00 2001 From: Kevin Carter Date: Fri, 1 Nov 2024 23:22:12 -0500 Subject: [PATCH 5/7] fix: update docs to be sure we have a functional build (#529) The docs were changed to point to a playbook that is broken, this change updates the docs to ensure readers have a functional setup. The docs still have the broken references but they've been moved to experimental. Signed-off-by: Kevin Carter --- docs/k8s-kubespray.md | 40 +++++++++++++++++++++++++++++----------- docs/k8s-labels.md | 12 ++++++++---- 2 files changed, 37 insertions(+), 15 deletions(-) diff --git a/docs/k8s-kubespray.md b/docs/k8s-kubespray.md index b1882308..e4177f7e 100644 --- a/docs/k8s-kubespray.md +++ b/docs/k8s-kubespray.md @@ -93,22 +93,40 @@ source /opt/genestack/scripts/genestack.rc ansible-playbook host-setup.yml ``` -The `private-key` option can be used to instruct ansible to use a custom SSH key for the SSH connection - -``` shell - --private-key ${HOME}/.ssh/openstack-keypair.key -``` +!!! note + The RC file sets a number of environment variables that help ansible to run in a more easy to understand way. ### Run the cluster deployment -This is used to deploy kubespray against infra on an OpenStack cloud. If you're deploying on baremetal you will need to setup an inventory that meets your environmental needs. +=== "Kubespray Direct _(Recommended)_" -The playbook `setup-kubernetes.yml` is used to invoke the selected provider installation and label and configure a kube config: + This is used to deploy kubespray against infra on an OpenStack cloud. If you're deploying on baremetal you will need to setup an inventory that meets your environmental needs. + Change the directory to the kubespray submodule. -``` shell -source /opt/genestack/scripts/genestack.rc -ansible-playbook setup-kubernetes.yml -``` + The cluster deployment playbook can also have overrides defined to augment how the playbook is executed. 
+ Confirm openstack-flex-inventory.yaml matches what is in /etc/genestack/inventory. If it does not match, update the command to match the file names. + + ``` shell + cd /opt/genestack/submodules/kubespray + ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.ini \ + --private-key /home/ubuntu/.ssh/openstack-flex-keypair.key \ + --user ubuntu \ + --become \ + cluster.yml + ``` + +=== "Setup-Kubernetes Playbook _(Experimental)_" + + The `private-key` option can be used to instruct ansible to use a custom SSH key for the SSH connection + + ``` shell + --private-key ${HOME}/.ssh/openstack-keypair.key + ``` + + ``` shell + source /opt/genestack/scripts/genestack.rc + ansible-playbook setup-kubernetes.yml + ``` !!! tip diff --git a/docs/k8s-labels.md b/docs/k8s-labels.md index 1e173374..de23dda8 100644 --- a/docs/k8s-labels.md +++ b/docs/k8s-labels.md @@ -1,12 +1,16 @@ # Label all of the nodes in the environment -The labeling of nodes is automated as part of the `setup-kubernetes.yml` playbook based on ansible groups. -For understanding the use of k8s labels is defined as following, automation and documented deployment -steps build ontop of the labels referenced here: +To use the K8S environment for OpenStack, all of the nodes MUST be labeled. The following labels will be used within your environment. +Make sure you label things accordingly. !!! note - The following example assumes the node names can be used to identify their purpose within our environment. That may not be the case in reality. Adapt the following commands to meet your needs. + The labeling of nodes is automated as part of the `setup-kubernetes.yml` playbook based on ansible groups. + The use of k8s labels is defined as follows; automation and documented deployment + steps build on top of the labels referenced here: + + The following example assumes the node names can be used to identify their purpose within our environment. 
+ That may not be the case in reality. Adapt the following commands to meet your needs. ## Genestack Labels From 415dda86fec8fe2898ad9c86b1bf2808bb7295a7 Mon Sep 17 00:00:00 2001 From: Kevin Carter Date: Fri, 1 Nov 2024 23:23:04 -0500 Subject: [PATCH 6/7] fix: update default_availability_zones to a value (#528) This change updates default_availability_zones to ensure it has a default value. While this option is supposed to assume the value of the bound az, openstack helm returns a `` value which makes the services very upset. By setting a value we make the service happy, and we all want happy services. Signed-off-by: Kevin Carter --- base-helm-configs/cinder/cinder-helm-overrides.yaml | 2 +- base-helm-configs/neutron/neutron-helm-overrides.yaml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/base-helm-configs/cinder/cinder-helm-overrides.yaml b/base-helm-configs/cinder/cinder-helm-overrides.yaml index bb365102..b64959d7 100644 --- a/base-helm-configs/cinder/cinder-helm-overrides.yaml +++ b/base-helm-configs/cinder/cinder-helm-overrides.yaml @@ -759,7 +759,7 @@ conf: cinder: DEFAULT: storage_availability_zone: az1 - default_availability_zone: null + default_availability_zone: az1 allow_availability_zone_fallback: true scheduler_default_filters: AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter volume_usage_audit_period: hour diff --git a/base-helm-configs/neutron/neutron-helm-overrides.yaml b/base-helm-configs/neutron/neutron-helm-overrides.yaml index 3fc63d19..f0225811 100644 --- a/base-helm-configs/neutron/neutron-helm-overrides.yaml +++ b/base-helm-configs/neutron/neutron-helm-overrides.yaml @@ -1754,7 +1754,7 @@ conf: # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. 
bind_port: null - default_availability_zones: null + default_availability_zones: az1 network_scheduler_driver: neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler router_scheduler_driver: neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler dhcp_load_type: networks From f1d9b84eb3be96f6316242a9201d160647fb05d7 Mon Sep 17 00:00:00 2001 From: Kevin Carter Date: Sat, 2 Nov 2024 19:45:14 -0500 Subject: [PATCH 7/7] feat: netapp-volume-worker support (#527) This creates a new Kustomize deployment for cinder with a netapp volume backend. The worker uses an existing cinder deployment managed by Helm, kustomizes the configuration values, and produces a netapp-specific volume container that can run with multiple netapp backends. To use this container, the following secret must be created: ``` shell kubectl --namespace openstack \ create secret generic cinder-netapp \ --type Opaque \ --from-literal=BACKENDS="netapp-backend-1,root,10.0.0.1,80,vserver1,qos-something,True,True,True,True,enabled" ``` Each backend has 11 comma-separated values which correspond to the needed configuration. Multiple backends are supported, separated by semicolons. 
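To make the BACKENDS secret format concrete, here is a minimal, hypothetical Python sketch of the same parsing that the bundled `generate-backends.py` performs. Field positions follow the option table in `docs/openstack-cinder-netapp.md`; the sample backend strings below are placeholders, not real credentials or addresses:

``` python
# Hypothetical standalone parser for the cinder-netapp BACKENDS secret value.
# Semicolons separate backends; each backend carries 11 comma-separated fields.
FIELD_NAMES = [
    "backend_name", "netapp_login", "netapp_password",
    "netapp_server_hostname", "netapp_server_port", "netapp_vserver",
    "netapp:qos_policy_group", "netapp_dedup", "netapp_compression",
    "netapp_thick_provisioned", "netapp_lun_space_reservation",
]

def parse_backends(raw: str) -> dict:
    """Map each backend name to a dict of its configuration fields."""
    backends = {}
    for entry in raw.split(";"):
        fields = entry.split(",")
        if len(fields) != 11:
            raise ValueError(f"expected 11 fields, got {len(fields)}: {entry!r}")
        record = dict(zip(FIELD_NAMES, fields))
        backends[record.pop("backend_name")] = record
    return backends

# Two illustrative backends (values are placeholders only).
raw = ("backend-a,admin,secret,192.0.2.10,80,vs1,qos-gold,True,True,True,enabled;"
       "backend-b,admin,secret,192.0.2.11,80,vs2,qos-silver,False,False,False,enabled")
parsed = parse_backends(raw)
```

A malformed entry (wrong field count) raises immediately, which mirrors the `assert len(backend) == 11` guard in the shipped script.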
Signed-off-by: Kevin Carter --- .../kustomize-cinder-volume-netapp.yaml | 28 ++ .../cinder/netapp/configmap-etc.yaml | 53 +++ .../cinder/netapp/deploy-volume-netapp.yaml | 346 ++++++++++++++++++ .../netapp/hpa-cinder-volume-netapp.yaml | 19 + .../cinder/netapp/kustomization.yaml | 18 + docs/openstack-cinder-netapp.md | 65 ++++ mkdocs.yml | 1 + 7 files changed, 530 insertions(+) create mode 100644 .github/workflows/kustomize-cinder-volume-netapp.yaml create mode 100644 base-kustomize/cinder/netapp/configmap-etc.yaml create mode 100644 base-kustomize/cinder/netapp/deploy-volume-netapp.yaml create mode 100644 base-kustomize/cinder/netapp/hpa-cinder-volume-netapp.yaml create mode 100644 base-kustomize/cinder/netapp/kustomization.yaml create mode 100644 docs/openstack-cinder-netapp.md diff --git a/.github/workflows/kustomize-cinder-volume-netapp.yaml b/.github/workflows/kustomize-cinder-volume-netapp.yaml new file mode 100644 index 00000000..6cb8069d --- /dev/null +++ b/.github/workflows/kustomize-cinder-volume-netapp.yaml @@ -0,0 +1,28 @@ +name: Kustomize GitHub Actions for cinder-volume-netapp + +on: + pull_request: + paths: + - base-kustomize/cinder/netapp/** + - .github/workflows/kustomize-cinder-volume-netapp.yaml +jobs: + kustomize: + name: Kustomize + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + - name: Kustomize Install + working-directory: /usr/local/bin/ + run: | + if [ ! 
-f /usr/local/bin/kustomize ]; then + curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | sudo bash + fi + - name: Run Kustomize Build + run: | + kustomize build base-kustomize/cinder/netapp/ > /tmp/rendered.yaml + - name: Return Kustomize Build + uses: actions/upload-artifact@v4 + with: + name: kustomize-cinder-volume-netapp-artifact + path: /tmp/rendered.yaml diff --git a/base-kustomize/cinder/netapp/configmap-etc.yaml b/base-kustomize/cinder/netapp/configmap-etc.yaml new file mode 100644 index 00000000..ffc52713 --- /dev/null +++ b/base-kustomize/cinder/netapp/configmap-etc.yaml @@ -0,0 +1,53 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: cinder-volume-netapp-config + namespace: openstack +data: + cinder-volume.sh: | + #!/bin/bash + set -ex + exec cinder-volume --config-file /etc/cinder/cinder.conf \ + --config-file /tmp/pod-shared/backends.conf \ + --config-file /tmp/pod-shared/internal_tenant.conf \ + --config-file /tmp/pod-shared/cinder-netapp.conf + generate-backends.py: | + #!/usr/bin/env python3 + import configparser + import os + netapp_backends = os.environ.get('NETAPP_BACKENDS') + config = configparser.ConfigParser() + for backend in netapp_backends.split(';'): + backend = backend.split(',') + assert len(backend) == 11 + config.add_section(backend[0]) + config.set(backend[0], 'netapp_login', backend[1]) + config.set(backend[0], 'netapp_password', backend[2]) + config.set(backend[0], 'netapp_server_hostname', backend[3]) + config.set(backend[0], 'netapp_server_port', backend[4]) + config.set(backend[0], 'netapp_storage_family', 'ontap_cluster') + config.set(backend[0], 'netapp_storage_protocol', 'iscsi') + config.set(backend[0], 'netapp_transport_type', 'http') + config.set(backend[0], 'netapp_vserver', backend[5]) + config.set(backend[0], 'netapp:qos_policy_group', backend[6]) + config.set(backend[0], 'netapp_dedup', backend[7]) + config.set(backend[0], 'netapp_compression', backend[8]) + 
config.set(backend[0], 'netapp_thick_provisioned', backend[9]) + config.set(backend[0], 'netapp_lun_space_reservation', backend[10]) + config.set(backend[0], 'volume_driver', 'cinder.volume.drivers.netapp.common.NetAppDriver') + config.set(backend[0], 'volume_backend_name', backend[0]) + print(f'Added backend {backend[0]}') + with open('/tmp/pod-shared/backends.conf', 'w') as configfile: + config.write(configfile) + print('Generated backends.conf') + + config = configparser.ConfigParser() + backends = ','.join([i.split(',')[0] for i in netapp_backends.split(';')]) + config.set('DEFAULT', 'enabled_backends', backends) + config.set('DEFAULT', 'host', 'cinder-volume-netapp-worker') + with open('/tmp/pod-shared/cinder-netapp.conf', 'w') as configfile: + config.write(configfile) + print('Updated cinder.conf') + ssh_known_hosts: | + # Empty SSH host file managed by cinder-volume-netapp diff --git a/base-kustomize/cinder/netapp/deploy-volume-netapp.yaml b/base-kustomize/cinder/netapp/deploy-volume-netapp.yaml new file mode 100644 index 00000000..56151301 --- /dev/null +++ b/base-kustomize/cinder/netapp/deploy-volume-netapp.yaml @@ -0,0 +1,346 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: cinder-volume-netapp + namespace: openstack +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: cinder-openstack-cinder-volume-netapp + namespace: openstack +rules: + - apiGroups: + - "" + - extensions + - batch + - apps + verbs: + - get + - list + resources: + - services + - endpoints + - jobs + - pods + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: cinder-cinder-volume-netapp + namespace: openstack +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cinder-openstack-cinder-volume-netapp +subjects: + - kind: ServiceAccount + name: cinder-volume-netapp + namespace: openstack + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: cinder-volume-netapp + labels: + release_group: 
cinder + application: cinder + component: volume +spec: + replicas: 1 + selector: + matchLabels: + release_group: cinder + application: cinder + component: volume + revisionHistoryLimit: 3 + strategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + maxSurge: 3 + template: + metadata: + labels: + release_group: cinder + application: cinder + component: volume + spec: + serviceAccountName: cinder-volume-netapp + securityContext: + runAsUser: 42424 + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchExpressions: + - key: release_group + operator: In + values: + - cinder + - key: application + operator: In + values: + - cinder + - key: component + operator: In + values: + - volume + topologyKey: kubernetes.io/hostname + weight: 10 + nodeSelector: + openstack-control-plane: enabled + initContainers: + - name: init + image: image-kubernetes-entrypoint-init + imagePullPolicy: IfNotPresent + securityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + runAsUser: 65534 + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + - name: INTERFACE_NAME + value: eth0 + - name: PATH + value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ + - name: DEPENDENCY_SERVICE + value: "openstack:keystone-api,openstack:cinder-api" + - name: DEPENDENCY_JOBS + value: "cinder-db-sync,cinder-ks-user,cinder-ks-endpoints" + - name: DEPENDENCY_DAEMONSET + value: "" + - name: DEPENDENCY_CONTAINER + value: "" + - name: DEPENDENCY_POD_JSON + value: "" + - name: DEPENDENCY_CUSTOM_RESOURCE + value: "" + command: + - kubernetes-entrypoint + volumeMounts: [] + - name: init-cinder-conf + securityContext: + readOnlyRootFilesystem: true + runAsUser: 0 + image: image-heat-conf-init + imagePullPolicy: IfNotPresent + command: + - 
/tmp/retrieve-internal-tenant.sh + volumeMounts: + - name: pod-tmp + mountPath: /tmp + - name: cinder-bin + mountPath: /tmp/retrieve-internal-tenant.sh + subPath: retrieve-internal-tenant.sh + readOnly: true + - name: pod-shared + mountPath: /tmp/pod-shared + env: + - name: OS_IDENTITY_API_VERSION + value: "3" + - name: OS_AUTH_URL + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_AUTH_URL + - name: OS_REGION_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_REGION_NAME + - name: OS_INTERFACE + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_INTERFACE + - name: OS_ENDPOINT_TYPE + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_INTERFACE + - name: OS_PROJECT_DOMAIN_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_PROJECT_DOMAIN_NAME + - name: OS_PROJECT_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_PROJECT_NAME + - name: OS_USER_DOMAIN_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_USER_DOMAIN_NAME + - name: OS_USERNAME + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_USERNAME + - name: OS_PASSWORD + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_PASSWORD + - name: OS_DEFAULT_DOMAIN + valueFrom: + secretKeyRef: + name: cinder-keystone-admin + key: OS_DEFAULT_DOMAIN + - name: INTERNAL_PROJECT_NAME + value: "internal_cinder" + - name: INTERNAL_USER_NAME + value: "internal_cinder" + - name: SERVICE_OS_REGION_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_REGION_NAME + - name: SERVICE_OS_PROJECT_DOMAIN_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_PROJECT_DOMAIN_NAME + - name: SERVICE_OS_PROJECT_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_PROJECT_NAME + - name: SERVICE_OS_USER_DOMAIN_NAME + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_USER_DOMAIN_NAME + - name: 
SERVICE_OS_USERNAME + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_USERNAME + - name: SERVICE_OS_PASSWORD + valueFrom: + secretKeyRef: + name: cinder-keystone-user + key: OS_PASSWORD + - name: cinder-volume-netapp-init + image: image-cinder-volume-netapp-init + imagePullPolicy: IfNotPresent + securityContext: + readOnlyRootFilesystem: true + command: + - /var/lib/openstack/bin/python3 + - /tmp/generate-backends.py + env: + - name: NETAPP_BACKENDS + valueFrom: + secretKeyRef: + name: cinder-netapp + key: BACKENDS + terminationMessagePath: /var/log/termination-log + resources: + limits: + memory: "1Gi" + requests: + memory: "256Mi" + cpu: "250m" + volumeMounts: + - name: cinder-netapp-data + mountPath: /tmp/generate-backends.py + subPath: generate-backends.py + readOnly: true + - name: pod-shared + mountPath: /tmp/pod-shared + containers: + - name: cinder-volume-netapp + image: image-cinder-volume-netapp + imagePullPolicy: IfNotPresent + securityContext: + capabilities: + add: + - SYS_ADMIN + readOnlyRootFilesystem: true + command: + - /tmp/cinder-volume.sh + env: [] + terminationMessagePath: /var/log/termination-log + resources: + limits: + memory: "1Gi" + requests: + memory: "256Mi" + cpu: "250m" + volumeMounts: + - name: pod-tmp + mountPath: /tmp + - name: cinder-netapp-data + mountPath: /tmp/cinder-volume.sh + subPath: cinder-volume.sh + readOnly: true + - name: pod-shared + mountPath: /tmp/pod-shared + - name: cinder-conversion + mountPath: /var/lib/cinder/conversion + - name: cinder-etc + mountPath: /etc/cinder/cinder.conf + subPath: cinder.conf + readOnly: true + - name: cinder-etc + mountPath: /etc/cinder/logging.conf + subPath: logging.conf + readOnly: true + - name: cinder-coordination + mountPath: /var/lib/cinder/coordination + - name: cinder-netapp-data + mountPath: /var/lib/cinder/ssh_known_hosts + subPath: ssh_known_hosts + - name: cinder-etc + mountPath: /etc/sudoers.d/kolla_cinder_sudoers + subPath: cinder_sudoers + readOnly: true 
+            - name: cinder-etc
+              mountPath: /etc/sudoers.d/kolla_cinder_volume_sudoers
+              subPath: cinder_sudoers
+              readOnly: true
+            - name: cinder-etc
+              mountPath: /etc/cinder/rootwrap.conf
+              subPath: rootwrap.conf
+              readOnly: true
+            - name: cinder-etc
+              mountPath: /etc/cinder/rootwrap.d/volume.filters
+              subPath: volume.filters
+              readOnly: true
+      volumes:
+        - name: pod-tmp
+          emptyDir: {}
+        - name: cinder-bin
+          configMap:
+            name: cinder-bin
+            defaultMode: 0555
+        - name: cinder-etc
+          secret:
+            secretName: cinder-etc
+            defaultMode: 0444
+        - name: pod-shared
+          emptyDir: {}
+        - name: cinder-conversion
+          emptyDir: {}
+        - name: cinder-coordination
+          emptyDir: {}
+        - name: cinder-netapp-data
+          configMap:
+            name: "cinder-volume-netapp-config"
+            defaultMode: 0555
diff --git a/base-kustomize/cinder/netapp/hpa-cinder-volume-netapp.yaml b/base-kustomize/cinder/netapp/hpa-cinder-volume-netapp.yaml
new file mode 100644
index 00000000..6dd3b5ee
--- /dev/null
+++ b/base-kustomize/cinder/netapp/hpa-cinder-volume-netapp.yaml
@@ -0,0 +1,19 @@
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: cinder-volume-netapp
+  namespace: openstack
+spec:
+  maxReplicas: 9
+  minReplicas: 3
+  metrics:
+    - resource:
+        name: cpu
+        target:
+          averageUtilization: 50
+          type: Utilization
+      type: Resource
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: cinder-volume-netapp
diff --git a/base-kustomize/cinder/netapp/kustomization.yaml b/base-kustomize/cinder/netapp/kustomization.yaml
new file mode 100644
index 00000000..f921fb7f
--- /dev/null
+++ b/base-kustomize/cinder/netapp/kustomization.yaml
@@ -0,0 +1,18 @@
+images:
+  - name: image-kubernetes-entrypoint-init
+    newName: quay.io/airshipit/kubernetes-entrypoint
+    newTag: v1.0.0
+  - name: image-heat-conf-init
+    newName: docker.io/openstackhelm/heat
+    newTag: 2024.1-ubuntu_jammy
+  - name: image-cinder-volume-netapp-init
+    newName: docker.io/openstackhelm/cinder
+    newTag: 2024.1-ubuntu_jammy
+  - name: image-cinder-volume-netapp
+    newName: docker.io/openstackhelm/cinder
+    newTag: 2024.1-ubuntu_jammy
+
+resources:
+  - configmap-etc.yaml
+  - deploy-volume-netapp.yaml
+  - hpa-cinder-volume-netapp.yaml
diff --git a/docs/openstack-cinder-netapp.md b/docs/openstack-cinder-netapp.md
new file mode 100644
index 00000000..e7ac3644
--- /dev/null
+++ b/docs/openstack-cinder-netapp.md
@@ -0,0 +1,65 @@
+# NetApp Volume Worker Configuration Documentation
+
+This document provides information on configuring NetApp backends for the isolated Cinder volume worker. Each backend is defined by a set of
+11 comma-separated options, and multiple backends can be specified by separating them with semicolons.
+
+## Backend Options
+
+Below is a table detailing each option, its position in the backend configuration, a description, and the expected data type.
+
+| Option Index | Option Name                    | Description                                                                   | Type    |
+|--------------|--------------------------------|-------------------------------------------------------------------------------|---------|
+| 0            | `backend_name`                 | The name of the backend configuration section. Used as `volume_backend_name`. | String  |
+| 1            | `netapp_login`                 | Username for authenticating with the NetApp storage system.                   | String  |
+| 2            | `netapp_password`              | Password for authenticating with the NetApp storage system.                   | String  |
+| 3            | `netapp_server_hostname`       | Hostname or IP address of the NetApp storage system.                          | String  |
+| 4            | `netapp_server_port`           | Port number to communicate with the NetApp storage system.                    | Integer |
+| 5            | `netapp_vserver`               | The name of the Vserver on the NetApp storage system.                         | String  |
+| 6            | `netapp:qos_policy_group`      | The name of the QoS policy group.                                             | String  |
+| 7            | `netapp_dedup`                 | Enable (`True`) or disable (`False`) deduplication.                           | Boolean |
+| 8            | `netapp_compression`           | Enable (`True`) or disable (`False`) compression.                             | Boolean |
+| 9            | `netapp_thick_provisioned`     | Use thick (`True`) or thin (`False`) provisioning.                            | Boolean |
+| 10           | `netapp_lun_space_reservation` | Enable (`enabled`) or disable (`disabled`) LUN space reservation.             | String  |
+
+### Detailed Option Descriptions
+
+- **`backend_name`**: A unique identifier for the backend configuration. This name is used internally by Cinder to distinguish between different backends.
+- **`netapp_login`**: The username credential required to authenticate with the NetApp storage system.
+- **`netapp_password`**: The password credential required for authentication. Ensure this is kept secure.
+- **`netapp_server_hostname`**: The address of the NetApp storage system. This can be either an IP address or a fully qualified domain name (FQDN).
+- **`netapp_server_port`**: The port number used for communication with the NetApp storage system. Common ports are `80` for HTTP and `443` for HTTPS.
+- **`netapp_vserver`**: Specifies the virtual storage server (Vserver) on the NetApp storage system that will serve the volumes.
+- **`netapp:qos_policy_group`**: The Quality of Service (QoS) policy group name that will be applied to volumes for this backend.
+- **`netapp_dedup`**: A boolean value to enable or disable deduplication on the storage volumes. Acceptable values are `True` or `False`.
+- **`netapp_compression`**: A boolean value to enable or disable compression on the storage volumes. Acceptable values are `True` or `False`.
+- **`netapp_thick_provisioned`**: Determines whether volumes are thick (`True`) or thin (`False`) provisioned.
+- **`netapp_lun_space_reservation`**: A string indicating whether to enable space reservation for LUNs. If `enabled`, space is reserved for the entire LUN size at creation time.
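The 11-option layout above can be exercised with a short parsing sketch. This is illustrative only — it does not reproduce the actual `generate-backends.py` shipped in the `cinder-volume-netapp-config` ConfigMap, and the `parse_backends` helper name is hypothetical:

```python
# Hypothetical sketch: split a BACKENDS string (semicolon-separated backends,
# 11 comma-separated options each) into per-backend dictionaries keyed by the
# option names from the table above.
OPTION_NAMES = [
    "backend_name", "netapp_login", "netapp_password",
    "netapp_server_hostname", "netapp_server_port", "netapp_vserver",
    "netapp:qos_policy_group", "netapp_dedup", "netapp_compression",
    "netapp_thick_provisioned", "netapp_lun_space_reservation",
]

def parse_backends(backends: str) -> list:
    """Return one dict per backend; raise on a malformed entry."""
    parsed = []
    for entry in backends.split(";"):
        options = [option.strip() for option in entry.split(",")]
        if len(options) != len(OPTION_NAMES):
            raise ValueError(
                f"expected {len(OPTION_NAMES)} options, got {len(options)}: {entry!r}"
            )
        parsed.append(dict(zip(OPTION_NAMES, options)))
    return parsed

backends = parse_backends(
    "backend1,user1,password1,host1,80,vserver1,qos1,True,True,False,enabled"
)
print(backends[0]["netapp_vserver"])  # -> vserver1
```

A malformed entry (too few or too many commas) fails fast here, which mirrors the kind of validation you would want before the volume worker starts with a bad backend definition.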
+
+## Example Opaque Secret Configuration
+
+Before deploying the NetApp volume worker, create the necessary Kubernetes secret with the `BACKENDS` environment variable:
+
+```shell
+kubectl --namespace openstack create secret generic cinder-netapp \
+        --type Opaque \
+        --from-literal=BACKENDS="backend1,user1,password1,host1,80,vserver1,qos1,True,True,False,enabled"
+```
+
+### `BACKENDS` Environment Variable Structure
+
+The `BACKENDS` environment variable is used to pass backend configurations to the NetApp volume worker. Each backend configuration consists of 11 options
+in a specific order.
+
+!!! Example "Replace the placeholder values with your actual backend configuration details"
+
+    ```shell
+    BACKENDS="backend1,user1,password1,host1,80,vserver1,qos1,True,True,False,disabled;backend2,user2,password2,host2,443,vserver2,qos2,False,True,True,enabled"
+    ```
+
+## Run the deployment
+
+With your configuration defined, run the deployment with a standard `kubectl apply` command.
+
+``` shell
+kubectl --namespace openstack apply -k /etc/genestack/kustomize/cinder/netapp
+```
diff --git a/mkdocs.yml b/mkdocs.yml
index a3e8adf7..5f97d72b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -187,6 +187,7 @@ nav:
       - Block Storage:
           - Cinder: openstack-cinder.md
           - LVM iSCSI: openstack-cinder-lvmisci.md
+          - NetApp: openstack-cinder-netapp.md
          - FIPS Cinder Encryption: openstack-cinder-fips-encryption.md
      - Compute Kit:
          - Compute Overview: openstack-compute-kit.md
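Conceptually, the init container's backend generator turns each parsed `BACKENDS` entry into a `cinder.conf` section for the volume worker. The sketch below is an assumption-laden approximation — the real generator script is not shown in this patch, and only a subset of the options is rendered — though `cinder.volume.drivers.netapp.common.NetAppDriver` is the standard Cinder NetApp unified driver path:

```python
import configparser
import io

# Hypothetical sketch: render one cinder.conf backend section from a parsed
# options dict (names taken from the backend options table in the docs above).
def render_backend_section(opts: dict) -> str:
    config = configparser.ConfigParser()
    section = opts["backend_name"]
    config[section] = {
        "volume_backend_name": opts["backend_name"],
        "volume_driver": "cinder.volume.drivers.netapp.common.NetAppDriver",
        "netapp_login": opts["netapp_login"],
        "netapp_password": opts["netapp_password"],
        "netapp_server_hostname": opts["netapp_server_hostname"],
        "netapp_server_port": opts["netapp_server_port"],
        "netapp_vserver": opts["netapp_vserver"],
        "netapp_lun_space_reservation": opts["netapp_lun_space_reservation"],
    }
    buf = io.StringIO()
    config.write(buf)  # INI-style output, e.g. "[backend1]\nnetapp_vserver = vserver1"
    return buf.getvalue()

print(render_backend_section({
    "backend_name": "backend1",
    "netapp_login": "user1",
    "netapp_password": "password1",
    "netapp_server_hostname": "host1",
    "netapp_server_port": "80",
    "netapp_vserver": "vserver1",
    "netapp_lun_space_reservation": "enabled",
}))
```

Each semicolon-separated backend would yield one such section, giving the worker multiple `volume_backend_name` targets from a single secret.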