Kubespray docs update (#505)
- Update inventory usage with example yaml and builtin inventory directory
- Generalizing cloud environment names
- Remove no longer needed GENESTACK_PRODUCT environment
BjoernT authored Oct 23, 2024
1 parent 340b00d commit b7f6bba
Showing 13 changed files with 66 additions and 189 deletions.
3 changes: 0 additions & 3 deletions ansible/playbooks/infra-deploy.yaml
@@ -507,7 +507,6 @@
os_keypair_name: "{{ os_network_name }}-keypair"
# ansible_ssh_common_args: "-F {{ lookup('env', 'HOME') }}/.ssh/{{ os_keypair_name }}.config"
ansible_ssh_private_key_file: "{{ lookup('env', 'HOME') }}/.ssh/{{ os_keypair_name }}.key"
genestack_product: openstack-flex
tasks:
- name: Create ssh directory on jump host
ansible.builtin.file:
@@ -605,8 +604,6 @@
msg: "This will install ansible, collections, etc."
- name: Genestack bootstrap
command: /opt/genestack/bootstrap.sh
environment:
GENESTACK_PRODUCT: "{{ genestack_product }}"
- name: Source Genestack venv via .bashrc
ansible.builtin.lineinfile:
path: /root/.bashrc
13 changes: 4 additions & 9 deletions bootstrap.sh
@@ -21,10 +21,6 @@ cd "${BASEDIR}" || error "Could not change to ${BASEDIR}"

source scripts/lib/functions.sh

# Set GENESTACK_PRODUCT to 'genestack'
GENESTACK_PRODUCT="genestack"
export GENESTACK_PRODUCT

set -e

success "Environment variables:"
@@ -50,22 +46,21 @@ test -L "$GENESTACK_CONFIG" 2>&1 || mkdir -p "${GENESTACK_CONFIG}"

# Set config
test -f "$GENESTACK_CONFIG/provider" || echo "${K8S_PROVIDER}" > "${GENESTACK_CONFIG}/provider"
test -f "$GENESTACK_CONFIG/product" || echo "${GENESTACK_PRODUCT}" > "${GENESTACK_CONFIG}/product"
mkdir -p "$GENESTACK_CONFIG/inventory/group_vars" "${GENESTACK_CONFIG}/inventory/credentials"

# Copy default k8s config
PRODUCT_DIR="ansible/inventory/genestack"
if [ "$(find ${GENESTACK_CONFIG}/inventory -name \*.yaml -o -name \*.yml 2>/dev/null | wc -l)" -eq 0 ]; then
if [ "$(find "${GENESTACK_CONFIG}/inventory" -name \*.yaml -o -name \*.yml 2>/dev/null | wc -l)" -eq 0 ]; then
cp -r "${PRODUCT_DIR}"/* "${GENESTACK_CONFIG}/inventory"
fi

# Copy gateway-api example configs
test -d "$GENESTACK_CONFIG/gateway-api" || cp -a "${BASEDIR}/etc/gateway-api" "$GENESTACK_CONFIG"/

# Create venv and prepare Ansible
python3 -m venv ~/.venvs/genestack
~/.venvs/genestack/bin/pip install pip --upgrade
source ~/.venvs/genestack/bin/activate && success "Switched to venv ~/.venvs/genestack"
python3 -m venv "${HOME}/.venvs/genestack"
"${HOME}/.venvs/genestack/bin/pip" install pip --upgrade
source "${HOME}/.venvs/genestack/bin/activate" && success "Switched to venv ~/.venvs/genestack"
pip install -r "${BASEDIR}/requirements.txt" && success "Installed ansible package"
ansible-playbook "${BASEDIR}/scripts/get-ansible-collection-requirements.yml" \
-e collections_file="${ANSIBLE_COLLECTION_FILE}" \
15 changes: 8 additions & 7 deletions docs/adding-new-node.md
@@ -7,19 +7,20 @@ Let's assume we are adding one new worker node: `computegpu001.p40.example.com` a

1. Add the node to your ansible inventory file
```shell
vim /etc/genestack/inventory/openstack-flex-inventory.ini
vim /etc/genestack/inventory/inventory.yaml
```

2. Ensure the hostname is correctly set and the hosts file has a 127.0.0.1 entry
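A quick way to verify this, as a sketch only:
```shell
# Confirm the FQDN is set and that /etc/hosts resolves it locally
hostname -f
grep '^127.0.0.1' /etc/hosts
```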

3. Run scale.yml to add the node to your cluster
```shell
ansible-playbook -i /etc/genestack/inventory/openstack-flex-inventory.yaml scale.yml --limit computegpu001.p40.example.com --become
source /opt/genestack/scripts/genestack.rc
ansible-playbook scale.yml --limit compute-12481.rackerlabs.dev.local --become
```

Once step 3 completes successfully, validate that the node is up and running in the cluster:
```shell
kubectl get nodes | grep computegpu001.p40.example.com
kubectl get nodes | grep compute-12481.rackerlabs.dev.local
```

### PreferNoSchedule Taint
@@ -35,7 +36,7 @@ pods and the Nova VMs therein.
!!! tip "Setting this is a matter of architerural preference:"

```shell
kubectl taint nodes computegpu001.p40.example.com key1=value1:PreferNoSchedule
kubectl taint nodes compute-12481.rackerlabs.dev.local key1=value1:PreferNoSchedule
```
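Should the taint no longer be desired, it can be removed again with the trailing `-` syntax (same illustrative node name):

``` shell
# Remove the PreferNoSchedule taint from the node
kubectl taint nodes compute-12481.rackerlabs.dev.local key1=value1:PreferNoSchedule-
```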

## Adding the node in openstack
@@ -45,16 +46,16 @@ labels and annotations.

1. Export the nodes to add
```shell
export NODES='computegpu001.p40.example.com'
export NODES='compute-12481.rackerlabs.dev.local'
```

2. For compute nodes, add the following labels
```shell
# Label the openstack compute nodes
kubectl label node computegpu001.p40.example.com openstack-compute-node=enabled
kubectl label node compute-12481.rackerlabs.dev.local openstack-compute-node=enabled

# With OVN we need the compute nodes to be "network" nodes as well. While they will be configured for networking, they won't be gateways.
kubectl label node computegpu001.p40.example.com openstack-network-node=enabled
kubectl label node compute-12481.rackerlabs.dev.local openstack-network-node=enabled
```
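To confirm the labels were applied, a quick check could look like this (a sketch using the labels above):
```shell
# List nodes carrying the compute and network labels
kubectl get nodes -l openstack-compute-node=enabled
kubectl get nodes -l openstack-network-node=enabled
```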

3. Add the right annotations to the node
4 changes: 2 additions & 2 deletions docs/build-test-envs.md
@@ -123,10 +123,10 @@ The lab deployment playbook will build an environment suitable for running Genes

### SSH to lab

If you have not set your .ssh config do not forget to put in your path for your openstack-flex-keypair. Your Ip will be present after running the infra-deploy.yaml.
If you have not set up your .ssh config, do not forget to include the path to your openstack-keypair key. Your IP will be displayed after running the infra-deploy.yaml playbook.

``` shell
ssh -i /path/to/.ssh/openstack-flex-keypair.key [email protected]
ssh -i /path/to/.ssh/openstack-keypair.key [email protected]
```
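If you would rather not pass `-i` every time, a hypothetical `~/.ssh/config` entry could look like the following (host alias, IP, and key path are placeholders):

``` shell
# Add a host alias for the lab jump host
cat >> ~/.ssh/config <<'EOF'
Host genestack-lab
    HostName 203.0.113.10
    User ubuntu
    IdentityFile /path/to/.ssh/openstack-keypair.key
EOF
ssh genestack-lab
```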

5 changes: 1 addition & 4 deletions docs/genestack-getting-started.md
@@ -14,9 +14,6 @@ git clone --recurse-submodules -j4 https://github.com/rackerlabs/genestack /opt/

The basic setup requires ansible, the ansible collections, and helm to be installed in order to deploy Kubernetes and OpenStack Helm:

The environment variable `GENESTACK_PRODUCT` is used to bootstrap specific configurations and alters playbook handling.
It is persisted at /etc/genestack/product` for subsequent executions, it only has to be used once.

``` shell
/opt/genestack/bootstrap.sh
```
@@ -25,6 +22,6 @@ It is persisted at /etc/genestack/product` for subsequent executions, it only ha

If running this command with `sudo`, be sure to run with `-E`, e.g. `sudo -E /opt/genestack/bootstrap.sh`. This ensures your active environment is passed into the bootstrap command.

Once the bootstrap is completed the default Kubernetes provider will be configured inside `/etc/genestack/provider`
Once the bootstrap is completed, the default Kubernetes provider will be configured inside `/etc/genestack/provider`; it currently defaults to kubespray.

The ansible inventory is expected at `/etc/genestack/inventory`
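A quick sanity check after bootstrapping, assuming the defaults described above:

``` shell
# The provider file should contain the default provider (kubespray)
cat /etc/genestack/provider

# The inventory directory should be populated with the built-in example inventory
ls /etc/genestack/inventory
```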
7 changes: 6 additions & 1 deletion docs/k8s-config.md
@@ -1,7 +1,12 @@
# Retrieving the Kube Config

!!! note
This step is optional once the `setup-kubernetes.yml` playbook has been used to deploy Kubernetes

Once the environment is online, proceed to log in to the environment and begin the deployment normally. You'll find the launch node has everything needed, in the places it belongs, to bring the environment up.



## Install `kubectl`

Install the `kubectl` tool.
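One common way to install it on an Ubuntu launch node, in case a quick reference helps (the steps below in this document may use a different method):

``` shell
# Install kubectl via snap and confirm the client version
sudo snap install kubectl --classic
kubectl version --client
```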
@@ -34,7 +39,7 @@ Retrieve the kube config from our first controller.

``` shell
mkdir -p ~/.kube
rsync -e "ssh -F ${HOME}/.ssh/openstack-flex-keypair.config" \
rsync -e "ssh -F ${HOME}/.ssh/openstack-keypair.config" \
--rsync-path="sudo rsync" \
-avz [email protected]:/root/.kube/config "${HOME}/.kube/config"
```
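Once the config has been copied, a quick check that the cluster responds (a sketch):

``` shell
# Point kubectl at the retrieved config and list the nodes
export KUBECONFIG="${HOME}/.kube/config"
kubectl get nodes -o wide
```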
57 changes: 16 additions & 41 deletions docs/k8s-kubespray.md
@@ -2,10 +2,6 @@

Currently, kubespray is the only supported k8s provider and it is included as a submodule in the code base.

!!! info

Existing OpenStack Ansible inventory can be converted using the `/opt/genestack/scripts/convert_osa_inventory.py` script which provides a `hosts.yml`

### Before you Deploy

Kubespray will be using OVN for all of the network functions; as such, you will need to ensure your hosts are ready to receive the deployment at a low level.
@@ -42,30 +38,30 @@ you will need to prepare your networking infrastructure and basic storage layout

A default inventory file for kubespray is provided at `/etc/genestack/inventory` and must be modified.

Checkout the [openstack-flex/prod-inventory-example.yaml](https://github.com/rackerlabs/genestack/blob/main/ansible/inventory/openstack-flex/inventory.yaml.example) file for an example of a target environment.
Check out the [inventory.yaml.example](https://github.com/rackerlabs/genestack/blob/main/ansible/inventory/genestack/inventory.yaml.example) file for an example of a target environment.

!!! note

Before you deploy the Kubernetes cluster, you should define the `kube_override_hostname` option in your inventory. This variable sets the node name, which we want to be an FQDN. When you define the option, it should use the same suffix defined in our `cluster_name` variable.
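As a hypothetical sketch, a single host entry carrying that option might look like this in the inventory (host name, address, and domain are illustrative; consult the example inventory linked above for the authoritative layout):

``` yaml
all:
  vars:
    cluster_name: cluster.local
  hosts:
    compute-12481:
      ansible_host: 10.0.0.21
      kube_override_hostname: compute-12481.cluster.local
```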

However, any Kubespray compatible inventory will work with this deployment tooling. The official [Kubespray documentation](https://kubespray.io) can be used to better understand the inventory options and requirements. Within the `ansible/playbooks/inventory` directory there is a directory named `openstack-flex` and `openstack-enterprise`. These directories provide everything we need to run a successful Kubernetes environment for genestack at scale. The difference between **enterprise** and **flex** are just target environment types.
However, any Kubespray compatible inventory will work with this deployment tooling. The official [Kubespray documentation](https://kubespray.io) can be used to better understand the inventory options and requirements.

### Ensure systems have a proper FQDN Hostname

Before running the Kubernetes deployment, make sure that all hosts have a properly configured FQDN.

``` shell
source /opt/genestack/scripts/genestack.rc
ansible -i /etc/genestack/inventory/openstack-flex-inventory.ini -m shell -a 'hostnamectl set-hostname {{ inventory_hostname }}' --become all
ansible -i /etc/genestack/inventory/openstack-flex-inventory.ini -m shell -a "grep 127.0.0.1 /etc/hosts | grep -q {{ inventory_hostname }} || sed -i 's/^127.0.0.1.*/127.0.0.1 {{ inventory_hostname }} localhost.localdomain localhost/' /etc/hosts" --become all
ansible -m shell -a 'hostnamectl set-hostname {{ inventory_hostname }}' --become all
ansible -m shell -a "grep 127.0.0.1 /etc/hosts | grep -q {{ inventory_hostname }} || sed -i 's/^127.0.0.1.*/127.0.0.1 {{ inventory_hostname }} localhost.localdomain localhost/' /etc/hosts" --become all
```
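To verify the result across all hosts, an ad-hoc check such as the following can be used (same sourced environment as above):

``` shell
# Every host should report its FQDN
ansible -m command -a 'hostname -f' all
```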

!!! note

In the above command I'm assuming the use of `cluster.local`; this is the default **cluster_name** as defined in the group_vars k8s_cluster file. If you change that option, make sure to reset your domain name on your hosts accordingly.


The ansible inventory is expected at `/etc/genestack/inventory`
The ansible inventory is expected at `/etc/genestack/inventory` and is automatically loaded once `genestack.rc` is sourced.
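A simple way to confirm the inventory is being picked up after sourcing the rc file (a sketch):

``` shell
source /opt/genestack/scripts/genestack.rc
# With the environment set, ansible should resolve the inventory without -i
ansible-inventory --graph
```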

### Prepare hosts for installation

@@ -76,7 +72,7 @@ cd /opt/genestack/ansible/playbooks

!!! note

The RC file sets a number of environment variables that help ansible to run in a more easily to understand way.
The rc file sets a number of environment variables that help ansible run in a way that is easier to understand.

While the `ansible-playbook` command should work as is with the sourced environment variables, sometimes it's necessary to set some overrides on the command line.
The following example highlights a couple of overrides that are generally useful.
@@ -89,50 +85,29 @@ ansible-playbook host-setup.yml

#### Example host setup playbook with overrides

Confirm openstack-flex-inventory.yaml matches what is in /etc/genestack/inventory. If it does not match update the command to match the file names.
Confirm `inventory.yaml` matches what is in `/etc/genestack/inventory`. If it does not match, update the command to match the file names.

``` shell
source /opt/genestack/scripts/genestack.rc
# Example overriding things on the CLI
ansible-playbook host-setup.yml --inventory /etc/genestack/inventory/openstack-flex-inventory.ini \
--private-key ${HOME}/.ssh/openstack-flex-keypair.key
```

### Run the cluster deployment

This is used to deploy kubespray against infra on an OpenStack cloud. If you're deploying on baremetal you will need to setup an inventory that meets your environmental needs.

Change the directory to the kubespray submodule.

``` shell
cd /opt/genestack/submodules/kubespray
ansible-playbook host-setup.yml
```

Source your environment variables
The `--private-key` option can be used to instruct ansible to use a custom SSH key for the SSH connection:

``` shell
source /opt/genestack/scripts/genestack.rc
--private-key ${HOME}/.ssh/openstack-keypair.key
```

!!! note

The RC file sets a number of environment variables that help ansible to run in a more easy to understand way.

Once the inventory is updated and configuration altered (networking etc), the Kubernetes cluster can be initialized with

``` shell
ansible-playbook cluster.yml
```
### Run the cluster deployment

The cluster deployment playbook can also have overrides defined to augment how the playbook is executed.
Confirm openstack-flex-inventory.yaml matches what is in /etc/genestack/inventory. If it does not match update the command to match the file names.
This is used to deploy kubespray against infra on an OpenStack cloud. If you're deploying on bare metal, you will need to set up an inventory that meets your environmental needs.

The playbook `setup-kubernetes.yml` is used to invoke the selected provider's installation, label the nodes, and configure a kube config:

``` shell
ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.ini \
--private-key /home/ubuntu/.ssh/openstack-flex-keypair.key \
--user ubuntu \
--become \
cluster.yml
source /opt/genestack/scripts/genestack.rc
ansible-playbook setup-kubernetes.yml
```

!!! tip
5 changes: 3 additions & 2 deletions docs/k8s-labels.md
@@ -1,7 +1,8 @@
# Label all of the nodes in the environment

To use the K8S environment for OpenStack all of the nodes MUST be labeled. The following Labels will be used within your environment.
Make sure you label things accordingly.
The labeling of nodes is automated as part of the `setup-kubernetes.yml` playbook, based on ansible groups.
For reference, the use of k8s labels is defined as follows; both the automation and the documented deployment
steps build on top of the labels referenced here:

!!! note

24 changes: 12 additions & 12 deletions docs/multi-region-support.md
@@ -30,18 +30,18 @@ The structure may look something like:
!!! example
```
├── my-genestack-configs
│ ├── sjc
│ ├── region1
│ │ ├── inventory
│ │ │ ├── my-sjc-inventory.ini
│ │ │ ├── inventory.yaml
│ │ ├── helm-configs
│ │ │ ├── nova
│ │ │ │ ├── my-custom-sjc-nova-helm-overrides.yaml
│ ├── dfw
│ │ │ │ ├── region1-custom-nova-helm-overrides.yaml
│ ├── region2
│ │ ├── inventory
│ │ │ ├── my-dfw-inventory.ini
│ │ │ ├── inventory.yaml
│ │ ├── helm-configs
│ │ │ ├── nova
│ │ │ │ ├── my-custom-dfw-nova-helm-overrides.yaml
│ │ │ │ ├── region2-custom-nova-helm-overrides.yaml
└── .gitignore
```

@@ -68,15 +68,15 @@ For our example we just want to override the cpu_allocation as they are differen

Create the override files within the respective structure as noted above with the contents of:

!!! example "my-custom-sjc-nova-helm-overrides.yaml"
!!! example "region1-custom-nova-helm-overrides.yaml"
```
conf:
nova:
DEFAULT:
cpu_allocation_ratio: 8.0
```

!!! example "my-custom-dfw-nova-helm-overrides.yaml"
!!! example "region2-custom-nova-helm-overrides.yaml"
```
conf:
nova:
@@ -95,18 +95,18 @@

!!! example "symlink the repo"
``` shell
ln -s /opt/my-genestack-configs/sjc /etc/genestack
ln -s /opt/my-genestack-configs/region1 /etc/genestack
```
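A quick way to confirm the link points at the intended region (a sketch):

``` shell
# Should print the region directory, e.g. /opt/my-genestack-configs/region1
readlink -f /etc/genestack
```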

This will make our `/etc/genestack` directory look like:

!!! example "/etc/genestack/"
```
├── inventory
│ │ ├── my-sjc-inventory.ini
│ │ ├── inventory.yaml
├── helm-configs
│ ├── nova
│ │ ├── my-custom-sjc-nova-helm-overrides.yaml
│ │ ├── region1-custom-nova-helm-overrides.yaml
```

#### Running helm
Expand All @@ -127,7 +127,7 @@ helm upgrade --install nova ./nova \
--namespace=openstack \
--timeout 120m \
-f /etc/genestack/helm-configs/nova/nova-helm-overrides.yaml \
-f /etc/genestack/helm-configs/nova/my-custom-sjc-nova-helm-overrides.yaml \
-f /etc/genestack/helm-configs/nova/region1-custom-nova-helm-overrides.yaml \
--set conf.nova.neutron.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \
--set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \
--set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \